[
{
"msg_contents": "Hi,\n\nTLDR: this email describes a serialization failure that happens (as I\nunderstand it) due to too coarse a predicate lock granularity for the primary\nkey index.\n\nI have a concurrent testsuite that runs 14 test cases. Each test case\noperates on a disjoint set of records, doesn't retry transactions and is\nrun under the 'serializable' isolation level. The test data is small and likely\nfits within a single tuple page.\n\nWhen I finished the test suite I was surprised that PostgreSQL 14.5 returns\na serialization failure on every test suite run. I was even more surprised\nwhen I tested the suite against the current CockroachDB and didn't get\nserialization failures. Actually, I was able to reproduce RETRY_SERIALIZABLE\nerrors a couple of times on CockroachDB, but it required me to run the test\nsuite in a loop for more than half an hour.\n\nI started to investigate the test behavior with PostgreSQL with more\nsimplified and stripped-down code and found a serialization failure between two\nconcurrent `update_user` operations.\n\nThe test defines the following `Users` table:\n\nCREATE TABLE Users (\n> id UUID,\n> title VARCHAR(255),\n> first_name VARCHAR(40),\n> last_name VARCHAR(80) NOT NULL,\n> email VARCHAR(255) NOT NULL,\n> lower_email VARCHAR(255) GENERATED ALWAYS AS (lower(email)) STORED,\n> marketing_optin BOOLEAN,\n> mobile_phone VARCHAR(50),\n> phone VARCHAR(50),\n> phone_ext VARCHAR(40),\n> is_contact BOOLEAN DEFAULT false NOT NULL,\n> unlinked_link_ids UUID[],\n\n> CONSTRAINT unique_user_email UNIQUE(lower_email),\n> PRIMARY KEY (id)\n> );\n\nConcurrent `update_user` operations run the UPDATE query to change the user\nemail to a unique value:\n\nUPDATE Users\n> SET\n> title = CASE WHEN false = true THEN 'foo' ELSE title END,\n> first_name = CASE WHEN false = true THEN 'foo' ELSE first_name END,\n> last_name = CASE WHEN false = true THEN 'foo' ELSE last_name END,\n> email = CASE WHEN true = true THEN 'email2' ELSE email END,\n> marketing_optin = CASE WHEN false 
= true THEN true ELSE\n> marketing_optin END,\n> mobile_phone = CASE WHEN false = true THEN 'foo' ELSE mobile_phone END,\n> phone = CASE WHEN false = true THEN 'foo' ELSE phone END,\n> phone_ext = CASE WHEN false = true THEN 'foo' ELSE phone_ext END\n> WHERE id = '018629fd-7b28-743c-8647-b6321c166d46';\n>\n\nI use the following helper view to monitor locks:\n\n> CREATE VIEW locks_v AS\n> SELECT pid,\n> virtualtransaction,\n> locktype,\n> CASE locktype\n> WHEN 'relation' THEN relation::regclass::text\n> WHEN 'virtualxid' THEN virtualxid::text\n> WHEN 'transactionid' THEN transactionid::text\n> WHEN 'tuple' THEN\n> relation::regclass::text||':'||page::text||':'||tuple::text\n> WHEN 'page' THEN relation::regclass::text||':'||page::text\n> END AS lockid,\n> mode,\n> granted\n> FROM pg_locks;\n\nWhen the test Users table has only a few records, the query uses a\nsequential scan, and the serialization failure is reproducible without inserting\nsleeps before the `update_user` transaction commit.\n\nThis is caused by relation-level predicate locks on the Users table:\n\n> select * from locks_v;\n> pid | virtualtransaction | locktype | lockid |\n> mode | granted\n>\n> ------+--------------------+---------------+-------------------+------------------+---------\n> 3676 | 5/2444 | relation | unique_user_email |\n> RowExclusiveLock | t\n> 3676 | 5/2444 | relation | users_pkey |\n> RowExclusiveLock | t\n> 3676 | 5/2444 | relation | users |\n> RowExclusiveLock | t\n> 3676 | 5/2444 | virtualxid | 5/2444 |\n> ExclusiveLock | t\n> 3737 | 4/13470 | relation | pg_locks |\n> AccessShareLock | t\n> 3737 | 4/13470 | relation | locks_v |\n> AccessShareLock | t\n> 3737 | 4/13470 | virtualxid | 4/13470 |\n> ExclusiveLock | t\n> 3669 | 3/17334 | relation | unique_user_email |\n> RowExclusiveLock | t\n> 3669 | 3/17334 | relation | users_pkey |\n> RowExclusiveLock | t\n> 3669 | 3/17334 | relation | users |\n> RowExclusiveLock | t\n> 3669 | 3/17334 | virtualxid | 3/17334 |\n> ExclusiveLock | t\n> 3676 | 
5/2444 | transactionid | 6571 |\n> ExclusiveLock | t\n> 3669 | 3/17334 | transactionid | 6570 |\n> ExclusiveLock | t\n> 3676 | 5/2444 | relation | users |\n> SIReadLock | t\n> 3669 | 3/17334 | relation | users |\n> SIReadLock | t\n> (15 rows)\n>\n\nIf I add ballast data to the Users table (1000 records), the cost optimizer\nswitches to an index scan and it's hard to reproduce the issue for two\nconcurrent `update_user` operations without sleeps. After adding long\nsleeps after the UPDATE query and before commit, I could see page-level\npredicate locks for the primary key index users_pkey:\n\nselect * from locks_v;\n> pid | virtualtransaction | locktype | lockid | mode\n> | granted\n>\n> -----+--------------------+---------------+-------------------+------------------+---------\n> 371 | 6/523 | relation | unique_user_email |\n> RowExclusiveLock | t\n> 371 | 6/523 | relation | users_pkey |\n> RowExclusiveLock | t\n> 371 | 6/523 | relation | users |\n> RowExclusiveLock | t\n> 371 | 6/523 | virtualxid | 6/523 |\n> ExclusiveLock | t\n> 381 | 14/215 | relation | unique_user_email |\n> RowExclusiveLock | t\n> 381 | 14/215 | relation | users_pkey |\n> RowExclusiveLock | t\n> 381 | 14/215 | relation | users |\n> RowExclusiveLock | t\n> 381 | 14/215 | virtualxid | 14/215 |\n> ExclusiveLock | t\n> 350 | 4/885 | relation | pg_locks |\n> AccessShareLock | t\n> 350 | 4/885 | relation | locks_v |\n> AccessShareLock | t\n> 350 | 4/885 | virtualxid | 4/885 |\n> ExclusiveLock | t\n> 371 | 6/523 | transactionid | 1439 |\n> ExclusiveLock | t\n> 381 | 14/215 | transactionid | 1431 |\n> ExclusiveLock | t\n> 381 | 14/215 | page | users_pkey:5 | SIReadLock\n> | t\n> 371 | 6/523 | page | users_pkey:5 | SIReadLock\n> | t\n> (15 rows)\n>\n\nWith sleeps, the serialization failure is reproduced on each run.\n\nI started to read more about the SSI implementation in PostgreSQL. 
The article\nhttps://arxiv.org/pdf/1208.4179.pdf mentions that\n\n> Currently, locks on B+-tree indexes are acquired at page granularity; we\n> intend to refine this to next-key locking [16] in a future release.\n>\n> [16] C. Mohan. ARIES/KVL: A key-value locking method for concurrency\n> control of multiaction transactions operating on B-tree indexes. In VLDB,\n> pages 392–405, 1990.\n\nMy question follows:\n\nDoes the current PostgreSQL release support B+ tree index predicate locks\nmore granular than page-level locks?\n\nWith kindest regards, Rinat Shigapov",
"msg_date": "Tue, 7 Feb 2023 16:23:54 +0600",
"msg_from": "Rinat Shigapov <rinatshigapov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Too coarse predicate locks granularity for B+ tree indexes"
},
{
"msg_contents": "On Tue, 2023-02-07 at 16:23 +0600, Rinat Shigapov wrote:\n> I have a concurrent testsuite that runs 14 test cases. Each test case operates\n> on a disjoint set of records, doesn't retry transactions and is run under\n> 'serializable' isolation level. The test data is small and likely fits within\n> a single tuple page.\n> \n> When I finished the test suite I was surprised that PostgreSQL 14.5 returns\n> serialization failure on every test suite run.\n\nThis is not a question for the hackers list; redirecting to general.\n\nThat behavior sounds perfectly normal to me: if everything is in a single\npage, PostgreSQL probably won't use an index scan. With a sequential scan,\nthe predicate lock will be on the whole table. So you should expect\nserialization failures. This is well documented.\n\nPerhaps you should use a more realistic test case with a reasonable\namount of data.\n\nYours,\nLaurenz Albe\n",
"msg_date": "Tue, 07 Feb 2023 11:29:50 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Too coarse predicate locks granularity for B+ tree indexes"
},
{
"msg_contents": "Thank you for your prompt reply!\n\nI've mentioned that I've generated ballast data to make the cost optimizer\nswitch to page-level locks.\n\nBut my question is about finer-grained (less-than-page) predicate\nlocks for indices. With page-level locks I could still get serialization\nfailures if I add more queries (or emulate them with sleeps) to the\ntransaction with the UPDATE Users query.\n\nBelow I describe the problem again for pgsql-general:\n\nI have a concurrent testsuite that runs 14 test cases. Each test case\noperates on a disjoint set of records, doesn't retry transactions and is\nrun under the 'serializable' isolation level. The test data is small and likely\nfits within a single tuple page.\n\nWhen I finished the test suite I was surprised that PostgreSQL 14.5 returns\na serialization failure on every test suite run. I was even more surprised\nwhen I tested the suite against the current CockroachDB and didn't get\nserialization failures. Actually, I was able to reproduce RETRY_SERIALIZABLE\nerrors a couple of times on CockroachDB, but it required me to run the test\nsuite in a loop for more than half an hour.\n\nI started to investigate the test behavior with PostgreSQL with more\nsimplified and stripped-down code and found a serialization failure between two\nconcurrent `update_user` operations.\n\nThe test defines the following `Users` table:\n\nCREATE TABLE Users (\n> id UUID,\n> title VARCHAR(255),\n> first_name VARCHAR(40),\n> last_name VARCHAR(80) NOT NULL,\n> email VARCHAR(255) NOT NULL,\n> lower_email VARCHAR(255) GENERATED ALWAYS AS (lower(email)) STORED,\n> marketing_optin BOOLEAN,\n> mobile_phone VARCHAR(50),\n> phone VARCHAR(50),\n> phone_ext VARCHAR(40),\n> is_contact BOOLEAN DEFAULT false NOT NULL,\n> unlinked_link_ids UUID[],\n\n> CONSTRAINT unique_user_email UNIQUE(lower_email),\n> PRIMARY KEY (id)\n> );\n\nConcurrent `update_user` operations run the UPDATE query to change the user\nemail to a unique value:\n\nUPDATE Users\n> SET\n> 
title = CASE WHEN false = true THEN 'foo' ELSE title END,\n> first_name = CASE WHEN false = true THEN 'foo' ELSE first_name END,\n> last_name = CASE WHEN false = true THEN 'foo' ELSE last_name END,\n> email = CASE WHEN true = true THEN 'email2' ELSE email END,\n> marketing_optin = CASE WHEN false = true THEN true ELSE\n> marketing_optin END,\n> mobile_phone = CASE WHEN false = true THEN 'foo' ELSE mobile_phone END,\n> phone = CASE WHEN false = true THEN 'foo' ELSE phone END,\n> phone_ext = CASE WHEN false = true THEN 'foo' ELSE phone_ext END\n> WHERE id = '018629fd-7b28-743c-8647-b6321c166d46';\n>\n\nI use the following helper view to monitor locks:\n\n> CREATE VIEW locks_v AS\n> SELECT pid,\n> virtualtransaction,\n> locktype,\n> CASE locktype\n> WHEN 'relation' THEN relation::regclass::text\n> WHEN 'virtualxid' THEN virtualxid::text\n> WHEN 'transactionid' THEN transactionid::text\n> WHEN 'tuple' THEN\n> relation::regclass::text||':'||page::text||':'||tuple::text\n> WHEN 'page' THEN relation::regclass::text||':'||page::text\n> END AS lockid,\n> mode,\n> granted\n> FROM pg_locks;\n\nWhen the test Users table has only a few records, the query uses a\nsequential scan, and the serialization failure is reproducible without inserting\nsleeps before the `update_user` transaction commit.\n\nThis is caused by relation-level predicate locks on the Users table:\n\n> select * from locks_v;\n> pid | virtualtransaction | locktype | lockid |\n> mode | granted\n>\n> ------+--------------------+---------------+-------------------+------------------+---------\n> 3676 | 5/2444 | relation | unique_user_email |\n> RowExclusiveLock | t\n> 3676 | 5/2444 | relation | users_pkey |\n> RowExclusiveLock | t\n> 3676 | 5/2444 | relation | users |\n> RowExclusiveLock | t\n> 3676 | 5/2444 | virtualxid | 5/2444 |\n> ExclusiveLock | t\n> 3737 | 4/13470 | relation | pg_locks |\n> AccessShareLock | t\n> 3737 | 4/13470 | relation | locks_v |\n> AccessShareLock | t\n> 3737 | 4/13470 | virtualxid | 4/13470 |\n> ExclusiveLock | t\n> 3669 | 3/17334 | relation | unique_user_email |\n> RowExclusiveLock | t\n> 3669 | 3/17334 | relation | users_pkey |\n> RowExclusiveLock | t\n> 3669 | 3/17334 | relation | users |\n> RowExclusiveLock | t\n> 3669 | 3/17334 | virtualxid | 3/17334 |\n> ExclusiveLock | t\n> 3676 | 5/2444 | transactionid | 6571 |\n> ExclusiveLock | t\n> 3669 | 3/17334 | transactionid | 6570 |\n> ExclusiveLock | t\n> 3676 | 5/2444 | relation | users |\n> SIReadLock | t\n> 3669 | 3/17334 | relation | users |\n> SIReadLock | t\n> (15 rows)\n>\n\nIf I add ballast data to the Users table (1000 records), the cost optimizer\nswitches to an index scan and it's hard to reproduce the issue for two\nconcurrent `update_user` operations without sleeps. After adding long\nsleeps after the UPDATE query and before commit, I could see page-level\npredicate locks for the primary key index users_pkey:\n\nselect * from locks_v;\n> pid | virtualtransaction | locktype | lockid | mode\n> | granted\n>\n> -----+--------------------+---------------+-------------------+------------------+---------\n> 371 | 6/523 | relation | unique_user_email |\n> RowExclusiveLock | t\n> 371 | 6/523 | relation | users_pkey |\n> RowExclusiveLock | t\n> 371 | 6/523 | relation | users |\n> RowExclusiveLock | t\n> 371 | 6/523 | virtualxid | 6/523 |\n> ExclusiveLock | t\n> 381 | 14/215 | relation | unique_user_email |\n> RowExclusiveLock | t\n> 381 | 14/215 | relation | users_pkey |\n> RowExclusiveLock | t\n> 381 | 14/215 | relation | users |\n> RowExclusiveLock | t\n> 381 | 14/215 | virtualxid | 14/215 |\n> ExclusiveLock | t\n> 350 | 4/885 | relation | pg_locks |\n> AccessShareLock | t\n> 350 | 4/885 | relation | locks_v |\n> AccessShareLock | t\n> 350 | 4/885 | virtualxid | 4/885 |\n> ExclusiveLock | t\n> 371 | 6/523 | transactionid | 1439 |\n> ExclusiveLock | t\n> 381 | 14/215 | transactionid | 1431 |\n> ExclusiveLock | t\n> 381 | 14/215 | page | users_pkey:5 | SIReadLock\n> | t\n> 371 | 6/523 | page | users_pkey:5 | 
SIReadLock\n> | t\n> (15 rows)\n>\n\nWith sleeps, the serialization failure is reproduced on each run.\n\nI started to read more about the SSI implementation in PostgreSQL. The article\nhttps://arxiv.org/pdf/1208.4179.pdf mentions that\n\n> Currently, locks on B+-tree indexes are acquired at page granularity; we\n> intend to refine this to next-key locking [16] in a future release.\n>\n> [16] C. Mohan. ARIES/KVL: A key-value locking method for concurrency\n> control of multiaction transactions operating on B-tree indexes. In VLDB,\n> pages 392–405, 1990.\n\nMy question follows:\n\nDoes the current PostgreSQL release support B+ tree index predicate locks\nmore granular than page-level locks?\n\nWith kindest regards, Rinat Shigapov\n\n\nOn Tue, 7 Feb 2023 at 16:29, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n> On Tue, 2023-02-07 at 16:23 +0600, Rinat Shigapov wrote:\n> > I have a concurrent testsuite that runs 14 test cases. Each test case\n> operates\n> > on a disjoint set of records, doesn't retry transactions and is run under\n> > 'serializable' isolation level. The test data is small and likely fits\n> within\n> > a single tuple page.\n> >\n> > When I finished the test suite I was surprised that PostgreSQL 14.5\n> returns\n> > serialization failure on every test suite run.\n>\n> This is not a question for the hackers list; redirecting to general.\n>\n> That behavior sounds perfectly normal to me: if everything is in a single\n> page, PostgreSQL probably won't use an index scan. With a sequential scan,\n> the predicate lock will be on the whole table. So you should expect\n> serialization failures. This is well documented.\n>\n> Perhaps you should use a more realistic test case with a reasonable\n> amount of data.\n>\n> Yours,\n> Laurenz Albe\n>",
"msg_date": "Tue, 7 Feb 2023 17:08:26 +0600",
"msg_from": "Rinat Shigapov <rinatshigapov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Too coarse predicate locks granularity for B+ tree indexes"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 11:24 PM Rinat Shigapov <rinatshigapov@gmail.com> wrote:\n> Does the current PostgreSQL release support B+ tree index predicate locks more granular then page-level locks?\n\nNo. I tried to follow some breadcrumbs left by Kevin and Dan that\nshould allow unique index scans that find a match to skip the btree\npage lock, though, and p-lock just the heap tuple. If you like\nhalf-baked experimental code, see the v4-0002 patch in this thread,\nwhere I took some shortcuts (jamming stuff that should be in the\nplanner down into the executor) for a proof-of-concept:\n\nhttps://www.postgresql.org/message-id/flat/CAEepm%3D2GK3FVdnt5V3d%2Bh9njWipCv_fNL%3DwjxyUhzsF%3D0PcbNg%40mail.gmail.com\n\nWith that approach, if it *doesn't* find a match, then you're back to\nhaving to p-lock the whole index page to represent the \"gap\", so that\nyou can conflict with anyone who tries to insert a matching value\nlater. I believe the next-key approach would allow for finer grained\ngap-locks (haven't studied that myself), but that's a secondary\nproblem; the primary problem (it seems to me) is getting rid of index\nlocks completely in the (common?) case that you have a qualifying\nmatch.\n\n\n",
"msg_date": "Wed, 8 Feb 2023 00:10:37 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Too coarse predicate locks granularity for B+ tree indexes"
},
{
"msg_contents": "Thomas, thank you for the details!\n\nHave you kept the branch that you used to generate the patch? Which commit\nshould the patch apply to?\n\nWith kindest regards, Rinat Shigapov\n\n\nOn Tue, 7 Feb 2023 at 17:11, Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Tue, Feb 7, 2023 at 11:24 PM Rinat Shigapov <rinatshigapov@gmail.com>\n> wrote:\n> > Does the current PostgreSQL release support B+ tree index predicate\n> locks more granular than page-level locks?\n>\n> No. I tried to follow some breadcrumbs left by Kevin and Dan that\n> should allow unique index scans that find a match to skip the btree\n> page lock, though, and p-lock just the heap tuple. If you like\n> half-baked experimental code, see the v4-0002 patch in this thread,\n> where I took some shortcuts (jamming stuff that should be in the\n> planner down into the executor) for a proof-of-concept:\n>\n> https://www.postgresql.org/message-id/flat/CAEepm%3D2GK3FVdnt5V3d%2Bh9njWipCv_fNL%3DwjxyUhzsF%3D0PcbNg%40mail.gmail.com\n>\n> With that approach, if it *doesn't* find a match, then you're back to\n> having to p-lock the whole index page to represent the \"gap\", so that\n> you can conflict with anyone who tries to insert a matching value\n> later. I believe the next-key approach would allow for finer grained\n> gap-locks (haven't studied that myself), but that's a secondary\n> problem; the primary problem (it seems to me) is getting rid of index\n> locks completely in the (common?) case that you have a qualifying\n> match.\n>",
"msg_date": "Tue, 7 Feb 2023 18:00:48 +0600",
"msg_from": "Rinat Shigapov <rinatshigapov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Too coarse predicate locks granularity for B+ tree indexes"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 4:01 AM Rinat Shigapov <rinatshigapov@gmail.com> wrote:\n>\n> Thomas, thank you for the details!\n>\n> Have you kept the branch that you used to generate the patch? Which commit should the patch apply to?\n>\n\nYou can try something like\ngit checkout 'master@{2018-05-13 13:37:00}'\nto get a commit by date from rev-parse.\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Tue, 7 Feb 2023 08:24:44 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Too coarse predicate locks granularity for B+ tree indexes"
},
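A caveat on the `master@{2018-05-13 13:37:00}` tip above: the `branch@{date}` syntax resolves through the *local reflog*, so it only works in a clone whose reflog reaches back that far; on a fresh clone of postgres.git it does not. A hedged shell sketch (using a throwaway repository with invented dates and messages) shows `git rev-list -1 --before`, which walks the real commit history and therefore works on any clone:

```shell
# Pick a commit by date without relying on the local reflog.
# 'git checkout master@{2018-05-13 13:37:00}' consults the reflog, which a
# fresh clone doesn't have that far back; 'git rev-list -1 --before' walks
# the actual commit history, so it works on any clone.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo v1 > f.txt && git add f.txt
GIT_AUTHOR_DATE=2018-01-01T00:00:00 GIT_COMMITTER_DATE=2018-01-01T00:00:00 \
    git commit -q -m "old commit"
old=$(git rev-parse HEAD)
echo v2 > f.txt
GIT_AUTHOR_DATE=2019-01-01T00:00:00 GIT_COMMITTER_DATE=2019-01-01T00:00:00 \
    git commit -q -am "new commit"
# Newest commit at or before the target date:
picked=$(git rev-list -1 --before="2018-05-13 13:37:00" HEAD)
git checkout -q "$picked"
echo "checked out by date: $picked"
```

In a real checkout of the PostgreSQL repository, the same `rev-list` call against `master` locates the commit the 2018 patch was generated on.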
{
"msg_contents": "On Wed, Feb 8, 2023 at 5:25 AM Andrey Borodin <amborodin86@gmail.com> wrote:\n> On Tue, Feb 7, 2023 at 4:01 AM Rinat Shigapov <rinatshigapov@gmail.com> wrote:\n> > Thomas, thank you for the details!\n> >\n> > Have you kept the branch that you used to generate the patch? Which commit should the patch apply to?\n>\n> You can try something like\n> git checkout 'master@{2018-05-13 13:37:00}'\n> to get a commit by date from rev-parse.\n\nI don't have time to work on this currently but if Rinat or others\nwant to look into it... maybe I should rebase that experiment on top\nof current master. Here's the branch:\n\nhttps://github.com/macdice/postgres/tree/ssi-index-locking-refinements\n\n\n",
"msg_date": "Wed, 8 Feb 2023 10:44:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Too coarse predicate locks granularity for B+ tree indexes"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 10:44 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Feb 8, 2023 at 5:25 AM Andrey Borodin <amborodin86@gmail.com> wrote:\n> > On Tue, Feb 7, 2023 at 4:01 AM Rinat Shigapov <rinatshigapov@gmail.com> wrote:\n> > > Thomas, thank you for the details!\n> > >\n> > > Have you kept the branch that you used to generate the patch? Which commit should the patch apply to?\n> >\n> > You can try something like\n> > git checkout 'master@{2018-05-13 13:37:00}'\n> > to get a commit by date from rev-parse.\n>\n> I don't have time to work on this currently but if Rinat or others\n> want to look into it... maybe I should rebase that experiment on top\n> of current master. Here's the branch:\n>\n> https://github.com/macdice/postgres/tree/ssi-index-locking-refinements\n\nErm, I guess I should also post the rebased patches here, for the\nmailing list archives.",
"msg_date": "Wed, 8 Feb 2023 10:51:08 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Too coarse predicate locks granularity for B+ tree indexes"
}
] |
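The thread above attributes spurious serialization failures to page-granularity predicate locks on B-tree indexes. A toy Python model (not PostgreSQL internals; the page size and lock representation are invented for illustration) shows why a page-level lock produces false read-write conflicts between transactions that touch disjoint rows on the same index page, while the heap-tuple-only lock Thomas describes for matched unique scans does not:

```python
# Toy model of predicate-lock granularity (NOT PostgreSQL's implementation).
# Each transaction records predicate locks; a writer conflicts with any
# reader whose lock covers the written key.

PAGE_SIZE = 100  # keys per simulated index page

def page_of(key):
    return key // PAGE_SIZE

class Txn:
    def __init__(self, granularity):
        self.granularity = granularity  # "page" or "tuple"
        self.locks = set()

    def read(self, key):
        # Under page granularity the whole index page is locked;
        # under tuple granularity only the matched tuple is.
        if self.granularity == "page":
            self.locks.add(("page", page_of(key)))
        else:
            self.locks.add(("tuple", key))

    def conflicts_with_write(self, key):
        return (("page", page_of(key)) in self.locks
                or ("tuple", key) in self.locks)

# Two transactions read key 10; another session writes key 11,
# a *different* key that happens to share the same index page.
t1 = Txn("page"); t1.read(10)
t2 = Txn("tuple"); t2.read(10)
print(t1.conflicts_with_write(11))  # True  -> false-positive conflict
print(t2.conflicts_with_write(11))  # False -> disjoint rows don't conflict
```

Under `page` granularity any write to the locked page conflicts; under `tuple` granularity only a write to the locked key does, which is why refining the lock granularity removes the false positives the original test suite kept hitting.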
[
{
"msg_contents": "Hi OSS Community,\nWe just wanted to confirm when the TAG will be created for the current FEB minor release as we could not find the TAG for none of the minor versions,\nbelow is the screen shot for the some of the minor versions.\n\n[cid:image001.png@01D93B13.7BF82E20]\nCould you please confirm when we can expect the TAG created for all minor versions?\n\nThanks and Regards,\nSujit Rathod\nSr. Application Developer\nFUJITSU CONSULTING INDIA\nMobile: +91 9730487531",
"msg_date": "Tue, 7 Feb 2023 11:14:48 +0000",
"msg_from": "\"Sujit.Rathod@fujitsu.com\" <Sujit.Rathod@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Missing TAG for FEB (current) Minor Version Release"
},
{
"msg_contents": "On 2023-Feb-07, Sujit.Rathod@fujitsu.com wrote:\n\n> Hi OSS Community,\n> We just wanted to confirm when the TAG will be created for the current FEB minor release as we could not find the TAG for none of the minor versions,\n> below is the screen shot for the some of the minor versions.\n\nYes, it will be created sometime this week, most likely today.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 7 Feb 2023 13:25:47 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Missing TAG for FEB (current) Minor Version Release"
},
{
"msg_contents": "On Tue, Feb 07, 2023 at 11:14:48AM +0000, Sujit.Rathod@fujitsu.com wrote:\n> Hi OSS Community,\n> We just wanted to confirm when the TAG will be created for the current FEB minor release as we could not find the TAG for none of the minor versions,\n> below is the screen shot for the some of the minor versions.\n> \n> [cid:image001.png@01D93B13.7BF82E20]\n> Could you please confirm when we can expect the TAG created for all minor versions?\n\nYou might be interested to read this earlier question:\nhttps://www.postgresql.org/message-id/flat/2e5676ba-e579-09a5-6f3a-d68208052654%40captnemo.in\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 7 Feb 2023 20:22:52 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing TAG for FEB (current) Minor Version Release"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Tue, Feb 07, 2023 at 11:14:48AM +0000, Sujit.Rathod@fujitsu.com wrote:\n>> Could you please confirm when we can expect the TAG created for all minor versions?\n\n> You might be interested to read this earlier question:\n> https://www.postgresql.org/message-id/flat/2e5676ba-e579-09a5-6f3a-d68208052654%40captnemo.in\n\nFYI, I pushed the tags about four hours ago, following our customary\nschedule.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Feb 2023 22:38:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Missing TAG for FEB (current) Minor Version Release"
}
] |
[
{
"msg_contents": "Hi,\n\n-- commit 9d2d9728b8d546434aade4f9667a59666588edd6\n> Author: Michael Paquier <michael@paquier.xyz>\n> Date: Thu Jan 26 12:23:16 2023 +0900\n> \n> Make auto_explain print the query identifier in verbose mode\n> ..(snip)..\n> While looking at the area, I have noticed that more consolidation\n> between EXPLAIN and auto_explain would be in order for the logging of\n> the plan duration and the buffer usage. This refactoring is left as a\n> future change.\n\nI'm working on this now.\nAttached a PoC patch which enables auto_explain to log plan duration and \nbuffer usage on planning phase.\nLast 3 lines are added by this patch:\n\n ```\n=# set auto_explain.log_min_duration = 0;\n=# set auto_explain.log_verbose = on;\n=# set auto_explain.log_analyze = on;\n=# select * from pg_class;\n\nLOG: 00000: duration: 6.774 ms plan:\n Query Text: select * from pg_class;\n Seq Scan on pg_catalog.pg_class (cost=0.00..18.12 rows=412 \nwidth=273) (actual time=0.009..0.231 rows=412 loops=1)\n Output: oid, relname, relnamespace, reltype, reloftype, \nrelowner, relam, relfilenode, reltablespace, relpages, reltuples, \nrelallvisible, reltoastrelid, relhasindex, relisshared, relpersistence, \nrelkind, relnatts, relchecks, relhasrules, relhastriggers, \nrelhassubclass, relrowsecurity, relforcerowsecurity, relispopulated, \nrelreplident, relispartition, relrewrite, relfrozenxid, relminmxid, \nrelacl, reloptions, relpartbound\n Buffers: shared hit=14\n Query Identifier: 8034096446570639838\n Planning\n Buffers: shared hit=120\n Planning Time: 3.908 ms\n ```\n\nIt adds a planner hook to track the plan duration and buffer usage for \nplanning.\nI'm considering the following points and any comments are welcome:\n\n- Plan duration and buffer usage are saved on PlannedStmt. 
Judging from how totaltime \nis handled in QueryDesc, adding elements for extensions is not \nprohibited, but I'm wondering whether it's OK to add them in this case.\n- Just as pg_stat_statements made it possible to add planner information \nin v13, it may be useful for auto_explain to log planner-phase \ninformation, especially plan duration. However, I am not sure to what \nextent information about the buffers used in the plan phase would be \nuseful.\n- Plan duration and buffer usage may differ from the output of the EXPLAIN \ncommand, since EXPLAIN includes pg_plan_query() but the planner hook \ndoesn't.\npg_plan_query() does things for log_planner_stats, debugging and tracing.\n- (Future work) Log output concerning buffers should be toggled on/off \nby auto_explain.log_buffers. Log output concerning planning should be \ntoggled on/off by a new GUC, something like auto_explain.track_planning.\n\n\nWhat do you think?\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Tue, 07 Feb 2023 22:02:00 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Allow auto_explain to log plan duration and buffer usage"
}
] |
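The patch described above wraps the planner with a hook that times planning and attaches the result to the PlannedStmt so it can be logged later. As a rough analogy only (Python instead of C; the function names here are invented and are not PostgreSQL's API), the instrumentation pattern looks like this:

```python
import time

# Analogy for the planner-hook instrumentation the patch describes:
# wrap the planner, time it, and attach the measurement to the
# "planned statement" so a logging hook can report it afterwards.

def planner(query):
    # Stand-in for standard_planner(); returns a fake "PlannedStmt".
    return {"query": query, "plan": f"Seq Scan for {query}"}

def instrumented_planner(query):
    start = time.perf_counter()
    stmt = planner(query)
    # The real patch stores these on PlannedStmt; here a dict key suffices.
    stmt["planning_ms"] = (time.perf_counter() - start) * 1000.0
    return stmt

stmt = instrumented_planner("select * from pg_class")
print(sorted(stmt))  # ['plan', 'planning_ms', 'query']
```

The open question raised in the mail, whether it is acceptable to hang such extension-visible fields off PlannedStmt itself, is exactly the part this analogy glosses over.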
[
{
"msg_contents": "Hi,\n\nWhile working on [1], I was looking for a quick way to tell if a WAL\nrecord is present in the WAL buffers array without scanning but I\ncouldn't find one. Hence, I put up a patch that basically tracks the\noldest initialized WAL buffer page, named OldestInitializedPage, in\nXLogCtl. With OldestInitializedPage, we can easily illustrate WAL\nbuffers array properties:\n\n1) At any given point of time, pages in the WAL buffers array are\nsorted in an ascending order from OldestInitializedPage till\nInitializedUpTo. Note that we verify this property for assert-only\nbuilds, see IsXLogBuffersArraySorted() in the patch for more details.\n\n2) OldestInitializedPage is monotonically increasing (by virtue of how\npostgres generates WAL records), that is, its value never decreases.\nThis property lets someone read its value without a lock. There's no\nproblem even if its value is slightly stale i.e. concurrently being\nupdated. One can still use it for finding if a given WAL record is\navailable in WAL buffers. At worst, one might get false positives\n(i.e. OldestInitializedPage may tell that the WAL record is available\nin WAL buffers, but when one actually looks at it, it isn't really\navailable). This is more efficient and performant than acquiring a\nlock for reading. 
Note that we may not need a lock to read\nOldestInitializedPage but we need to update it holding\nWALBufMappingLock.\n\n3) One can start traversing WAL buffers from OldestInitializedPage\ntill InitializedUpTo to list out all valid WAL records and stats, and\nexpose them via SQL-callable functions to users, for instance, as\npg_walinspect functions.\n\n4) WAL buffers array is inherently organized as a circular, sorted and\nrotated array with OldestInitializedPage as pivot/first element of the\narray with the property where LSN of previous buffer page (if valid)\nis greater than OldestInitializedPage and LSN of the next buffer page\n(if\nvalid) is greater than OldestInitializedPage.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CALj2ACXKKK=wbiG5_t6dGao5GoecMwRkhr7GjVBM_jg54+Na=Q@mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 7 Feb 2023 19:30:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Track Oldest Initialized WAL Buffer Page"
},
{
"msg_contents": "On Tue, Feb 07, 2023 at 07:30:00PM +0530, Bharath Rupireddy wrote:\n> +\t\t/*\n> +\t\t * Try updating oldest initialized XLog buffer page.\n> +\t\t *\n> +\t\t * Update it if we are initializing an XLog buffer page for the first\n> +\t\t * time or if XLog buffers are full and we are wrapping around.\n> +\t\t */\n> +\t\tif (XLogRecPtrIsInvalid(XLogCtl->OldestInitializedPage) ||\n> +\t\t\t(!XLogRecPtrIsInvalid(XLogCtl->OldestInitializedPage) &&\n> +\t\t\t XLogRecPtrToBufIdx(XLogCtl->OldestInitializedPage) == nextidx))\n> +\t\t{\n> +\t\t\tAssert(XLogCtl->OldestInitializedPage < NewPageBeginPtr);\n> +\n> +\t\t\tXLogCtl->OldestInitializedPage = NewPageBeginPtr;\n> +\t\t}\n\nnitpick: I think you can simplify the conditional to\n\n\tif (XLogRecPtrIsInvalid(XLogCtl->OldestInitializedPage) ||\n\t\tXLogRecPtrToBufIdx(XLogCtl->OldestInitializedPage) == nextidx)\n\nIt's confusing to me that OldestInitializedPage is set to NewPageBeginPtr.\nDoesn't that set it to the beginning of the newest initialized page?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Feb 2023 16:22:17 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Track Oldest Initialized WAL Buffer Page"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 5:52 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Feb 07, 2023 at 07:30:00PM +0530, Bharath Rupireddy wrote:\n> > + /*\n> > + * Try updating oldest initialized XLog buffer page.\n> > + *\n> > + * Update it if we are initializing an XLog buffer page for the first\n> > + * time or if XLog buffers are full and we are wrapping around.\n> > + */\n> > + if (XLogRecPtrIsInvalid(XLogCtl->OldestInitializedPage) ||\n> > + (!XLogRecPtrIsInvalid(XLogCtl->OldestInitializedPage) &&\n> > + XLogRecPtrToBufIdx(XLogCtl->OldestInitializedPage) == nextidx))\n> > + {\n> > + Assert(XLogCtl->OldestInitializedPage < NewPageBeginPtr);\n> > +\n> > + XLogCtl->OldestInitializedPage = NewPageBeginPtr;\n> > + }\n>\n> nitpick: I think you can simplify the conditional to\n>\n> if (XLogRecPtrIsInvalid(XLogCtl->OldestInitializedPage) ||\n> XLogRecPtrToBufIdx(XLogCtl->OldestInitializedPage) == nextidx)\n\nOh, yes, done that.\n\n> It's confusing to me that OldestInitializedPage is set to NewPageBeginPtr.\n> Doesn't that set it to the beginning of the newest initialized page?\n\nYes, that's the intention, see below. OldestInitializedPage points to\nthe start address of the oldest initialized page whereas the\nInitializedUpTo points to the end address of the latest initialized\npage. With this, one can easily track all the WAL between\nOldestInitializedPage and InitializedUpTo.\n\n+ /*\n+ * OldestInitializedPage and InitializedUpTo are always starting and\n+ * ending addresses of (same or different) XLog buffer page\n+ * respectively. Hence, they can never be same even if there's only one\n+ * initialized page in XLog buffers.\n+ */\n+ Assert(XLogCtl->OldestInitializedPage != XLogCtl->InitializedUpTo);\n\nThanks for looking at it. I'm attaching v2 patch with the above review\ncomment addressed for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 28 Feb 2023 11:12:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Track Oldest Initialized WAL Buffer Page"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 11:12:29AM +0530, Bharath Rupireddy wrote:\n> On Tue, Feb 28, 2023 at 5:52 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> It's confusing to me that OldestInitializedPage is set to NewPageBeginPtr.\n>> Doesn't that set it to the beginning of the newest initialized page?\n> \n> Yes, that's the intention, see below. OldestInitializedPage points to\n> the start address of the oldest initialized page whereas the\n> InitializedUpTo points to the end address of the latest initialized\n> page. With this, one can easily track all the WAL between\n> OldestInitializedPage and InitializedUpTo.\n\nThis is where I'm confused. Why would we set the variable for the start\naddress of the _oldest_ initialized page to the start address of the\n_newest_ initialized page? I must be missing something obvious, so sorry\nif this is a silly question.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 28 Feb 2023 20:19:31 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Track Oldest Initialized WAL Buffer Page"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 9:49 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Feb 28, 2023 at 11:12:29AM +0530, Bharath Rupireddy wrote:\n> > On Tue, Feb 28, 2023 at 5:52 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> It's confusing to me that OldestInitializedPage is set to NewPageBeginPtr.\n> >> Doesn't that set it to the beginning of the newest initialized page?\n> >\n> > Yes, that's the intention, see below. OldestInitializedPage points to\n> > the start address of the oldest initialized page whereas the\n> > InitializedUpTo points to the end address of the latest initialized\n> > page. With this, one can easily track all the WAL between\n> > OldestInitializedPage and InitializedUpTo.\n>\n> This is where I'm confused. Why would we set the variable for the start\n> address of the _oldest_ initialized page to the start address of the\n> _newest_ initialized page? I must be missing something obvious, so sorry\n> if this is a silly question.\n\nThat's the crux of the patch. Let me clarify it a bit.\n\nFirstly, we try to set OldestInitializedPage at the end of the\nrecovery but that's conditional, that is, only when the last replayed\nWAL record spans partially to the end block.\n\nSecondly, we set OldestInitializedPage while initializing the page for\nthe first time, so the missed-conditional case above gets coverd too.\n\nAnd, OldestInitializedPage isn't updated for every new initialized\npage, only when the previous OldestInitializedPage is being reused\ni.e. the wal_buffers are full and it wraps around. 
Please see the\ncomment and the condition\nXLogRecPtrToBufIdx(XLogCtl->OldestInitializedPage) == nextidx which\nholds true if we're crossing-over/wrapping around previous\nOldestInitializedPage.\n\n+ /*\n+ * Try updating oldest initialized XLog buffer page.\n+ *\n+ * Update it if we are initializing an XLog buffer page for the first\n+ * time or if XLog buffers are full and we are wrapping around.\n+ */\n+ if (XLogRecPtrIsInvalid(XLogCtl->OldestInitializedPage) ||\n+ XLogRecPtrToBufIdx(XLogCtl->OldestInitializedPage) == nextidx)\n+ {\n+ Assert(XLogCtl->OldestInitializedPage < NewPageBeginPtr);\n+\n+ XLogCtl->OldestInitializedPage = NewPageBeginPtr;\n+ }\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 1 Mar 2023 12:33:36 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Track Oldest Initialized WAL Buffer Page"
},
{
"msg_contents": "On 07/02/2023 16:00, Bharath Rupireddy wrote:\n> Hi,\n> \n> While working on [1], I was looking for a quick way to tell if a WAL\n> record is present in the WAL buffers array without scanning but I\n> couldn't find one.\n\n/* The end-ptr of the page that contains the record */\nexpectedEndPtr += XLOG_BLCKSZ - recptr % XLOG_BLCKSZ;\n\n/* get the buffer where the record is, if it's in WAL buffers at all */\nidx = XLogRecPtrToBufIdx(recptr);\n\n/* prevent the WAL buffer from being evicted while we look at it */\nLWLockAcquire(WALBufMappingLock, LW_SHARED);\n\n/* Check if the page we're interested in is in the buffer */\nfound = XLogCtl->xlblocks[idx] == expectedEndPtr;\n\nLWLockRelease(WALBufMappingLock, LW_SHARED);\n\n> Hence, I put up a patch that basically tracks the\n> oldest initialized WAL buffer page, named OldestInitializedPage, in\n> XLogCtl. With OldestInitializedPage, we can easily illustrate WAL\n> buffers array properties:\n> \n> 1) At any given point of time, pages in the WAL buffers array are\n> sorted in an ascending order from OldestInitializedPage till\n> InitializedUpTo. Note that we verify this property for assert-only\n> builds, see IsXLogBuffersArraySorted() in the patch for more details.\n> \n> 2) OldestInitializedPage is monotonically increasing (by virtue of how\n> postgres generates WAL records), that is, its value never decreases.\n> This property lets someone read its value without a lock. There's no\n> problem even if its value is slightly stale i.e. concurrently being\n> updated. One can still use it for finding if a given WAL record is\n> available in WAL buffers. At worst, one might get false positives\n> (i.e. OldestInitializedPage may tell that the WAL record is available\n> in WAL buffers, but when one actually looks at it, it isn't really\n> available). This is more efficient and performant than acquiring a\n> lock for reading. 
Note that we may not need a lock to read\n> OldestInitializedPage but we need to update it holding\n> WALBufMappingLock.\n\nYou actually hint at the above solution here, so I'm confused. If you're \nOK with slightly stale results, you can skip the WALBufferMappingLock \nabove too, and perform an atomic read of xlblocks[idx] instead.\n\n> 3) One can start traversing WAL buffers from OldestInitializedPage\n> till InitializedUpTo to list out all valid WAL records and stats, and\n> expose them via SQL-callable functions to users, for instance, as\n> pg_walinspect functions.\n> \n> 4) WAL buffers array is inherently organized as a circular, sorted and\n> rotated array with OldestInitializedPage as pivot/first element of the\n> array with the property where LSN of previous buffer page (if valid)\n> is greater than OldestInitializedPage and LSN of the next buffer page\n> (if\n> valid) is greater than OldestInitializedPage.\n\nThese properties are true, maybe we should document them explicitly in a \ncomment. But I don't see the point of tracking OldestInitializedPage. It \nseems cheap enough that we could, if there's a need for it, but I don't \nsee the need.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 16:27:27 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Track Oldest Initialized WAL Buffer Page"
},
{
"msg_contents": "> On 3 Jul 2023, at 15:27, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> But I don't see the point of tracking OldestInitializedPage. It seems cheap enough that we could, if there's a need for it, but I don't see the need.\n\nBased on the above comments, and the thread stalling, I am marking this\nreturned with feedback. Please feel free to continue the discussion here and\nre-open a new entry in a future CF if there is a new version of the patch.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 3 Aug 2023 23:08:27 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Track Oldest Initialized WAL Buffer Page"
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 6:57 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n\nThanks a lot for responding. Sorry for being late.\n\n> On 07/02/2023 16:00, Bharath Rupireddy wrote:\n> > Hi,\n> >\n> > While working on [1], I was looking for a quick way to tell if a WAL\n> > record is present in the WAL buffers array without scanning but I\n> > couldn't find one.\n>\n> /* The end-ptr of the page that contains the record */\n> expectedEndPtr += XLOG_BLCKSZ - recptr % XLOG_BLCKSZ;\n>\n> /* get the buffer where the record is, if it's in WAL buffers at all */\n> idx = XLogRecPtrToBufIdx(recptr);\n>\n> /* prevent the WAL buffer from being evicted while we look at it */\n> LWLockAcquire(WALBufMappingLock, LW_SHARED);\n>\n> /* Check if the page we're interested in is in the buffer */\n> found = XLogCtl->xlblocks[idx] == expectedEndPtr;\n>\n> LWLockRelease(WALBufMappingLock, LW_SHARED);\n\nThis is exactly what I'm doing in the 0001 patch here\nhttps://www.postgresql.org/message-id/CALj2ACU3ZYzjOv4vZTR+LFk5PL4ndUnbLS6E1vG2dhDBjQGy2A@mail.gmail.com.\n\nMy bad! I should have mentioned the requirement properly - I want to\navoid taking WALBufMappingLock to peek into wal_buffers to determine\nif the WAL buffer page containing the required WAL record exists.\n\n> You actually hint at the above solution here, so I'm confused. If you're\n> OK with slightly stale results, you can skip the WALBufferMappingLock\n> above too, and perform an atomic read of xlblocks[idx] instead.\n\nI get that and I see GetXLogBuffer first reading xlblocks without lock\nand then to confirm it anyways takes the lock again in\nAdvanceXLInsertBuffer.\n\n * However, we don't hold a lock while we read the value. If someone has\n * just initialized the page, it's possible that we get a \"torn read\" of\n * the XLogRecPtr if 64-bit fetches are not atomic on this platform. In\n * that case we will see a bogus value. 
That's ok, we'll grab the mapping\n * lock (in AdvanceXLInsertBuffer) and retry if we see anything else than\n * the page we're looking for. But it means that when we do this unlocked\n * read, we might see a value that appears to be ahead of the page we're\n * looking for. Don't PANIC on that, until we've verified the value while\n * holding the lock.\n */\n\nThe 0001 patch at\nhttps://www.postgresql.org/message-id/CALj2ACU3ZYzjOv4vZTR+LFk5PL4ndUnbLS6E1vG2dhDBjQGy2A@mail.gmail.com\nreads the WAL buffer page with WALBufMappingLock. So, the patch can\navoid WALBufMappingLock and do something like [1]:\n\n[1]\n{\n    idx = XLogRecPtrToBufIdx(ptr);\n    expectedEndPtr = ptr;\n    expectedEndPtr += XLOG_BLCKSZ - ptr % XLOG_BLCKSZ;\n\n    /*\n     * Do a stale read of xlblocks without WALBufMappingLock. All the callers\n     * of this function are expected to read WAL that's already flushed to disk\n     * from WAL buffers. If this stale read says the requested WAL buffer page\n     * doesn't exist, it means that the WAL buffer page either is being or has\n     * already been replaced for reuse. If this stale read says the requested\n     * WAL buffer page exists, we then take WALBufMappingLock and re-read the\n     * xlblocks to ensure the WAL buffer page really exists and nobody is\n     * replacing it meanwhile.\n     */\n    endptr = XLogCtl->xlblocks[idx];\n\n    /* Requested WAL isn't available in WAL buffers. */\n    if (expectedEndPtr != endptr)\n        break;\n\n    /*\n     * Requested WAL is available in WAL buffers, so recheck the existence\n     * under the WALBufMappingLock and read if the page still exists, otherwise\n     * return.\n     */\n    LWLockAcquire(WALBufMappingLock, LW_SHARED);\n\n    endptr = XLogCtl->xlblocks[idx];\n\n    /* Requested WAL isn't available in WAL buffers. 
*/\n if (expectedEndPtr != endptr)\n break;\n\n /*\n * We found the WAL buffer page containing the given XLogRecPtr.\nGet starting\n * address of the page and a pointer to the right location given\n * XLogRecPtr in that page.\n */\n page = XLogCtl->pages + idx * (Size) XLOG_BLCKSZ;\n data = page + ptr % XLOG_BLCKSZ;\n\n return data;\n}\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 18 Oct 2023 00:16:05 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Track Oldest Initialized WAL Buffer Page"
}
] |
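The residency test that recurs in this thread (compare the buffer slot's recorded end pointer with the expected end pointer of the page containing the LSN) can be modeled with a small Python sketch. The constants, helper names, and eviction behavior below are simplifications for illustration, not the real xlog.c code:

```python
XLOG_BLCKSZ = 8192
N_BUFFERS = 4

# Toy model of the WAL-buffer mapping discussed above: xlblocks[idx]
# holds the *end* LSN of the page cached in slot idx, and a page's slot
# is its page number modulo the number of buffers (circular reuse).

xlblocks = [0] * N_BUFFERS

def buf_idx(lsn):
    return (lsn // XLOG_BLCKSZ) % N_BUFFERS

def initialize_page(begin_lsn):
    # AdvanceXLInsertBuffer analogue: claim the slot for this page,
    # overwriting (evicting) whatever page previously occupied it.
    xlblocks[buf_idx(begin_lsn)] = begin_lsn + XLOG_BLCKSZ

def in_buffers(lsn):
    # The end-ptr of the page that contains lsn:
    expected_end = lsn + (XLOG_BLCKSZ - lsn % XLOG_BLCKSZ)
    return xlblocks[buf_idx(lsn)] == expected_end

# Initialize pages 0..5; with 4 slots, pages 4 and 5 wrap around and
# evict pages 0 and 1.
for page in range(6):
    initialize_page(page * XLOG_BLCKSZ)

print(in_buffers(5 * XLOG_BLCKSZ + 100))  # True  (recent page resident)
print(in_buffers(100))                    # False (page 0 evicted by wrap)
```

This is also why the residency check alone suffices without tracking OldestInitializedPage: a stale or evicted slot simply fails the end-pointer comparison.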
[
{
"msg_contents": "\nI'm trying to write am table_am extension. But I get \"too many Lwlocks taken\" after I insert \ntoo many tuples. So I try to use UnLockBuffers() everywhere; but it still give me \"too many Lwlocks taken\",\nSo how should I release All locks?\n\n--------------\n\n\n\njacktby@gmail.com\n\n\n",
"msg_date": "Tue, 7 Feb 2023 22:16:36 +0800",
"msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>",
"msg_from_op": true,
"msg_subject": "How to solve \"too many Lwlocks taken\"?"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-07 22:16:36 +0800, jacktby@gmail.com wrote:\n> \n> I'm trying to write am table_am extension. But I get \"too many Lwlocks taken\" after I insert \n> too many tuples. So I try to use UnLockBuffers() everywhere; but it still give me \"too many Lwlocks taken\",\n> So how should I release All locks?\n\nThis indicates that you aren't actually releasing all the lwlocks. You can\ninspect\nstatic int\tnum_held_lwlocks = 0;\nstatic LWLockHandle held_lwlocks[MAX_SIMUL_LWLOCKS];\nin a debugger to see which locks you didn't release.\n\n\nYou're currently starting multiple threads with questions a week. Could you at\nleast keep them in one thread?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Feb 2023 10:17:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: How to solve \"too many Lwlocks taken\"?"
}
] |
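The error above comes from overflowing the backend's `held_lwlocks[]` array, whose size is capped by `MAX_SIMUL_LWLOCKS` (200 in recent PostgreSQL sources). A toy Python tracker (not the real lwlock.c code) shows how a loop that acquires without releasing exhausts the cap even though each individual acquisition looks harmless:

```python
MAX_SIMUL_LWLOCKS = 200  # same cap as PostgreSQL's held_lwlocks[] array

held_lwlocks = []

def lwlock_acquire(lock_id):
    if len(held_lwlocks) >= MAX_SIMUL_LWLOCKS:
        raise RuntimeError("too many LWLocks taken")
    held_lwlocks.append(lock_id)

def lwlock_release(lock_id):
    held_lwlocks.remove(lock_id)

# A per-tuple loop that acquires a lock but forgets to release it hits
# the cap after MAX_SIMUL_LWLOCKS iterations. The fix is pairing every
# acquire with a release inside the loop, not a blanket cleanup call
# after the damage is done.
try:
    for i in range(300):
        lwlock_acquire(i)  # bug: missing lwlock_release(i)
except RuntimeError as e:
    print(e)               # too many LWLocks taken
print(len(held_lwlocks))   # 200
```

This mirrors Andres's debugging advice: inspecting `held_lwlocks` (here, the list; in a backend, the static array) reveals exactly which acquisitions were never released.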
[
{
"msg_contents": "Hello,\n\nIt has been brought to my attention that SQL functions always use generic \nplans.\n\nTake this function for example:\n\ncreate or replace function test_plpgsql(p1 oid) returns text as $$\nBEGIN\n RETURN (SELECT relname FROM pg_class WHERE oid = p1 OR p1 IS NULL LIMIT 1); \nEND;\n$$ language plpgsql;\n\nAs expected, the PlanCache takes care of generating parameter specific plans, \nand correctly prunes the redundant OR depending on wether we call the function \nwith a NULL value or not:\n\nro=# select test_plpgsql(NULL);\nLOG: duration: 0.030 ms plan:\nQuery Text: (SELECT relname FROM pg_class WHERE oid = p1 OR p1 IS NULL LIMIT \n1)\nResult (cost=0.04..0.05 rows=1 width=64)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..0.04 rows=1 width=64)\n -> Seq Scan on pg_class (cost=0.00..18.12 rows=412 width=64)\nLOG: duration: 0.662 ms plan:\nQuery Text: select test_plpgsql(NULL);\nResult (cost=0.00..0.26 rows=1 width=32)\n\nro=# select test_plpgsql(1);\nLOG: duration: 0.075 ms plan:\nQuery Text: (SELECT relname FROM pg_class WHERE oid = p1 OR p1 IS NULL LIMIT \n1)\nResult (cost=8.29..8.30 rows=1 width=64)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.27..8.29 rows=1 width=64)\n -> Index Scan using pg_class_oid_index on pg_class \n(cost=0.27..8.29 rows=1 width=64)\n Index Cond: (oid = '1'::oid)\nLOG: duration: 0.675 ms plan:\nQuery Text: select test_plpgsql(1);\nResult (cost=0.00..0.26 rows=1 width=32)\n\n\nBut writing the same function in SQL:\ncreate or replace function test_sql(p1 oid) returns text as $$\nSELECT relname FROM pg_class WHERE oid = p1 OR p1 IS NULL LIMIT 1\n$$ language sql;\n\nwe end up with a generic plan:\n\nro=# select test_sql(1);\nLOG: duration: 0.287 ms plan:\nQuery Text: SELECT relname FROM pg_class WHERE oid = p1 OR p1 IS NULL LIMIT 1\nQuery Parameters: $1 = '1'\nLimit (cost=0.00..6.39 rows=1 width=32)\n -> Seq Scan on pg_class (cost=0.00..19.16 rows=3 width=32)\n Filter: ((oid = $1) OR ($1 IS NULL))\n\nThis is due to 
the fact that SQL functions are planned once for the whole \nquery using a specific SQLFunctionCache instead of using the whole PlanCache \nmachinery. \n\nThe following comment can be found in functions.c, about the SQLFunctionCache:\n\n * Note that currently this has only the lifespan of the calling query.\n * Someday we should rewrite this code to use plancache.c to save parse/plan\n * results for longer than that.\n\nI would be interested in working on this, primarily to avoid this problem of \nhaving generic query plans for SQL functions, but maybe having a longer-lived \ncache as well would be nice to have.\n\nIs there any reason not to, or pitfalls we would like to avoid?\n\nBest regards,\n\n--\nRonan Dunklau\n\n\n\n\n\n",
"msg_date": "Tue, 07 Feb 2023 15:55:34 +0100",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "SQLFunctionCache and generic plans"
},
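[Editor's note: for readers unfamiliar with the plancache behavior Ronan contrasts above, the choice plancache.c makes between a generic plan and per-call custom plans can be modeled roughly as below. This is an illustrative sketch, not PostgreSQL source: the constant 5 matches the documented "try custom plans for the first five executions" heuristic, but the cost accounting is simplified (the real code also charges a planning-overhead fuzz factor and honors the plan_cache_mode setting). The point of the thread is that the SQL-function path never reaches this choice at all.]

```python
# Rough model of plancache.c's generic-vs-custom plan choice for one
# cached statement (illustrative only; names and constants simplified).
class CachedPlanSource:
    CUSTOM_PLAN_TRIES = 5  # custom plans are always used for the first five calls

    def __init__(self, generic_cost):
        self.generic_cost = generic_cost
        self.num_custom_plans = 0
        self.total_custom_cost = 0.0

    def record_custom_plan(self, cost):
        # Remember each custom plan's estimated cost to learn the average.
        self.num_custom_plans += 1
        self.total_custom_cost += cost

    def choose_custom_plan(self):
        # Always try parameter-specific plans first.
        if self.num_custom_plans < self.CUSTOM_PLAN_TRIES:
            return True
        avg_custom = self.total_custom_cost / self.num_custom_plans
        # Keep replanning per call only while that beats the generic plan.
        return avg_custom < self.generic_cost
```

In the plpgsql example above this machinery is what lets the NULL call and the non-NULL call get different plans; the SQLFunctionCache path plans once and stops there.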
{
"msg_contents": "Ronan Dunklau <ronan.dunklau@aiven.io> writes:\n> The following comment can be found in functions.c, about the SQLFunctionCache:\n\n> * Note that currently this has only the lifespan of the calling query.\n> * Someday we should rewrite this code to use plancache.c to save parse/plan\n> * results for longer than that.\n\n> I would be interested in working on this, primarily to avoid this problem of \n> having generic query plans for SQL functions but maybe having a longer lived \n> cache as well would be nice to have.\n> Is there any reason not too, or pitfalls we would like to avoid ?\n\nAFAIR it's just lack of round tuits. There would probably be some\nsemantic side-effects, though if you pay attention you could likely\nmake things better while you are at it. The existing behavior of\nparsing and planning all the statements at once is not very desirable\n--- for instance, it doesn't work to do\n\tCREATE TABLE foo AS ...;\n\tSELECT * FROM foo;\nI think if we're going to nuke this code and start over, we should\ntry to make that sort of case work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Feb 2023 10:29:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQLFunctionCache and generic plans"
},
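[Editor's note: the failure mode Tom describes can be shown with a toy model (hypothetical names, not PostgreSQL code): planning every statement of a function body up front fails when a later statement references an object an earlier one creates, while planning each statement just before executing it succeeds.]

```python
# Toy model of why parse/plan-all-at-once breaks CREATE-then-SELECT.
def plan(stmt, catalog):
    kind, table = stmt
    if kind == "select" and table not in catalog:
        raise LookupError(f'relation "{table}" does not exist')
    return stmt

def execute(planned, catalog):
    kind, table = planned
    if kind == "create":
        catalog.add(table)

def run_eager(stmts):
    # Current SQL-function behavior: plan everything before executing anything.
    catalog = set()
    plans = [plan(s, catalog) for s in stmts]  # fails: foo not created yet
    for p in plans:
        execute(p, catalog)

def run_lazy(stmts):
    # Behavior Tom asks for: plan each statement only when it is reached.
    catalog = set()
    for s in stmts:
        execute(plan(s, catalog), catalog)
```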
{
"msg_contents": "Tom Lane wrote 2023-02-07 18:29:\n> Ronan Dunklau <ronan.dunklau@aiven.io> writes:\n>> The following comment can be found in functions.c, about the \n>> SQLFunctionCache:\n> \n>> * Note that currently this has only the lifespan of the calling \n>> query.\n>> * Someday we should rewrite this code to use plancache.c to save \n>> parse/plan\n>> * results for longer than that.\n> \n>> I would be interested in working on this, primarily to avoid this \n>> problem of\n>> having generic query plans for SQL functions but maybe having a longer \n>> lived\n>> cache as well would be nice to have.\n>> Is there any reason not too, or pitfalls we would like to avoid ?\n> \n> AFAIR it's just lack of round tuits. There would probably be some\n> semantic side-effects, though if you pay attention you could likely\n> make things better while you are at it. The existing behavior of\n> parsing and planning all the statements at once is not very desirable\n> --- for instance, it doesn't work to do\n> \tCREATE TABLE foo AS ...;\n> \tSELECT * FROM foo;\n> I think if we're going to nuke this code and start over, we should\n> try to make that sort of case work.\n> \n> \t\t\tregards, tom lane\n\nHi.\n\nI've tried to make SQL functions use the CachedPlan machinery. The main goal \nwas to allow SQL functions to use custom plans\n(the work started from the question of why a SQL function is so slow \ncompared to a plpgsql one). It turned out that\nthe plpgsql function used a custom plan and eliminated scans of all irrelevant \nsections, but\nexec-time pruning didn't cope with pruning when a ScalarArrayOpExpr \nfilters data using an int[] parameter.\n\nIn the current prototype there are two restrictions. The first one is that \nthe CachedPlan has the lifetime of a query - it's not\nsaved for future use, as we don't have something like the plpgsql hashtable \nfor long-lived function storage. 
Second -\nSQL language functions in sql_body form (with stored queryTree_list) are \nhandled in the old way, as we currently lack\ntools to make cached plans from query trees.\n\nCurrently this change solves the issue of inefficient plans for queries \nover partitioned tables. For example, function like\n\nCREATE OR REPLACE FUNCTION public.test_get_records(ids integer[])\n RETURNS SETOF test\n LANGUAGE sql\nAS $function$\n select *\n from test\n where id = any (ids)\n$function$;\n\nfor hash-distributed table test can perform pruning in plan time and \ncan have plan like\n\n Append (cost=0.00..51.88 rows=26 width=36)\n -> Seq Scan on test_0 test_1 (cost=0.00..25.88 rows=13 \nwidth=36)\n Filter: (id = ANY ('{1,2}'::integer[]))\n -> Seq Scan on test_2 (cost=0.00..25.88 rows=13 width=36)\n Filter: (id = ANY ('{1,2}'::integer[]))\n\ninstead of\n\nAppend (cost=0.00..155.54 rows=248 width=36)\n -> Seq Scan on test_0 test_1 (cost=0.00..38.58 rows=62 \nwidth=36)\n Filter: (id = ANY ($1))\n -> Seq Scan on test_1 test_2 (cost=0.00..38.58 rows=62 \nwidth=36)\n Filter: (id = ANY ($1))\n -> Seq Scan on test_2 test_3 (cost=0.00..38.58 rows=62 \nwidth=36)\n Filter: (id = ANY ($1))\n -> Seq Scan on test_3 test_4 (cost=0.00..38.58 rows=62 \nwidth=36)\n Filter: (id = ANY ($1))\n\nThis patch definitely requires more work, and I share it to get some \nearly feedback.\n\nWhat should we do with \"pre-parsed\" SQL functions (when prosrc is \nempty)? How should we create cached plans when we don't have raw \nparsetrees?\nCurrently we can create cached plans without raw parsetrees, but this \nmeans that plan revalidation doesn't work, choose_custom_plan()\nalways returns false and we get generic plan. 
Perhaps, we need some form \nof GetCachedPlan(), which ignores raw_parse_tree?\nIn this case how could we possibly cache plans for session lifetime \n(like plpgsql language does) if we can't use cached revalidation \nmachinery?\nI hope to get some hints to move further.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Tue, 03 Sep 2024 10:33:23 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQLFunctionCache and generic plans"
},
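[Editor's note: the gain Alexander shows comes from the planner being able to hash the array elements when they are known constants, keeping only the partitions that can match; with an opaque parameter ($1) every partition must stay in the Append. A toy model of that difference follows; the modulo routing below is a stand-in, not PostgreSQL's real hash-partition function.]

```python
NUM_PARTS = 4  # hash-partitioned table with four partitions

def partition_for(value, nparts=NUM_PARTS):
    # Stand-in for PostgreSQL's hash partition routing.
    return value % nparts

def prune(ids):
    """Return the set of partitions the Append node must scan."""
    if ids is None:
        # Parameter unknown at plan time ($1): scan every partition.
        return set(range(NUM_PARTS))
    # Constants known at plan time: keep only partitions that can match.
    return {partition_for(v) for v in ids}
```

With `ids` known, `prune` returns a small subset (the two-way Append in the first plan above); with `ids` unknown it returns all partitions (the four-way Append in the second).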
{
"msg_contents": "Hi, Alexander!\n\nOn Tue, Sep 3, 2024 at 10:33 AM Alexander Pyhalov\n<a.pyhalov@postgrespro.ru> wrote:\n> Tom Lane писал(а) 2023-02-07 18:29:\n> > Ronan Dunklau <ronan.dunklau@aiven.io> writes:\n> >> The following comment can be found in functions.c, about the\n> >> SQLFunctionCache:\n> >\n> >> * Note that currently this has only the lifespan of the calling\n> >> query.\n> >> * Someday we should rewrite this code to use plancache.c to save\n> >> parse/plan\n> >> * results for longer than that.\n> >\n> >> I would be interested in working on this, primarily to avoid this\n> >> problem of\n> >> having generic query plans for SQL functions but maybe having a longer\n> >> lived\n> >> cache as well would be nice to have.\n> >> Is there any reason not too, or pitfalls we would like to avoid ?\n> >\n> > AFAIR it's just lack of round tuits. There would probably be some\n> > semantic side-effects, though if you pay attention you could likely\n> > make things better while you are at it. The existing behavior of\n> > parsing and planning all the statements at once is not very desirable\n> > --- for instance, it doesn't work to do\n> > CREATE TABLE foo AS ...;\n> > SELECT * FROM foo;\n> > I think if we're going to nuke this code and start over, we should\n> > try to make that sort of case work.\n> >\n> > regards, tom lane\n>\n> Hi.\n>\n> I've tried to make SQL functions use CachedPlan machinery. The main goal\n> was to allow SQL functions to use custom plans\n> (the work was started from question - why sql function is so slow\n> compared to plpgsql one). It turned out that\n> plpgsql function used custom plan and eliminated scan of all irrelevant\n> sections, but\n> exec-time pruning didn't cope with pruning when ScalarArrayOpExpr,\n> filtering data using int[] parameter.\n>\n> In current prototype there are two restrictions. 
The first one is that\n> CachecPlan has lifetime of a query - it's not\n> saved for future use, as we don't have something like plpgsql hashtable\n> for long live function storage. Second -\n> SQL language functions in sql_body form (with stored queryTree_list) are\n> handled in the old way, as we currently lack\n> tools to make cached plans from query trees.\n>\n> Currently this change solves the issue of inefficient plans for queries\n> over partitioned tables. For example, function like\n>\n> CREATE OR REPLACE FUNCTION public.test_get_records(ids integer[])\n> RETURNS SETOF test\n> LANGUAGE sql\n> AS $function$\n> select *\n> from test\n> where id = any (ids)\n> $function$;\n>\n> for hash-distributed table test can perform pruning in plan time and\n> can have plan like\n>\n> Append (cost=0.00..51.88 rows=26 width=36)\n> -> Seq Scan on test_0 test_1 (cost=0.00..25.88 rows=13\n> width=36)\n> Filter: (id = ANY ('{1,2}'::integer[]))\n> -> Seq Scan on test_2 (cost=0.00..25.88 rows=13 width=36)\n> Filter: (id = ANY ('{1,2}'::integer[]))\n>\n> instead of\n>\n> Append (cost=0.00..155.54 rows=248 width=36)\n> -> Seq Scan on test_0 test_1 (cost=0.00..38.58 rows=62\n> width=36)\n> Filter: (id = ANY ($1))\n> -> Seq Scan on test_1 test_2 (cost=0.00..38.58 rows=62\n> width=36)\n> Filter: (id = ANY ($1))\n> -> Seq Scan on test_2 test_3 (cost=0.00..38.58 rows=62\n> width=36)\n> Filter: (id = ANY ($1))\n> -> Seq Scan on test_3 test_4 (cost=0.00..38.58 rows=62\n> width=36)\n> Filter: (id = ANY ($1))\n>\n> This patch definitely requires more work, and I share it to get some\n> early feedback.\n>\n> What should we do with \"pre-parsed\" SQL functions (when prosrc is\n> empty)? How should we create cached plans when we don't have raw\n> parsetrees?\n> Currently we can create cached plans without raw parsetrees, but this\n> means that plan revalidation doesn't work, choose_custom_plan()\n> always returns false and we get generic plan. 
Perhaps, we need some form\n> of GetCachedPlan(), which ignores raw_parse_tree?\n\nI don't think you need a new form of GetCachedPlan(). Instead, it\nseems that StmtPlanRequiresRevalidation() should be revised. As I got\nfrom comments and the d8b2fcc9d4 commit message, the primary goal was\nto skip revalidation of utility statements. Skipping revalidation was\na positive side effect, as long as we didn't support custom plans for\nthem anyway. But as you're going to change this,\nStmtPlanRequiresRevalidation() needs to be revised.\n\nI also think it's not necessary to implement long-lived plan cache in\nthe initial patch. The work could be split into two patches. The\nfirst could implement query lifetime plan cache. This is beneficial\nalready by itself as you've shown by example. The second could\nimplement long-lived plan cache.\n\nI appreciate your work in this direction. I hope you got the feedback\nto go ahead and work on remaining issues.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Fri, 20 Sep 2024 15:06:27 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQLFunctionCache and generic plans"
}
]
[
{
"msg_contents": "Hi\n\nI have a question about the possibility of simply getting the name of the\ncurrently executed function. The reason for this request is simplification\nof writing debug messages.\n\nGET DIAGNOSTICS _oid = PG_ROUTINE_OID;\nRAISE NOTICE '... % ... %', _oid, _oid::regproc::text;\n\nThe advantage of this dynamic access to function name is always valid value\nnot sensitive to some renaming or moving between schemas.\n\nI am able to separate a name from context, but it can be harder to write\nthis separation really robustly. It can be very easy to enhance the GET\nDIAGNOSTICS statement to return the oid of currently executed function.\n\nDo you think it can be useful feature?\n\nThe implementation should be trivial.\n\nComments, notes?\n\nRegards\n\nPavel",
"msg_date": "Tue, 7 Feb 2023 20:48:22 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "possible proposal plpgsql GET DIAGNOSTICS oid = PG_ROUTINE_OID"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 2:49 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> I have a question about the possibility of simply getting the name of the\n> currently executed function. The reason for this request is simplification\n> of writing debug messages.\n>\n> GET DIAGNOSTICS _oid = PG_ROUTINE_OID;\n> RAISE NOTICE '... % ... %', _oid, _oid::regproc::text;\n>\n> The advantage of this dynamic access to function name is always valid\n> value not sensitive to some renaming or moving between schemas.\n>\n> I am able to separate a name from context, but it can be harder to write\n> this separation really robustly. It can be very easy to enhance the GET\n> DIAGNOSTICS statement to return the oid of currently executed function.\n>\n> Do you think it can be useful feature?\n>\n\nI was hoping it could be a CONSTANT like TG_OP (so the extra GET\nDIAGNOSTICS wasn't invoked, but I have no idea the weight of that CODE\nCHANGE)\n\nRegardless, this concept is what we are looking for. 
We prefer to leave\nsome debugging scaffolding in our DB Procedures, but disable it by default.\nWe are looking for a way to add something like this as a filter on the\nlevel of output.\n\nOur Current USE CASE is\n CALL LOGGING('Msg'); -- And by default nothing happens, unless we set\nsome session variables appropriately\n\nWe are looking for\n CALL LOGGING('Msg', __PG_ROUTINE_OID ); -- Now we can enable logging by\nthe routine we are interested in!\n\nThe LOGGING routine currently checks a session variable to see if logging\nis EVEN Desired, if not it exits (eg PRODUCTION).\n\nNow we can add a single line check, if p_funcoid is IN my list of routines\nI am debugging, send the output.\n\nI will gladly work on the documentation side to help this happen!\n\n+10\n\n\n\n\n>\n> The implementation should be trivial.\n>\n> Comments, notes?\n>\n> Regards\n>\n> Pavel\n>\n>\n>",
"msg_date": "Tue, 7 Feb 2023 16:08:02 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: possible proposal plpgsql GET DIAGNOSTICS oid = PG_ROUTINE_OID"
},
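[Editor's note: a minimal sketch of the filtering pattern Kirk describes, with hypothetical names; the session-variable plumbing is reduced to a plain set, and the routine oid stands in for the value PG_ROUTINE_OID would supply.]

```python
# Hypothetical model of a LOGGING routine that is silent in production
# unless the caller's oid is on a per-session debug allow-list.
class SessionLogger:
    def __init__(self):
        self.debug_oids = set()  # stands in for a session variable

    def enable(self, routine_oid):
        # Turn on output for one routine we are debugging.
        self.debug_oids.add(routine_oid)

    def logging(self, msg, routine_oid):
        # The single-line check the thread asks PG_ROUTINE_OID to enable.
        if routine_oid not in self.debug_oids:
            return None  # production default: do nothing
        return f"[{routine_oid}] {msg}"
```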
{
"msg_contents": "On Tue, Feb 07, 2023 at 08:48:22PM +0100, Pavel Stehule wrote:\n>\n> I have a question about the possibility of simply getting the name of the\n> currently executed function. The reason for this request is simplification\n> of writing debug messages.\n>\n> GET DIAGNOSTICS _oid = PG_ROUTINE_OID;\n> RAISE NOTICE '... % ... %', _oid, _oid::regproc::text;\n>\n> The advantage of this dynamic access to function name is always valid value\n> not sensitive to some renaming or moving between schemas.\n>\n> I am able to separate a name from context, but it can be harder to write\n> this separation really robustly. It can be very easy to enhance the GET\n> DIAGNOSTICS statement to return the oid of currently executed function.\n>\n> Do you think it can be useful feature?\n\n+1, it would have been quite handy in a few of my projects.\n\n\n",
"msg_date": "Wed, 8 Feb 2023 14:33:16 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: possible proposal plpgsql GET DIAGNOSTICS oid = PG_ROUTINE_OID"
},
{
"msg_contents": "hi\n\n\n\nOn Wed, Feb 8, 2023 at 7:33 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Tue, Feb 07, 2023 at 08:48:22PM +0100, Pavel Stehule wrote:\n> >\n> > I have a question about the possibility of simply getting the name of the\n> > currently executed function. The reason for this request is\n> simplification\n> > of writing debug messages.\n> >\n> > GET DIAGNOSTICS _oid = PG_ROUTINE_OID;\n> > RAISE NOTICE '... % ... %', _oid, _oid::regproc::text;\n> >\n> > The advantage of this dynamic access to function name is always valid\n> value\n> > not sensitive to some renaming or moving between schemas.\n> >\n> > I am able to separate a name from context, but it can be harder to write\n> > this separation really robustly. It can be very easy to enhance the GET\n> > DIAGNOSTICS statement to return the oid of currently executed function.\n> >\n> > Do you think it can be useful feature?\n>\n> +1, it would have been quite handy in a few of my projects.\n>\n\nit can look like this\n\ncreate or replace function foo(a int)\nreturns int as $$\ndeclare s text; n text; o oid;\nbegin\n get diagnostics s = pg_current_routine_signature,\n n = pg_current_routine_name,\n o = pg_current_routine_oid;\n raise notice 'sign:%, name:%, oid:%', s, n, o;\n return a;\nend;\n$$ language plpgsql;\nCREATE FUNCTION\n(2023-02-08 09:04:03) postgres=# select foo(10);\nNOTICE: sign:foo(integer), name:foo, oid:16392\n┌─────┐\n│ foo │\n╞═════╡\n│ 10 │\n└─────┘\n(1 row)\n\nThe name - pg_routine_oid can be confusing, because it is not clear whether\nit is the oid of the currently executed routine or the routine at the top of the exception\n\nRegards\n\nPavel",
"msg_date": "Wed, 8 Feb 2023 09:07:27 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: possible proposal plpgsql GET DIAGNOSTICS oid = PG_ROUTINE_OID"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 3:08 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> hi\n>\n> st 8. 2. 2023 v 7:33 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n>\n>> On Tue, Feb 07, 2023 at 08:48:22PM +0100, Pavel Stehule wrote:\n>> >\n>> > GET DIAGNOSTICS _oid = PG_ROUTINE_OID;\n>> > RAISE NOTICE '... % ... %', _oid, _oid::regproc::text;\n>> >\n>> > Do you think it can be useful feature?\n>>\n>> +1, it would have been quite handy in a few of my projects.\n>>\n>\n> it can looks like that\n>\n> create or replace function foo(a int)\n> returns int as $$\n> declare s text; n text; o oid;\n> begin\n> get diagnostics s = pg_current_routine_signature,\n> n = pg_current_routine_name,\n> o = pg_current_routine_oid;\n> raise notice 'sign:%, name:%, oid:%', s, n, o;\n> return a;\n> end;\n> $$ language plpgsql;\n> CREATE FUNCTION\n> (2023-02-08 09:04:03) postgres=# select foo(10);\n> NOTICE: sign:foo(integer), name:foo, oid:16392\n> ┌─────┐\n> │ foo │\n> ╞═════╡\n> │ 10 │\n> └─────┘\n> (1 row)\n>\n> The name - pg_routine_oid can be confusing, because there is not clean if\n> it is oid of currently executed routine or routine from top of exception\n>\n> Regards\n>\n> Pavel\n>\n\nI agree that the name changed to pg_current_routine_... makes the most\nsense, great call...\n\n+1",
"msg_date": "Wed, 8 Feb 2023 10:56:04 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: possible proposal plpgsql GET DIAGNOSTICS oid = PG_ROUTINE_OID"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 10:56 AM Kirk Wolak <wolakk@gmail.com> wrote:\n\n> On Wed, Feb 8, 2023 at 3:08 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> hi\n>>\n>> st 8. 2. 2023 v 7:33 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n>> napsal:\n>>\n>>> On Tue, Feb 07, 2023 at 08:48:22PM +0100, Pavel Stehule wrote:\n>>> >\n>>> > GET DIAGNOSTICS _oid = PG_ROUTINE_OID;\n>>> > RAISE NOTICE '... % ... %', _oid, _oid::regproc::text;\n>>> >\n>>> > Do you think it can be useful feature?\n>>>\n>>> +1, it would have been quite handy in a few of my projects.\n>>>\n>>\n>> it can looks like that\n>>\n>> create or replace function foo(a int)\n>> returns int as $$\n>> declare s text; n text; o oid;\n>> begin\n>> get diagnostics s = pg_current_routine_signature,\n>> n = pg_current_routine_name,\n>> o = pg_current_routine_oid;\n>> raise notice 'sign:%, name:%, oid:%', s, n, o;\n>> return a;\n>> end;\n>> $$ language plpgsql;\n>> CREATE FUNCTION\n>> (2023-02-08 09:04:03) postgres=# select foo(10);\n>> NOTICE: sign:foo(integer), name:foo, oid:16392\n>> ┌─────┐\n>> │ foo │\n>> ╞═════╡\n>> │ 10 │\n>> └─────┘\n>> (1 row)\n>>\n>> The name - pg_routine_oid can be confusing, because there is not clean if\n>> it is oid of currently executed routine or routine from top of exception\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>\n> I agree that the name changed to pg_current_routine_... makes the most\n> sense, great call...\n>\n> +1\n>\n\nOkay, I reviewed this. I tested it (allocating too small of\nvarchar's for values, various \"signature types\"),\nand also a performance test... Wow, on my VM, 10,000 Calls in a loop was\n2-4ms...\n\nThe names are clear. Again, I tested with various options, and including\nROW_COUNT, or not.\n\nThis functions PERFECTLY.... 
Except there are no documentation changes.\nBecause of that, I set it to Waiting on Author.\nWhich might be unfair, because I could take a stab at doing the\ndocumentation (but docs are not compiling on my setup yet).\n\nThe documentation changes are simple enough.\nIf I can get the docs compiled on my rig, I will see if I can make the\nchanges, and post an updated patch,\nthat contains both...\n\nBut I don't want to be stepping on toes, or having it look like I am taking\ncredit.\n\nRegards - Kirk",
"msg_date": "Sun, 26 Mar 2023 17:37:36 -0400",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: possible proposal plpgsql GET DIAGNOSTICS oid = PG_ROUTINE_OID"
},
{
"msg_contents": "On Sun, Mar 26, 2023 at 5:37 PM Kirk Wolak <wolakk@gmail.com> wrote:\n\n> On Wed, Feb 8, 2023 at 10:56 AM Kirk Wolak <wolakk@gmail.com> wrote:\n>\n>> On Wed, Feb 8, 2023 at 3:08 AM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>>> hi\n>>>\n>>> st 8. 2. 2023 v 7:33 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n>>> napsal:\n>>>\n>>>> On Tue, Feb 07, 2023 at 08:48:22PM +0100, Pavel Stehule wrote:\n>>>> >\n>>>> > GET DIAGNOSTICS _oid = PG_ROUTINE_OID;\n>>>> > RAISE NOTICE '... % ... %', _oid, _oid::regproc::text;\n>>>> >\n>>>> > Do you think it can be useful feature?\n>>>>\n>>>> +1, it would have been quite handy in a few of my projects.\n>>>>\n>>>\n>>> it can looks like that\n>>>\n>>> create or replace function foo(a int)\n>>> returns int as $$\n>>> declare s text; n text; o oid;\n>>> begin\n>>> get diagnostics s = pg_current_routine_signature,\n>>> n = pg_current_routine_name,\n>>> o = pg_current_routine_oid;\n>>> raise notice 'sign:%, name:%, oid:%', s, n, o;\n>>> return a;\n>>> end;\n>>> $$ language plpgsql;\n>>> CREATE FUNCTION\n>>> (2023-02-08 09:04:03) postgres=# select foo(10);\n>>> NOTICE: sign:foo(integer), name:foo, oid:16392\n>>> ┌─────┐\n>>> │ foo │\n>>> ╞═════╡\n>>> │ 10 │\n>>> └─────┘\n>>> (1 row)\n>>>\n>>> The name - pg_routine_oid can be confusing, because there is not clean\n>>> if it is oid of currently executed routine or routine from top of exception\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>\n>> I agree that the name changed to pg_current_routine_... makes the most\n>> sense, great call...\n>>\n>> +1\n>>\n>\n> Okay, I reviewed this. I tested it (allocating too small of\n> varchar's for values, various \"signature types\"),\n> and also a performance test... Wow, on my VM, 10,000 Calls in a loop was\n> 2-4ms...\n>\n> The names are clear. Again, I tested with various options, and including\n> ROW_COUNT, or not.\n>\n> This functions PERFECTLY.... 
Except there are no documentation changes.\n> Because of that, I set it to Waiting on Author.\n> Which might be unfair, because I could take a stab at doing the\n> documentation (but docs are not compiling on my setup yet).\n>\n> The documentation changes are simple enough.\n> If I can get the docs compiled on my rig, I will see if I can make the\n> changes, and post an updated patch,\n> that contains both...\n>\n> But I don't want to be stepping on toes, or having it look like I am\n> taking credit.\n>\n> Regards - Kirk\n>\n\nOkay, I have modified the documentation and made sure it compiles. They\nwere simple enough changes.\nI am attaching this updated patch.\n\nI have marked the item Ready for Committer...\n\nThanks for your patience. I now have a workable hacking environment!\n\nRegards - Kirk",
"msg_date": "Sun, 26 Mar 2023 23:36:10 -0400",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: possible proposal plpgsql GET DIAGNOSTICS oid = PG_ROUTINE_OID"
},
{
"msg_contents": "Hi\n\n\npo 27. 3. 2023 v 5:36 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n\n> On Sun, Mar 26, 2023 at 5:37 PM Kirk Wolak <wolakk@gmail.com> wrote:\n>\n>> On Wed, Feb 8, 2023 at 10:56 AM Kirk Wolak <wolakk@gmail.com> wrote:\n>>\n>>> On Wed, Feb 8, 2023 at 3:08 AM Pavel Stehule <pavel.stehule@gmail.com>\n>>> wrote:\n>>>\n>>>> hi\n>>>>\n>>>> st 8. 2. 2023 v 7:33 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n>>>> napsal:\n>>>>\n>>>>> On Tue, Feb 07, 2023 at 08:48:22PM +0100, Pavel Stehule wrote:\n>>>>> >\n>>>>> > GET DIAGNOSTICS _oid = PG_ROUTINE_OID;\n>>>>> > RAISE NOTICE '... % ... %', _oid, _oid::regproc::text;\n>>>>> >\n>>>>> > Do you think it can be useful feature?\n>>>>>\n>>>>> +1, it would have been quite handy in a few of my projects.\n>>>>>\n>>>>\n>>>> it can looks like that\n>>>>\n>>>> create or replace function foo(a int)\n>>>> returns int as $$\n>>>> declare s text; n text; o oid;\n>>>> begin\n>>>> get diagnostics s = pg_current_routine_signature,\n>>>> n = pg_current_routine_name,\n>>>> o = pg_current_routine_oid;\n>>>> raise notice 'sign:%, name:%, oid:%', s, n, o;\n>>>> return a;\n>>>> end;\n>>>> $$ language plpgsql;\n>>>> CREATE FUNCTION\n>>>> (2023-02-08 09:04:03) postgres=# select foo(10);\n>>>> NOTICE: sign:foo(integer), name:foo, oid:16392\n>>>> ┌─────┐\n>>>> │ foo │\n>>>> ╞═════╡\n>>>> │ 10 │\n>>>> └─────┘\n>>>> (1 row)\n>>>>\n>>>> The name - pg_routine_oid can be confusing, because there is not clean\n>>>> if it is oid of currently executed routine or routine from top of exception\n>>>>\n>>>> Regards\n>>>>\n>>>> Pavel\n>>>>\n>>>\n>>> I agree that the name changed to pg_current_routine_... makes the most\n>>> sense, great call...\n>>>\n>>> +1\n>>>\n>>\n>> Okay, I reviewed this. I tested it (allocating too small of\n>> varchar's for values, various \"signature types\"),\n>> and also a performance test... Wow, on my VM, 10,000 Calls in a loop was\n>> 2-4ms...\n>>\n>> The names are clear. 
Again, I tested with various options, and including\n>> ROW_COUNT, or not.\n>>\n>> This functions PERFECTLY.... Except there are no documentation changes.\n>> Because of that, I set it to Waiting on Author.\n>> Which might be unfair, because I could take a stab at doing the\n>> documentation (but docs are not compiling on my setup yet).\n>>\n>> The documentation changes are simple enough.\n>> If I can get the docs compiled on my rig, I will see if I can make the\n>> changes, and post an updated patch,\n>> that contains both...\n>>\n>> But I don't want to be stepping on toes, or having it look like I am\n>> taking credit.\n>>\n>> Regards - Kirk\n>>\n>\n> Okay, I have modified the documentation and made sure it compiles. They\n> were simple enough changes.\n> I am attaching this updated patch.\n>\n> I have marked the item Ready for Commiter...\n>\n\nThank you for doc and for review\n\nRegards\n\nPavel\n\n\n>\n> Thanks for your patience. I now have a workable hacking environment!\n>\n> Regards - Kirk\n>\n>",
"msg_date": "Mon, 27 Mar 2023 08:29:29 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: possible proposal plpgsql GET DIAGNOSTICS oid = PG_ROUTINE_OID"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> po 27. 3. 2023 v 5:36 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n>> I have marked the item Ready for Commiter...\n\n> Thank you for doc and for review\n\nI'm kind of surprised there was any interest in this proposal at all,\nTBH, but apparently there is some. Still, I think you over-engineered\nit by doing more than the original proposal of making the function OID\navailable. The other things can be had by casting the OID to regproc\nor regprocedure, so I'd be inclined to add just one new keyword not\nthree. Besides, your implementation is a bit inconsistent: relying\non fn_signature could return a result that is stale or doesn't conform\nto the current search_path.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Apr 2023 13:37:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: possible proposal plpgsql GET DIAGNOSTICS oid = PG_ROUTINE_OID"
},
{
"msg_contents": "Hi\n\n\npo 3. 4. 2023 v 19:37 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > po 27. 3. 2023 v 5:36 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n> >> I have marked the item Ready for Commiter...\n>\n> > Thank you for doc and for review\n>\n> I'm kind of surprised there was any interest in this proposal at all,\n> TBH, but apparently there is some. Still, I think you over-engineered\n> it by doing more than the original proposal of making the function OID\n> available. The other things can be had by casting the OID to regproc\n> or regprocedure, so I'd be inclined to add just one new keyword not\n> three. Besides, your implementation is a bit inconsistent: relying\n> on fn_signature could return a result that is stale or doesn't conform\n> to the current search_path.\n>\n\nok\n\nThere is reduced patch + regress tests\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>",
"msg_date": "Mon, 3 Apr 2023 20:49:35 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: possible proposal plpgsql GET DIAGNOSTICS oid = PG_ROUTINE_OID"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> There is reduced patch + regress tests\n\nOne more thing: I do not think it's appropriate to allow this in\nGET STACKED DIAGNOSTICS. That's about reporting the place where\nan error occurred, not the current location. Eventually it might\nbe interesting to retrieve the OID of the function that contained\nthe error, but that would be a pretty complicated patch and I am\nnot sure it's worth it. In the meantime I think we should just\nforbid it.\n\nIf we do that, then the confusion you were concerned about upthread\ngoes away and we could shorten the keyword back down to \"pg_routine_oid\",\nwhich seems like a good thing for our carpal tunnels.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Apr 2023 10:20:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: possible proposal plpgsql GET DIAGNOSTICS oid = PG_ROUTINE_OID"
},
{
"msg_contents": "út 4. 4. 2023 v 16:20 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > There is reduced patch + regress tests\n>\n> One more thing: I do not think it's appropriate to allow this in\n> GET STACKED DIAGNOSTICS. That's about reporting the place where\n> an error occurred, not the current location. Eventually it might\n> be interesting to retrieve the OID of the function that contained\n> the error, but that would be a pretty complicated patch and I am\n> not sure it's worth it. In the meantime I think we should just\n> forbid it.\n>\n> If we do that, then the confusion you were concerned about upthread\n> goes away and we could shorten the keyword back down to \"pg_routine_oid\",\n> which seems like a good thing for our carpal tunnels.\n>\n> Thoughts?\n>\n\nhas sense\n\nupdated patch attached\n\nRegards\n\nPavel\n\n>\n> regards, tom lane\n>",
"msg_date": "Tue, 4 Apr 2023 18:57:12 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: possible proposal plpgsql GET DIAGNOSTICS oid = PG_ROUTINE_OID"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> út 4. 4. 2023 v 16:20 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> If we do that, then the confusion you were concerned about upthread\n>> goes away and we could shorten the keyword back down to \"pg_routine_oid\",\n>> which seems like a good thing for our carpal tunnels.\n\n> has sense\n\nOK, pushed like that with some cosmetic adjustments (better test\ncase, mostly).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Apr 2023 13:34:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: possible proposal plpgsql GET DIAGNOSTICS oid = PG_ROUTINE_OID"
},
{
"msg_contents": "út 4. 4. 2023 v 19:34 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > út 4. 4. 2023 v 16:20 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> If we do that, then the confusion you were concerned about upthread\n> >> goes away and we could shorten the keyword back down to\n> \"pg_routine_oid\",\n> >> which seems like a good thing for our carpal tunnels.\n>\n> > has sense\n>\n> OK, pushed like that with some cosmetic adjustments (better test\n> case, mostly).\n>\n\nThank you very much\n\nRegards\n\nPavel\n\n\n>\n> regards, tom lane\n>\n\nút 4. 4. 2023 v 19:34 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> út 4. 4. 2023 v 16:20 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> If we do that, then the confusion you were concerned about upthread\n>> goes away and we could shorten the keyword back down to \"pg_routine_oid\",\n>> which seems like a good thing for our carpal tunnels.\n\n> has sense\n\nOK, pushed like that with some cosmetic adjustments (better test\ncase, mostly).Thank you very muchRegardsPavel \n\n regards, tom lane",
"msg_date": "Tue, 4 Apr 2023 19:41:53 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: possible proposal plpgsql GET DIAGNOSTICS oid = PG_ROUTINE_OID"
}
] |
[
{
"msg_contents": "Hi,\n\nOn cfbot / CI, we've recently seen a lot of spurious test failures due to\nsrc/test/isolation/specs/deadlock-hard.spec changing output. Always on\nfreebsd, when running tests against a pre-existing instance.\n\nI'm fairly sure I've seen this failure on the buildfarm as well, but I'm too\nimpatient to wait for the buildfarm database query (it really should be\nupdated to use lz4 toast compression).\n\nExample failures:\n\n1)\nhttps://cirrus-ci.com/task/5307793230528512?logs=test_running#L211\nhttps://api.cirrus-ci.com/v1/artifact/task/5307793230528512/testrun/build/testrun/isolation-running/isolation/regression.diffs\nhttps://api.cirrus-ci.com/v1/artifact/task/5307793230528512/testrun/build/testrun/runningcheck.log\n\n2)\nhttps://cirrus-ci.com/task/6137098198056960?logs=test_running#L212\nhttps://api.cirrus-ci.com/v1/artifact/task/6137098198056960/testrun/build/testrun/isolation-running/isolation/regression.diffs\nhttps://api.cirrus-ci.com/v1/artifact/task/6137098198056960/testrun/build/testrun/runningcheck.log\n\nSo far the diff always is:\n\ndiff -U3 /tmp/cirrus-ci-build/src/test/isolation/expected/deadlock-hard.out /tmp/cirrus-ci-build/build/testrun/isolation-running/isolation/results/deadlock-hard.out\n--- /tmp/cirrus-ci-build/src/test/isolation/expected/deadlock-hard.out\t2023-02-07 05:32:34.536429000 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/isolation-running/isolation/results/deadlock-hard.out\t2023-02-07 05:40:33.833908000 +0000\n@@ -25,10 +25,11 @@\n step s6a7: <... completed>\n step s6c: COMMIT;\n step s5a6: <... completed>\n-step s5c: COMMIT;\n+step s5c: COMMIT; <waiting ...>\n step s4a5: <... completed>\n step s4c: COMMIT;\n step s3a4: <... completed>\n+step s5c: <... completed>\n step s3c: COMMIT;\n step s2a3: <... completed>\n step s2c: COMMIT;\n\n\nCommit 741d7f1047f fixed a similar issue in deadlock-hard. But it looks like\nwe need something more. 
But perhaps this isn't an output ordering issue:\n\nHow can we end up with s5c getting reported as waiting? I don't see how s5c\ncould end up blocking on anything?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Feb 2023 17:10:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "deadlock-hard flakiness"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-07 17:10:21 -0800, Andres Freund wrote:\n> So far the diff always is:\n> \n> diff -U3 /tmp/cirrus-ci-build/src/test/isolation/expected/deadlock-hard.out /tmp/cirrus-ci-build/build/testrun/isolation-running/isolation/results/deadlock-hard.out\n> --- /tmp/cirrus-ci-build/src/test/isolation/expected/deadlock-hard.out\t2023-02-07 05:32:34.536429000 +0000\n> +++ /tmp/cirrus-ci-build/build/testrun/isolation-running/isolation/results/deadlock-hard.out\t2023-02-07 05:40:33.833908000 +0000\n> @@ -25,10 +25,11 @@\n> step s6a7: <... completed>\n> step s6c: COMMIT;\n> step s5a6: <... completed>\n> -step s5c: COMMIT;\n> +step s5c: COMMIT; <waiting ...>\n> step s4a5: <... completed>\n> step s4c: COMMIT;\n> step s3a4: <... completed>\n> +step s5c: <... completed>\n> step s3c: COMMIT;\n> step s2a3: <... completed>\n> step s2c: COMMIT;\n\nWhile trying to debug the create_index issue [1], I did end up hitting a\ndeadlock-soft output difference:\n\nhttps://cirrus-ci.com/task/6332011665686528\ndiff -U3 /tmp/cirrus-ci-build/src/test/isolation/expected/deadlock-soft.out /tmp/cirrus-ci-build/build/testrun/isolation-running/isolation/results/deadlock-soft.out\n--- /tmp/cirrus-ci-build/src/test/isolation/expected/deadlock-soft.out\t2023-02-08 06:55:17.620898000 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/isolation-running/isolation/results/deadlock-soft.out\t2023-02-08 06:56:17.622621000 +0000\n@@ -8,10 +8,13 @@\n step d1a2: LOCK TABLE a2 IN ACCESS SHARE MODE; <waiting ...>\n step d2a1: LOCK TABLE a1 IN ACCESS SHARE MODE; <waiting ...>\n step d1a2: <... completed>\n-step d1c: COMMIT;\n+step d1c: COMMIT; <waiting ...>\n step e1l: <... completed>\n step e1c: COMMIT;\n step d2a1: <... completed>\n-step d2c: COMMIT;\n+step d1c: <... completed>\n+step d2c: COMMIT; <waiting ...>\n step e2l: <... completed>\n-step e2c: COMMIT;\n+step e2c: COMMIT; <waiting ...>\n+step d2c: <... completed>\n+step e2c: <... 
completed>\n\n\nLike in the deadlock-hard case, I don't understand how the commits suddenly\nend up being considered waiting.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20230208024748.ijvwabhqu4xlbvin%40awork3.anarazel.de\n\n\n",
"msg_date": "Tue, 7 Feb 2023 23:06:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: deadlock-hard flakiness"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 2:10 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm fairly sure I've seen this failure on the buildfarm as well, but I'm too\n> impatient to wait for the buildfarm database query (it really should be\n> updated to use lz4 toast compression).\n\nFailures in deadlock-hard (excluding crashes, because they make lots\nof tests appear to fail bogusly) all occurred on animals that are no\nlonger with us:\n\n animal | last_report_time\n-----------+---------------------\n anole | 2022-07-05 12:31:02\n dory | 2021-09-30 04:50:08\n fossa | 2021-10-28 01:50:29\n gharial | 2022-07-05 22:04:23\n hyrax | 2021-05-10 15:11:53\n lousyjack | 2020-05-18 10:03:03\n\nThe failures stopped in mid '21 as far as my scraper noticed:\n\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-06-13%2016:31:57\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-06-11%2017:13:44\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-06-11%2006:14:39\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-05-31%2006:41:25\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-05-23%2019:43:04\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-05-16%2000:36:16\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-05-10%2000:42:43\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-05-08%2006:34:13\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-04-22%2021:24:02\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fossa&dt=2021-04-08%2019:36:06\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-03-22%2013:26:03\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-03-13%2007:24:02\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-03-05%2019:39:46\n 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-01-08%2003:16:28\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dory&dt=2020-12-28%2011:05:15\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2020-11-27%2015:39:27\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2020-10-25%2023:47:21\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2020-09-29%2021:35:52\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2020-07-29%2014:34:49\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-05-15%2005:03:03\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-05-14%2013:33:03\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-05-13%2022:03:03\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-05-12%2015:03:03\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-05-11%2022:03:06\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-05-10%2022:33:02\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2020-01-14%2010:11:30\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2019-12-04%2001:38:28\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2019-11-12%2005:43:59\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2019-10-16%2016:43:58\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2019-07-18%2021:57:59\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2019-07-10%2005:59:16\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2019-07-08%2015:02:17\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2019-06-23%2004:17:09\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2019-06-12%2021:46:24\n 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2017-04-09%2021:58:03\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2017-04-08%2021:58:04\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2017-04-08%2005:19:17\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2017-04-07%2000:23:39\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2017-04-05%2018:58:04\n\n\n",
"msg_date": "Wed, 8 Feb 2023 23:34:45 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: deadlock-hard flakiness"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 11:34 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Feb 8, 2023 at 2:10 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'm fairly sure I've seen this failure on the buildfarm as well, but I'm too\n> > impatient to wait for the buildfarm database query (it really should be\n> > updated to use lz4 toast compression).\n>\n> Failures in deadlock-hard (excluding crashes, because they make lots\n> of tests appear to fail bogusly) all occurred on animals that are no\n> longer with us:\n\nOh, and there was this thread:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGJ6xtAsXFFs%2BSGcR%3DJkv0wCje_W-SUxV1%2BN451Q-5t6MA%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 8 Feb 2023 23:39:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: deadlock-hard flakiness"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-07 17:10:21 -0800, Andres Freund wrote:\n> diff -U3 /tmp/cirrus-ci-build/src/test/isolation/expected/deadlock-hard.out /tmp/cirrus-ci-build/build/testrun/isolation-running/isolation/results/deadlock-hard.out\n> --- /tmp/cirrus-ci-build/src/test/isolation/expected/deadlock-hard.out\t2023-02-07 05:32:34.536429000 +0000\n> +++ /tmp/cirrus-ci-build/build/testrun/isolation-running/isolation/results/deadlock-hard.out\t2023-02-07 05:40:33.833908000 +0000\n> @@ -25,10 +25,11 @@\n> step s6a7: <... completed>\n> step s6c: COMMIT;\n> step s5a6: <... completed>\n> -step s5c: COMMIT;\n> +step s5c: COMMIT; <waiting ...>\n> step s4a5: <... completed>\n> step s4c: COMMIT;\n> step s3a4: <... completed>\n> +step s5c: <... completed>\n> step s3c: COMMIT;\n> step s2a3: <... completed>\n> step s2c: COMMIT;\n> \n> \n> Commit 741d7f1047f fixed a similar issue in deadlock-hard. But it looks like\n> we need something more. But perhaps this isn't an output ordering issue:\n> \n> How can we end up with s5c getting reported as waiting? I don't see how s5c\n> could end up blocking on anything?\n\nAfter looking through isolationtester's blocking detection logic I started to\nsuspect that what we're seeing is not being blocked by a heavyweight lock, but\nby a snapshot. So I added logging to\npg_isolation_test_session_is_blocked(). 
Took a while to reproduce the issue,\nbut indeed:\nhttps://cirrus-ci.com/task/4901334571286528\nhttps://api.cirrus-ci.com/v1/artifact/task/4901334571286528/testrun/build/testrun/isolation-running/isolation/regression.diffs\nhttps://api.cirrus-ci.com/v1/artifact/task/4901334571286528/testrun/build/testrun/runningcheck.log\n\nindicates that we indeed were blocked by a snapshot:\n2023-02-08 21:30:12.123 UTC [9276][client backend] [isolation/deadlock-hard/control connection][3/8971:0] LOG: pid 9280 blocked due to snapshot by pid: 0\n...\n2023-02-08 21:30:12.155 UTC [9276][client backend] [isolation/deadlock-hard/control connection][3/8973:0] LOG: pid 9278 blocked due to snapshot by pid: 0\n\n\nUnclear why we end up without a pid. It looks like 2PC removes the pid from\nthe field? In the problematic case the prepared_xacts test is indeed\nscheduled concurrently:\n\n2023-02-08 21:30:12.100 UTC [9397][client backend] [pg_regress/prepared_xacts][23/1296:39171] ERROR: transaction identifier \"foo3\" is already in use\n2023-02-08 21:30:12.100 UTC [9397][client backend] [pg_regress/prepared_xacts][23/1296:39171] STATEMENT: PREPARE TRANSACTION 'foo3';\n\nfoo3 for example does use SERIALIZABLE.\n\n\nI don't really understand how GetSafeSnapshotBlockingPids() can end up finding\ndeadlock-hard's sessions being blocked by a safe snapshot. Afaict nothing uses\nserializable in that test. How can SXACT_FLAG_DEFERRABLE_WAITING be set for\nthe sxact of a backend that never did serializable? Are we possibly forgetting\nto clear it or such?\n\n\nI don't think it should affect the reports here, but I did break something\nwhen removing SHMQueue - GetSafeSnapshotBlockingPids() doesn't check\noutput_size anymore. Will fix. Thomas, any chance you could do a pass through\n96003717645 to see if I screwed up other things? I stared a lot at that\nchange, but I obviously did miss at least one thing.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Feb 2023 14:11:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: deadlock-hard flakiness"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 14:11:45 -0800, Andres Freund wrote:\n> On 2023-02-07 17:10:21 -0800, Andres Freund wrote:\n> I don't really understand how GetSafeSnapshotBlockingPids() can end up finding\n> deadlock-hard's sessions being blocked by a safe snapshot. Afaict nothing uses\n> serializable in that test. How can SXACT_FLAG_DEFERRABLE_WAITING be set for\n> the sxact of a backend that never did serializable? Are we possibly forgetting\n> to clear it or such?\n> \n> \n> I don't think it should affect the reports here, but I did break something\n> when removing SHMQueue - GetSafeSnapshotBlockingPids() doesn't check\n> output_size anymore. Will fix. Thomas, any chance you could do a pass through\n> 96003717645 to see if I screwed up other things? I stared a lot at that\n> change, but I obviously did miss at least one thing.\n\nArgh, it's actually caused by 96003717645 as well: Previously loop iteration\nwithout finding a matching pid ends with sxact == NULL, now it doesn't.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Feb 2023 14:15:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: deadlock-hard flakiness"
}
] |
[
{
"msg_contents": "Hi,\n\nA recent cfbot run caused CI on windows to crash - on a patch that could not\nconceivably cause this issue:\n https://cirrus-ci.com/task/5646021133336576\nthe patch is just:\n https://github.com/postgresql-cfbot/postgresql/commit/dbd4afa6e7583c036b86abe2e3d27b508d335c2b\n\nregression.diffs: https://api.cirrus-ci.com/v1/artifact/task/5646021133336576/testrun/build/testrun/regress/regress/regression.diffs\npostmaster.log: https://api.cirrus-ci.com/v1/artifact/task/5646021133336576/testrun/build/testrun/regress/regress/log/postmaster.log\ncrash info: https://api.cirrus-ci.com/v1/artifact/task/5646021133336576/crashlog/crashlog-postgres.exe_1af0_2023-02-08_00-53-23-997.txt\n\n00000085`f03ffa40 00007ff6`fd89faa8 ucrtbased!abort(void)+0x5a [minkernel\\crts\\ucrt\\src\\appcrt\\startup\\abort.cpp @ 77]\n00000085`f03ffa80 00007ff6`fd6474dc postgres!ExceptionalCondition(\n\t\t\tchar * conditionName = 0x00007ff6`fdd03ca8 \"PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED\",\n\t\t\tchar * fileName = 0x00007ff6`fdd03c80 \"../src/backend/storage/ipc/pmsignal.c\",\n\t\t\tint lineNumber = 0n329)+0x78 [c:\\cirrus\\src\\backend\\utils\\error\\assert.c @ 67]\n00000085`f03ffac0 00007ff6`fd676eff postgres!MarkPostmasterChildActive(void)+0x7c [c:\\cirrus\\src\\backend\\storage\\ipc\\pmsignal.c @ 329]\n00000085`f03ffb00 00007ff6`fd59aa3a postgres!InitProcess(void)+0x2ef [c:\\cirrus\\src\\backend\\storage\\lmgr\\proc.c @ 375]\n00000085`f03ffb60 00007ff6`fd467689 postgres!SubPostmasterMain(\n\t\t\tint argc = 0n3,\n\t\t\tchar ** argv = 0x000001c6`f3814e80)+0x33a [c:\\cirrus\\src\\backend\\postmaster\\postmaster.c @ 4962]\n00000085`f03ffd90 00007ff6`fda0e1c9 postgres!main(\n\t\t\tint argc = 0n3,\n\t\t\tchar ** argv = 0x000001c6`f3814e80)+0x2f9 [c:\\cirrus\\src\\backend\\main\\main.c @ 192]\n\nSo, somehow we ended up a pmsignal slot for a new backend that's not currently\nin PM_CHILD_ASSIGNED state.\n\n\nObviously the first idea is to wonder whether this is a 
problem introduced as\npart of the the recent postmaster-latchification work.\n\n\nAt first I thought we were failing to terminate running processes, due to the\nfollowing output:\n\nparallel group (20 tests): name char txid text varchar enum float8 regproc int2 boolean bit oid pg_lsn int8 int4 float4 uuid rangetypes numeric money\n boolean ... ok 684 ms\n char ... ok 517 ms\n name ... ok 354 ms\n varchar ... ok 604 ms\n text ... ok 603 ms\n int2 ... ok 676 ms\n int4 ... ok 818 ms\n int8 ... ok 779 ms\n oid ... ok 720 ms\n float4 ... ok 823 ms\n float8 ... ok 628 ms\n bit ... ok 666 ms\n numeric ... ok 1132 ms\n txid ... ok 497 ms\n uuid ... ok 818 ms\n enum ... ok 619 ms\n money ... FAILED (test process exited with exit code 2) 7337 ms\n rangetypes ... ok 813 ms\n pg_lsn ... ok 762 ms\n regproc ... ok 632 ms\n\n\nBut now I realize the reason none of the other tests failed, is because the\ncrash took a long time, presumably due to the debugger creating the above\ninformation, so no other tests failed.\n\n\n2023-02-08 00:53:20.257 GMT client backend[4584] pg_regress/rangetypes STATEMENT: select '-[a,z)'::textrange;\nTRAP: failed Assert(\"PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED\"), File: \"../src/backend/storage/ipc/pmsignal.c\", Line: 329, PID: 5948\n[ quite a few lines ]\n2023-02-08 00:53:27.420 GMT postmaster[872] LOG: server process (PID 5948) was terminated by exception 0xC0000354\n2023-02-08 00:53:27.420 GMT postmaster[872] HINT: See C include file \"ntstatus.h\" for a description of the hexadecimal value.\n2023-02-08 00:53:27.420 GMT postmaster[872] LOG: terminating any other active server processes\n2023-02-08 00:53:27.434 GMT postmaster[872] LOG: all server processes terminated; reinitializing\n2023-02-08 00:53:27.459 GMT startup[5800] LOG: database system was interrupted; last known up at 2023-02-08 00:53:19 GMT\n2023-02-08 00:53:27.459 GMT startup[5800] LOG: database system was not properly shut down; automatic recovery in 
progress\n2023-02-08 00:53:27.462 GMT startup[5800] LOG: redo starts at 0/20DCF08\n2023-02-08 00:53:27.484 GMT startup[5800] LOG: could not stat file \"pg_tblspc/16502\": No such file or directory\n2023-02-08 00:53:27.484 GMT startup[5800] CONTEXT: WAL redo at 0/20DCFB8 for Tablespace/DROP: 16502\n2023-02-08 00:53:27.614 GMT startup[5800] LOG: invalid record length at 0/25353E8: wanted 24, got 0\n2023-02-08 00:53:27.614 GMT startup[5800] LOG: redo done at 0/2534FE0 system usage: CPU: user: 0.04 s, system: 0.04 s, elapsed: 0.15 s\n\n\nNevertheless, clearly this should never be reached.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Feb 2023 17:28:52 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 2:28 PM Andres Freund <andres@anarazel.de> wrote:\n> 2023-02-08 00:53:20.257 GMT client backend[4584] pg_regress/rangetypes STATEMENT: select '-[a,z)'::textrange;\n> TRAP: failed Assert(\"PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED\"), File: \"../src/backend/storage/ipc/pmsignal.c\", Line: 329, PID: 5948\n\nNo idea what's going on yet, but this assertion failure is very\nfamiliar to me, as one of the ways that lorikeet fails/failed (though\nit hasn't failed like that since the postmaster latchification).\nThere it was because Cygwin's signal blocking is unreliable, so the\npostmaster could start a backend, while already being in the middle of\nstarting a backend. That particular problem shouldn't be possible\nanymore; now we can only start backends from inside the main event\nloop. Hmm. (State machine bug? Some confusion about processes\ncaused by the fact that PID was recycled?)\n\n\n",
"msg_date": "Wed, 8 Feb 2023 14:52:06 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 2:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Feb 8, 2023 at 2:28 PM Andres Freund <andres@anarazel.de> wrote:\n> > TRAP: failed Assert(\"PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED\"), File: \"../src/backend/storage/ipc/pmsignal.c\", Line: 329, PID: 5948\n\nI was wondering if commit 18a4a620 might be relevant, as it touched\nthe management of those slots a few months ago, but then I found a\ncouple of matches from 2021 in my database of build farm assertion\nfailures:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-08-12%2010:38:56\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-09-28%2016:40:49\n\nFairywren is a Windows + msys2 system, so it doesn't share Cygwin's\nsignal system, it's running the pure Windows code (though it's GCC\ninstead of MSVC and has a different libc, it's using our Windows\nnative code paths and defines WIN32).\n\n\n",
"msg_date": "Wed, 8 Feb 2023 16:00:21 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 4:00 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-08-12%2010:38:56\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-09-28%2016:40:49\n\nThese were a bit different though. They also logged \"could not\nreserve shared memory region\". And they don't have a user of the same\nPID logging stuff immediately preceding the failure.\n\n\n",
"msg_date": "Wed, 8 Feb 2023 16:26:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "I still have no theory for how this condition was reached despite a\nlot of time thinking about it and searching for more clues. As far as\nI can tell, the recent improvements to postmaster's signal and event\nhandling shouldn't be related: the state management and logic was\nunchanged.\n\nWhile failing to understand this, I worked[1] on CI log indexing tool\nwith public reports that highlight this sort of thing[2], so I'll be\nwatching out for more evidence. Unfortunately I have no data from\nbefore 1 Feb (cfbot previously wasn't interested in the past at all;\nI'd need to get my hands on the commit IDs for earlier testing but I\ncan't figure out how to get those out of Cirrus or Github -- anyone\nknow how?). FWIW I have a thing I call bfbot for slurping up similar\ndata from the build farm. It's not pretty enough for public\nconsumption, but I do know that this assertion hasn't failed there,\nexcept the cases I mentioned earlier, and a load of failures on\nlorikeet which was completely b0rked until recently.\n\n[1] https://xkcd.com/974/\n[2] http://cfbot.cputube.org/highlights/assertion-90.html\n\n\n",
"msg_date": "Sat, 18 Feb 2023 13:27:04 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-18 13:27:04 +1300, Thomas Munro wrote:\n> I still have no theory for how this condition was reached despite a\n> lot of time thinking about it and searching for more clues. As far as\n> I can tell, the recent improvements to postmaster's signal and event\n> handling shouldn't be related: the state management and logic was\n> unchanged.\n\nYea, it's all very odd.\n\nIf you look at the log:\n\n2023-02-08 00:53:20.175 GMT client backend[5948] pg_regress/name DETAIL: No valid identifier after \".\".\n2023-02-08 00:53:20.175 GMT client backend[5948] pg_regress/name STATEMENT: SELECT parse_ident('xxx.1020');\n...\nTRAP: failed Assert(\"PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED\"), File: \"../src/backend/storage/ipc/pmsignal.c\", Line: 329, PID: 5948\nabort() has been called\n...\n2023-02-08 00:53:27.420 GMT postmaster[872] LOG: server process (PID 5948) was terminated by exception 0xC0000354\n2023-02-08 00:53:27.420 GMT postmaster[872] HINT: See C include file \"ntstatus.h\" for a description of the hexadecimal value.\n2023-02-08 00:53:27.420 GMT postmaster[872] LOG: terminating any other active server processes\n2023-02-08 00:53:27.434 GMT postmaster[872] LOG: all server processes terminated; reinitializing\n\n\nand that it's indeed the money test that failed:\n money ... 
FAILED (test process exited with exit code 2) 7337 ms\n\nit's very hard to understand how this stack can come to be:\n\n00000085`f03ffa40 00007ff6`fd89faa8 ucrtbased!abort(void)+0x5a [minkernel\\crts\\ucrt\\src\\appcrt\\startup\\abort.cpp @ 77]\n00000085`f03ffa80 00007ff6`fd6474dc postgres!ExceptionalCondition(\n\t\t\tchar * conditionName = 0x00007ff6`fdd03ca8 \"PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED\", \n\t\t\tchar * fileName = 0x00007ff6`fdd03c80 \"../src/backend/storage/ipc/pmsignal.c\", \n\t\t\tint lineNumber = 0n329)+0x78 [c:\\cirrus\\src\\backend\\utils\\error\\assert.c @ 67]\n00000085`f03ffac0 00007ff6`fd676eff postgres!MarkPostmasterChildActive(void)+0x7c [c:\\cirrus\\src\\backend\\storage\\ipc\\pmsignal.c @ 329]\n00000085`f03ffb00 00007ff6`fd59aa3a postgres!InitProcess(void)+0x2ef [c:\\cirrus\\src\\backend\\storage\\lmgr\\proc.c @ 375]\n00000085`f03ffb60 00007ff6`fd467689 postgres!SubPostmasterMain(\n\t\t\tint argc = 0n3, \n\t\t\tchar ** argv = 0x000001c6`f3814e80)+0x33a [c:\\cirrus\\src\\backend\\postmaster\\postmaster.c @ 4962]\n00000085`f03ffd90 00007ff6`fda0e1c9 postgres!main(\n\t\t\tint argc = 0n3, \n\t\t\tchar ** argv = 0x000001c6`f3814e80)+0x2f9 [c:\\cirrus\\src\\backend\\main\\main.c @ 192]\n\nHow can a process that we did notify crashing, that has already executed SQL\nstatements, end up in MarkPostmasterChildActive()?\n\n\n\n> While failing to understand this, I worked[1] on CI log indexing tool\n> with public reports that highlight this sort of thing[2], so I'll be\n> watching out for more evidence. Unfortunately I have no data from\n> before 1 Feb (cfbot previously wasn't interested in the past at all;\n> I'd need to get my hands on the commit IDs for earlier testing but I\n> can't figure out how to get those out of Cirrus or Github -- anyone\n> know how?). FWIW I have a thing I call bfbot for slurping up similar\n> data from the build farm. 
It's not pretty enough for public\n> consumption, but I do know that this assertion hasn't failed there,\n> except the cases I mentioned earlier, and a load of failures on\n> lorikeet which was completely b0rked until recently.\n\n> [1] https://xkcd.com/974/\n> [2] http://cfbot.cputube.org/highlights/assertion-90.html\n\nI think this is extremely useful.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Feb 2023 17:06:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "On Sat, Feb 18, 2023 at 01:27:04PM +1300, Thomas Munro wrote:\n> (cfbot previously wasn't interested in the past at all;\n> I'd need to get my hands on the commit IDs for earlier testing but I\n> can't figure out how to get those out of Cirrus or Github -- anyone\n> know how?).\n\nI wish I knew - my only suggestion is to scrape it out of \"git reflog\",\nbut that only works if you configured it to save a huge reflog or saved\nthe historic output of \"git reflog\". Or if you don't change the branch\noften, which I imagine doesn't hold true here.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 18 Feb 2023 08:09:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "Hello,\n18.02.2023 04:06, Andres Freund wrote:\n> Hi,\n>\n> On 2023-02-18 13:27:04 +1300, Thomas Munro wrote:\n>> I still have no theory for how this condition was reached despite a\n>> lot of time thinking about it and searching for more clues. As far as\n>> I can tell, the recent improvements to postmaster's signal and event\n>> handling shouldn't be related: the state management and logic was\n>> unchanged.\n> Yea, it's all very odd.\n>\n> If you look at the log:\n>\n> 2023-02-08 00:53:20.175 GMT client backend[5948] pg_regress/name DETAIL: No valid identifier after \".\".\n> 2023-02-08 00:53:20.175 GMT client backend[5948] pg_regress/name STATEMENT: SELECT parse_ident('xxx.1020');\n> ...\n> TRAP: failed Assert(\"PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED\"), File: \"../src/backend/storage/ipc/pmsignal.c\", Line: 329, PID: 5948\n> abort() has been called\n> ...\n> 2023-02-08 00:53:27.420 GMT postmaster[872] LOG: server process (PID 5948) was terminated by exception 0xC0000354\n> 2023-02-08 00:53:27.420 GMT postmaster[872] HINT: See C include file \"ntstatus.h\" for a description of the hexadecimal value.\n> 2023-02-08 00:53:27.420 GMT postmaster[872] LOG: terminating any other active server processes\n> 2023-02-08 00:53:27.434 GMT postmaster[872] LOG: all server processes terminated; reinitializing\n>\n>\n> and that it's indeed the money test that failed:\n> money ... 
FAILED (test process exited with exit code 2) 7337 ms\n>\n> it's very hard to understand how this stack can come to be:\n>\n> 00000085`f03ffa40 00007ff6`fd89faa8 ucrtbased!abort(void)+0x5a [minkernel\\crts\\ucrt\\src\\appcrt\\startup\\abort.cpp @ 77]\n> 00000085`f03ffa80 00007ff6`fd6474dc postgres!ExceptionalCondition(\n> \t\t\tchar * conditionName = 0x00007ff6`fdd03ca8 \"PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED\",\n> \t\t\tchar * fileName = 0x00007ff6`fdd03c80 \"../src/backend/storage/ipc/pmsignal.c\",\n> \t\t\tint lineNumber = 0n329)+0x78 [c:\\cirrus\\src\\backend\\utils\\error\\assert.c @ 67]\n> 00000085`f03ffac0 00007ff6`fd676eff postgres!MarkPostmasterChildActive(void)+0x7c [c:\\cirrus\\src\\backend\\storage\\ipc\\pmsignal.c @ 329]\n> 00000085`f03ffb00 00007ff6`fd59aa3a postgres!InitProcess(void)+0x2ef [c:\\cirrus\\src\\backend\\storage\\lmgr\\proc.c @ 375]\n> 00000085`f03ffb60 00007ff6`fd467689 postgres!SubPostmasterMain(\n> \t\t\tint argc = 0n3,\n> \t\t\tchar ** argv = 0x000001c6`f3814e80)+0x33a [c:\\cirrus\\src\\backend\\postmaster\\postmaster.c @ 4962]\n> 00000085`f03ffd90 00007ff6`fda0e1c9 postgres!main(\n> \t\t\tint argc = 0n3,\n> \t\t\tchar ** argv = 0x000001c6`f3814e80)+0x2f9 [c:\\cirrus\\src\\backend\\main\\main.c @ 192]\n>\n> How can a process that we did notify crashing, that has already executed SQL\n> statements, end up in MarkPostmasterChildActive()?\nMaybe it's just the backend started for the money test has got\nthe same PID (5948) that the backend for the name test had?\nA simple script that I've found [1] shows that the pids are reused rather often\n(for me, approximately each 300 process starts in Windows 10 H2), but maybe\nunder some circumstances (many concurrent processes?) PIDs can coincide even\nso often to trigger that behavior.\n\n[1] https://superuser.com/questions/636497/does-windows-7-reuse-process-ids\n\n\n",
"msg_date": "Sat, 18 Feb 2023 18:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-18 18:00:00 +0300, Alexander Lakhin wrote:\n> 18.02.2023 04:06, Andres Freund wrote:\n> > On 2023-02-18 13:27:04 +1300, Thomas Munro wrote:\n> > How can a process that we did notify crashing, that has already executed\n> > SQL statements, end up in MarkPostmasterChildActive()?\n>\n> Maybe it's just the backend started for the money test has got\n> the same PID (5948) that the backend for the name test had?\n\nI somehow mashed name and money into one test in my head... So forget what I\nwrote.\n\nThat doesn't really explain the assertion though.\n\n\nIt's too bad that we didn't use doesn't include\nlog_connections/log_disconnections. If nothing else, it makes it a lot easier\nto identify problems like that. We actually do try to configure it for CI, but\nit currently doesn't work for pg_regress style tests with meson. Need to fix\nthat. Starting a thread.\n\n\n\nOne thing that made me very suspicious when reading related code is this\nremark:\n\nbool\nReleasePostmasterChildSlot(int slot)\n...\n\t/*\n\t * Note: the slot state might already be unused, because the logic in\n\t * postmaster.c is such that this might get called twice when a child\n\t * crashes. So we don't try to Assert anything about the state.\n\t */\n\nThat seems fragile, and potentially racy. What if we somehow can end up\nstarting another backend inbetween the two ReleasePostmasterChildSlot() calls,\nwe can end up marking a slot that, newly, has a process associated with it, as\ninactive? Once the slot has been released the first time, it can be assigned\nagain.\n\n\nISTM that it's not a good idea that we use PM_CHILD_ASSIGNED to signal both,\nthat a slot has not been used yet, and that it's not in use anymore. 
I think\nthat makes it quite a bit harder to find state management issues.\n\n\n\n> A simple script that I've found [1] shows that the pids are reused rather often\n> (for me, approximately each 300 process starts in Windows 10 H2), but maybe\n> under some circumstances (many concurrent processes?) PIDs can coincide even\n> so often to trigger that behavior.\n\nIt's definitely very aggressive in reusing pids - and it seems to\nintentionally do work to keep pids small. I wonder if it'd be worth trying to\nexercise this path aggressively by configuring a very low max pid on linux, in\nan EXEC_BACKEND build.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Feb 2023 12:09:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "On 2023-02-17 Fr 19:27, Thomas Munro wrote:\n> FWIW I have a thing I call bfbot for slurping up similar\n> data from the build farm. It's not pretty enough for public\n> consumption, but I do know that this assertion hasn't failed there,\n> except the cases I mentioned earlier, and a load of failures on\n> lorikeet which was completely b0rked until recently.\n>\n\nAre there things we need to do on the server side to make data \nextraction easier?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-02-17 Fr 19:27, Thomas Munro\n wrote:\n\n\nFWIW I have a thing I call bfbot for slurping up similar\ndata from the build farm. It's not pretty enough for public\nconsumption, but I do know that this assertion hasn't failed there,\nexcept the cases I mentioned earlier, and a load of failures on\nlorikeet which was completely b0rked until recently.\n\n\n\n\n\nAre there things we need to do on the server side to make data\n extraction easier?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 19 Feb 2023 08:46:40 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 2:46 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2023-02-17 Fr 19:27, Thomas Munro wrote:\n>> FWIW I have a thing I call bfbot for slurping up similar\n>> data from the build farm. It's not pretty enough for public\n>> consumption, but I do know that this assertion hasn't failed there,\n>> except the cases I mentioned earlier, and a load of failures on\n>> lorikeet which was completely b0rked until recently.\n>\n> Are there things we need to do on the server side to make data extraction easier?\n\nIt's a good question.\n\nOne thought Andres mentioned to me is whether we might want to have an\nin-tree tool to find interesting stuff. That is, even locally during\ndevelopment, but also in the CI + buildfarm, a common tool could find\nand spit out human- and machine-readable highlights (backtraces,\nPANICs, assertions, ... like cfbot is now doing). Then the knowledge\nof what's interesting would be maintained and extended by all of us.\n\nOn the other hand, as we think of new patterns over time to look out\nfor, it's also nice to be able to re-scan old data to see if the new\npatterns occurred in the past (I've done this several times with\ncfbot's new highlight analyser as I corrected mistakes and added\npatterns). So maybe that's also a good idea, but a separate thing.\nEven if the analyser logic is not in-tree, we could try to make\nsomething that works pretty much the same across CI and BF. Perhaps\nwe could think about some of those ideas once the BF is using meson?\nAside from having just one system to think about, the meson build\nsystem is a bit more structured: it has a more formal concept of test\nsuites and tests with machine readable results from the top level\n(JSON files etc), with names strictly corresponding to directories\nwhere the output is, etc. 
I think I'd basically want a complete list\nof available files (= like the artifacts on CI), and then I'd pull\ndown the meson test result file and then decide which other files I\nalso want to pull down (ie stuff relating to failed tests) to analyse.\n(Not that any of that is intractable with the autoconf or handrolled\nperl/MSVC stuff, it's just messier, and hard to get motivated when its\ndays are numbered.)\n\nOne little thing I remembered while looking into this general topic is\nthe noise you get when we crash during pg_regress, which it'd be nice\nto fix:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGL7hxqbadkto7e1FCOLQhuHg%3DwVn_PDZd6fDMbQrrZisA%40mail.gmail.com\n\nAnother topic I'm interested in is how to find useful signals in the\ntiming data. For example, when Nathan and I worked on walreceiver\nwakeup improvements, we didn't notice that we'd caused some tests to\nbecome dramatically slower, because of a pre-existing bug/thinko we\nhadn't noticed. I want a computer to tell me about this stuff.\nThat's somewhat tricky because of all the noise, but hopefully it's\nnot beyond the powers of statistics to notice that a test unexpectedly\ntook a nap for 10s.\n\n\n",
"msg_date": "Mon, 20 Feb 2023 12:28:54 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "18.02.2023 23:09, Andres Freund wrote:\n> Hi,\n>\n> On 2023-02-18 18:00:00 +0300, Alexander Lakhin wrote:\n>> 18.02.2023 04:06, Andres Freund wrote:\n>> Maybe it's just the backend started for the money test has got\n>> the same PID (5948) that the backend for the name test had?\n> I somehow mashed name and money into one test in my head... So forget what I\n> wrote.\n>\n> That doesn't really explain the assertion though.\n> ,,,\n> It's definitely very aggressive in reusing pids - and it seems to\n> intentionally do work to keep pids small. I wonder if it'd be worth trying to\n> exercise this path aggressively by configuring a very low max pid on linux, in\n> an EXEC_BACKEND build.\nFor information, I performed the simple analysis of\nthe postmaster.log after ordinary `vcregress check`:\ngrep -o -E 'client backend\\[[0-9]+\\] pg_regress/\\w+' \n..\\..\\..\\src\\test\\regress\\log\\postmaster.log | sort | uniq | wc -l\ngrep -o -E 'client backend\\[[0-9]+\\] pg_regress' \n..\\..\\..\\src\\test\\regress\\log\\postmaster.log | sort | uniq | wc -l\nand got the following numbers:\n...\niteration 88\n196\n174\niteration 89\n196\n170\niteration 90\n196\n176\niteration 91\n196\n175\n\nThen I performed some more experiments and came to the conclusion\nthat `vcregress check` is not very suitable for catching duplicate pids.\nSo I've wrote a simple TAP test and caught the condition sought:\nt/099_check_pids.pl .. # 0\n# 1\n# 2\n# 3\n# 4\n# 5\n# 6\n# Thread 15 failed:\n# Got two equal pids in a row: 3552 on iteration 30\n# 1\n\nt/099_check_pids.pl .. 
1/1 # Failed test at t/099_check_pids.pl line 107.\n...\n\nserver log contains:\n...\n2023-02-20 06:33:16.142 PST|[unknown]|[unknown]|5704|63f384ac.1648|LOG: \nconnection received: host=127.0.0.1 port=55134\n2023-02-20 06:33:16.142 PST|[unknown]|[unknown]|3552|63f384ac.de0|LOG: \nconnection received: host=127.0.0.1 port=55138\n2023-02-20 06:33:16.144 PST|postgres|postgres|5704|63f384ac.1648|LOG: \nconnection authorized: user=postgres database=postgres \napplication_name=099_check_pids.pl\n2023-02-20 06:33:16.144 PST|postgres|postgres|3552|63f384ac.de0|LOG: \nconnection authorized: user=postgres database=postgres \napplication_name=099_check_pids.pl\n2023-02-20 06:33:16.147 PST|postgres|postgres|5704|63f384ac.1648|LOG: \nstatement: SELECT pg_backend_pid()\n2023-02-20 06:33:16.147 PST|postgres|postgres|3552|63f384ac.de0|LOG: \nstatement: SELECT pg_backend_pid()\n2023-02-20 06:33:16.147 PST|postgres|postgres|5704|63f384ac.1648|LOG: \ndisconnection: session time: 0:00:00.009 user=postgres database=postgres \nhost=127.0.0.1 port=55134\n2023-02-20 06:33:16.147 PST|postgres|postgres|3552|63f384ac.de0|LOG: \ndisconnection: session time: 0:00:00.008 user=postgres database=postgres \nhost=127.0.0.1 port=55138\n2023-02-20 06:33:16.158 PST|[unknown]|[unknown]|1672|63f384ac.688|LOG: \nconnection received: host=127.0.0.1 port=55139\n\n...\n2023-02-20 06:33:16.485 PST|postgres|postgres|2748|63f384ac.abc|LOG: \nconnection authorized: user=postgres database=postgres \napplication_name=099_check_pids.pl\n2023-02-20 06:33:16.486 PST|[unknown]|[unknown]|3552|63f384ac.de0|LOG: \nconnection received: host=127.0.0.1 port=55164\n2023-02-20 06:33:16.487 PST|postgres|postgres|3552|63f384ac.de0|LOG: \nconnection authorized: user=postgres database=postgres \napplication_name=099_check_pids.pl\n2023-02-20 06:33:16.488 PST|postgres|postgres|2748|63f384ac.abc|LOG: \nstatement: SELECT pg_backend_pid()\n2023-02-20 06:33:16.489 PST|postgres|postgres|2748|63f384ac.abc|LOG: \ndisconnection: session 
time: 0:00:00.007 user=postgres database=postgres \nhost=127.0.0.1 port=55163\n2023-02-20 06:33:16.490 PST|postgres|postgres|3552|63f384ac.de0|LOG: \nstatement: SELECT pg_backend_pid()\n2023-02-20 06:33:16.491 PST|postgres|postgres|3552|63f384ac.de0|LOG: \ndisconnection: session time: 0:00:00.008 user=postgres database=postgres \nhost=127.0.0.1 port=55164\n2023-02-20 06:33:16.503 PST|[unknown]|[unknown]|244|63f384ac.f4|LOG: \nconnection received: host=127.0.0.1 port=55162\n...\n(note that even session IDs are the same)\n\nThough I've got no that assert yet, thus maybe as you said, the pid \nduplication\nis not the (only) condition that triggered it (or maybe something is \nwrong on my side).\n\nInterestingly, but I can get the expected duplicates only if I start the \nVS prompt\nand run the test immediately after a logon (or VM reboot).\nAfter some activity (or may be some time), the test can run for 30 minutes\nwithout success (maybe it depends on OS cache...).\n\nAlso with a similar TAP test I discovered that on Linux (Debian 11 \n32-bit) pids\nare generated sequentially:\n# pid: 329\n# pid: 331\n# pid: 333\n# pid: 336\n# pid: 338\n# pid: 340\n# pid: 343\n# pid: 345\n# pid: 349\n# pid: 353\nSo with an extra small max_pid (<360) I just get\n\"could not fork new process for connection: Resource temporarily \nunavailable'\"\nbefore two processes, that started one after another, get the same pid.\n\nBut on Windows the sequence looks random:\n# pid: 1736\n# pid: 8168\n# pid: 3764\n# pid: 7180\n# pid: 3724\n# pid: 5372\n# pid: 1588\n# pid: 4188\n# pid: 1404\n# pid: 5280\n\nBest regards,\nAlexander",
"msg_date": "Mon, 20 Feb 2023 18:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "And again:\n\nTRAP: failed Assert(\"PMSignalState->PMChildFlags[slot] ==\nPM_CHILD_ASSIGNED\"), File: \"../src/backend/storage/ipc/pmsigna...\n\nhttps://cirrus-ci.com/task/6558324615806976\nhttps://api.cirrus-ci.com/v1/artifact/task/6558324615806976/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/002_pg_upgrade_old_node.log\nhttps://api.cirrus-ci.com/v1/artifact/task/6558324615806976/crashlog/crashlog-postgres.exe_0974_2023-03-11_13-57-27-982.txt\n\n\n",
"msg_date": "Sun, 12 Mar 2023 20:18:30 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "12.03.2023 10:18, Thomas Munro wrote:\n> And again:\n>\n> TRAP: failed Assert(\"PMSignalState->PMChildFlags[slot] ==\n> PM_CHILD_ASSIGNED\"), File: \"../src/backend/storage/ipc/pmsigna...\n>\n> https://cirrus-ci.com/task/6558324615806976\n> https://api.cirrus-ci.com/v1/artifact/task/6558324615806976/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/002_pg_upgrade_old_node.log\n> https://api.cirrus-ci.com/v1/artifact/task/6558324615806976/crashlog/crashlog-postgres.exe_0974_2023-03-11_13-57-27-982.txt\n\nHere we have duplicate PIDs too:\n...\n2023-03-11 13:57:21.277 GMT [2152][client backend] [pg_regress/union][:0] LOG: \ndisconnection: session time: 0:00:00.268 user=SYSTEM database=regression \nhost=[local]\n...\n2023-03-11 13:57:22.320 GMT [4340][client backend] [pg_regress/join][8/947:0] \nLOG: statement: set enable_hashjoin to 0;\nTRAP: failed Assert(\"PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED\"), \nFile: \"../src/backend/storage/ipc/pmsignal.c\", Line: 329, PID: 2152\n\nAnd I see the following code in postmaster.c:\nCleanupBackend(int pid,\n int exitstatus) /* child's exit status. */\n{\n...\n dlist_foreach_modify(iter, &BackendList)\n {\n Backend *bp = dlist_container(Backend, elem, iter.cur);\n if (bp->pid == pid)\n {\n if (!bp->dead_end)\n {\n if (!ReleasePostmasterChildSlot(bp->child_slot))\n...\n\nso if a backend with the same PID happened to start (but not reached\nInitProcess() yet), when CleanBackend() is called to clean after a just\nfinished backend, the slot of the starting one will be released.\n\nI am yet to construct a reproduction of the case, but it seems to me that\nthe race condition is not impossible here.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 13 Mar 2023 17:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-13 17:00:00 +0300, Alexander Lakhin wrote:\n> 12.03.2023 10:18, Thomas Munro wrote:\n> > And again:\n> >\n> > TRAP: failed Assert(\"PMSignalState->PMChildFlags[slot] ==\n> > PM_CHILD_ASSIGNED\"), File: \"../src/backend/storage/ipc/pmsigna...\n> >\n> > https://cirrus-ci.com/task/6558324615806976\n> > https://api.cirrus-ci.com/v1/artifact/task/6558324615806976/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/002_pg_upgrade_old_node.log\n> > https://api.cirrus-ci.com/v1/artifact/task/6558324615806976/crashlog/crashlog-postgres.exe_0974_2023-03-11_13-57-27-982.txt\n>\n> Here we have duplicate PIDs too:\n> ...\n> 2023-03-11 13:57:21.277 GMT [2152][client backend] [pg_regress/union][:0]\n> LOG:� disconnection: session time: 0:00:00.268 user=SYSTEM\n> database=regression host=[local]\n> ...\n> 2023-03-11 13:57:22.320 GMT [4340][client backend]\n> [pg_regress/join][8/947:0] LOG:� statement: set enable_hashjoin to 0;\n> TRAP: failed Assert(\"PMSignalState->PMChildFlags[slot] ==\n> PM_CHILD_ASSIGNED\"), File: \"../src/backend/storage/ipc/pmsignal.c\", Line:\n> 329, PID: 2152\n>\n> And I see the following code in postmaster.c:\n> CleanupBackend(int pid,\n> �� ���� ��� �� int exitstatus)��� /* child's exit status. */\n> {\n> ...\n> ��� dlist_foreach_modify(iter, &BackendList)\n> �� �{\n> �� ���� Backend��� *bp = dlist_container(Backend, elem, iter.cur);\n> �� ���� if (bp->pid == pid)\n> �� ���� {\n> �� ���� ��� if (!bp->dead_end)\n> �� ���� ��� {\n> �� ���� ��� ��� if (!ReleasePostmasterChildSlot(bp->child_slot))\n> ...\n>\n> so if a backend with the same PID happened to start (but not reached\n> InitProcess() yet), when CleanBackend() is called to clean after a just\n> finished backend, the slot of the starting one will be released.\n\nOn unix that ought to be unreachable, because we haven't yet reaped the dead\nprocess. But I suspect that there currently is no such guarantee on\nwindows. 
Which seems broken.\n\nOn windows it looks like pids can't be reused as long as there are handles for\nthe process. Unfortunately, we close the handle for the process in\npgwin32_deadchild_callback(), which runs in a separate thread, so the pid can\nbe reused before we get to waitpid(). And thus it can happen while we start\nnew children.\n\nI think we need to remove the CloseHandle() from\npgwin32_deadchild_callback(). Likely pgwin32_deadchild_callback() shouldn't do\nanything other than\nUnregisterWaitEx();PostQueuedCompletionStatus(key=childinfo),\npg_queue_signal(), with everything else moved to waitpid().\n\n\n> I am yet to construct a reproduction of the case, but it seems to me that\n> the race condition is not impossible here.\n\nI suspect the issue could be made much more likely by adding a sleep before\nthe pg_queue_signal(SIGCHLD) in pgwin32_deadchild_callback().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Mar 2023 15:20:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 11:20 AM Andres Freund <andres@anarazel.de> wrote:\n> On windows it looks like pids can't be reused as long as there are handles for\n> the process. Unfortunately, we close the handle for the process in\n> pgwin32_deadchild_callback(), which runs in a separate thread, so the pid can\n> be reused before we get to waitpid(). And thus it can happen while we start\n> new children.\n\nAhhh. Right, of course. The handle thing makes total sense now that\nyou point it out, and although I couldn't find it in the fine manual,\na higher authority has it in black and white[1]. Even without knowing\nwhich of those calls is releasing the process table entry, we're doing\nall of them on the wrong side of that IOCP. Alright, here is a patch\nto schlep most of that code over into waitpid(), where it belongs.\n\n[1] https://devblogs.microsoft.com/oldnewthing/20110107-00/?p=11803",
"msg_date": "Tue, 14 Mar 2023 13:01:28 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 01:01:28PM +1300, Thomas Munro wrote:\n> Ahhh. Right, of course. The handle thing makes total sense now that\n> you point it out, and although I couldn't find it in the fine manual,\n> a higher authority has it in black and white[1]. Even without knowing\n> which of those calls is releasing the process table entry, we're doing\n> all of them on the wrong side of that IOCP. Alright, here is a patch\n> to schlep most of that code over into waitpid(), where it belongs.\n> \n> [1] https://devblogs.microsoft.com/oldnewthing/20110107-00/?p=11803\n\nI have a small question here..\n\nThe emulation of waitpid() for WIN32 is now in postmaster.c. Could it\nmake sense for some of the frontend code to be able to rely on that,\nas well?\n--\nMichael",
"msg_date": "Tue, 14 Mar 2023 09:29:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "On 2023-03-14 09:29:56 +0900, Michael Paquier wrote:\n> The emulation of waitpid() for WIN32 is now in postmaster.c. Could it\n> make sense for some of the frontend code to be able to rely on that,\n> as well?\n\nPlease not as part of this bugfix. It's intricately tied to postmaster.c\nspecific code, as it is.\n\n\n",
"msg_date": "Mon, 13 Mar 2023 17:41:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "Here's a better version with more explicit comments about some\ndetails. It passes on CI. Planning to push this soon.",
"msg_date": "Wed, 15 Mar 2023 10:33:05 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "Hi,\n14.03.2023 01:20, Andres Freund wrote:\n>> I am yet to construct a reproduction of the case, but it seems to me that\n>> the race condition is not impossible here.\n> I suspect the issue could be made much more likely by adding a sleep before\n> the pg_queue_signal(SIGCHLD) in pgwin32_deadchild_callback().\n\nThanks for the tip! With pg_usleep(50000) added there, I can reproduce the issue\nreliably during a minute on average with the 099_check_pids.pl I posted before:\n...\n2023-03-15 07:26:14.301 GMT|[unknown]|[unknown]|3748|64117316.ea4|LOG: \nconnection received: host=127.0.0.1 port=49902\n2023-03-15 07:26:14.302 GMT|postgres|postgres|3748|64117316.ea4|LOG: connection \nauthorized: user=postgres database=postgres application_name=099_check-pids.pl\n2023-03-15 07:26:14.304 GMT|postgres|postgres|3748|64117316.ea4|LOG: statement: \nSELECT pg_backend_pid()\n2023-03-15 07:26:14.305 GMT|postgres|postgres|3748|64117316.ea4|LOG: \ndisconnection: session time: 0:00:00.005 user=postgres database=postgres \nhost=127.0.0.1 port=49902\n...\n2023-03-15 07:26:25.592 GMT|[unknown]|[unknown]|3748|64117321.ea4|LOG: \nconnection received: host=127.0.0.1 port=50407\nTRAP: failed Assert(\"PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED\"), \nFile: \"C:\\src\\postgresql\\src\\backend\\storage\\ipc\\pmsignal.c\", Line: 329, PID: 3748\nabort() has been called2023-03-15 07:26:25.608 \nGMT|[unknown]|[unknown]|3524|64117321.dc4|LOG: connection received: \nhost=127.0.0.1 port=50408\n\nThe result depends on some OS conditions (it reproduced pretty well\nimmediately after VM reboot), but it's enough to test the patch proposed.\nAnd I can confirm that the Assert is not observed anymore (with the sleep\nadded after CloseHandle(childinfo->procHandle)).\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 15 Mar 2023 11:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 9:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> The result depends on some OS conditions (it reproduced pretty well\n> immediately after VM reboot), but it's enough to test the patch proposed.\n> And I can confirm that the Assert is not observed anymore (with the sleep\n> added after CloseHandle(childinfo->procHandle)).\n\nThanks for confirming. Pushed earlier today.\n\nDo you know how it fails in non-assert builds, without the fix?\n\n\n",
"msg_date": "Wed, 15 Mar 2023 21:43:23 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "Hi,\n15.03.2023 11:43, Thomas Munro wrote:\n> On Wed, Mar 15, 2023 at 9:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>> The result depends on some OS conditions (it reproduced pretty well\n>> immediately after VM reboot), but it's enough to test the patch proposed.\n>> And I can confirm that the Assert is not observed anymore (with the sleep\n>> added after CloseHandle(childinfo->procHandle)).\n> Thanks for confirming. Pushed earlier today.\n>\n> Do you know how it fails in non-assert builds, without the fix?\n\nI've replaced the Assert with 'if (!...) elog(...)' and got (with a non-assert \nbuild):\nt/099_check-pids.pl .. ok\nAll tests successful.\nFiles=1, Tests=1, 67 wallclock secs ( 0.03 usr + 0.00 sys = 0.03 CPU)\nResult: PASS\n2023-03-15 12:22:46.923 GMT|postgres|postgres|4484|6411b896.1184|LOG: \n!(PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED)\n2023-03-15 12:22:47.806 GMT|postgres|postgres|4180|6411b897.1054|LOG: \n!(PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED)\n2023-03-15 12:23:06.313 GMT|postgres|postgres|4116|6411b8aa.1014|LOG: \n!(PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED)\n2023-03-15 12:23:06.374 GMT|postgres|postgres|4740|6411b8aa.1284|LOG: \n!(PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED)\n2023-03-15 12:23:25.967 GMT|postgres|postgres|6812|6411b8bd.1a9c|LOG: \n!(PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED)\n\nSo at least with my test script that doesn't lead to a crash or something.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 15 Mar 2023 15:59:59 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 2:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> 15.03.2023 11:43, Thomas Munro wrote:\n> > Do you know how it fails in non-assert builds, without the fix?\n\n> So at least with my test script that doesn't lead to a crash or something.\n\nThanks. We were wondering if the retry mechanism might somehow be\nhiding this in non-assert builds, but, looking more closely, that is\ntied specifically to the memory reservation operation.\n\nI noticed that d41a178b missed a comment explaining why we used\nmalloc() instead of palloc(), but that isn't true anymore, so here's a\nsmall patch to clean that up.",
"msg_date": "Thu, 16 Mar 2023 10:13:56 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
},
{
"msg_contents": "16.03.2023 00:13, Thomas Munro wrote:\n> Thanks. We were wondering if the retry mechanism might somehow be\n> hiding this in non-assert builds, but, looking more closely, that is\n> tied specifically to the memory reservation operation.\n\nAs to hiding, when analyzing the Assert issue, I was wondered if \nPMSignalShmemInit() can hide an error when calling ShmemInitStruct() for backends?\nIf I understand correctly, backends get PMSignalState through backend_variables, \nso maybe ShmemInitStruct() should not be executed for backends at all?\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 16 Mar 2023 06:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows CI failing PMSignalState->PMChildFlags[slot] ==\n PM_CHILD_ASSIGNED"
}
] |
[
{
"msg_contents": "Hello\n\nIn PG15, ecpg japanese translation are different from other branches.\nIs there a reason for this?\nIf not, I think it would be better to make it the same as the other branch like the\nattached patch.\n\nregards,\nsho kato",
"msg_date": "Wed, 8 Feb 2023 02:55:21 +0000",
"msg_from": "\"Sho Kato (Fujitsu)\" <kato-sho@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Difference of ecpg japanese translation in PG15"
}
] |
[
{
"msg_contents": "Hi all,\n(Adding Bertrand in CC.)\n\n$subject is a follow-up of the automation of query jumbling for\nutilities and DDLs, and attached is a set of patches that apply\nnormalization to DDL queries across the board, for all utilities.\n\nThis relies on tracking the location of A_Const nodes while removing\nfrom the query jumbling computation the values attached to the node,\nas as utility queries can show be stored as normalized in\npg_stat_statements with some $N parameters. The main case behind\ndoing that is of course monitoring, where we have seen some user\ninstances willing to get more information but see pg_stat_statements\nas a bottleneck because the query ID of utility queries are based on\nthe computation of their string, and is value-sensitive. That's the\ncase mentioned by Bertrand Drouvot for CALL and SET where workloads\nfull of these easily bloat pg_stat_statements, where we concluded\nabout more automation in this area (so here it is):\nhttps://www.postgresql.org/message-id/36e5bffe-e989-194f-85c8-06e7bc88e6f7%40amazon.com\n\nFor example, this makes possible the following grouping:\n- CALL func(1,2); CALL func(1,3); => CALL func($1,$2)\n- EXPLAIN SELECT 1; EXPLAIN SELECT 1; => EXPLAIN SELECT $1;\n- CREATE MATERIALIZED VIEW aam AS SELECT 1; becomes \"CREATE\nMATERIALIZED VIEW aam AS SELECT $1\".\n\nQuery jumbling for DDLs and utilities happens now automatically, still\nare not represented correctly in pg_stat_statements (one bit of\ndocumentation I missed previously refers to the fact that these depend\non their query strings, which is not the case yet).\n\nBy the way, while looking at all that, I have really underestimated\nthe use of Const nodes in utilities, as some queries can finish with\nthe same query ID even if different values are stored in a query,\nstill don't show up as normalized in pg_stat_statements, so the\ncurrent state of HEAD is not good, though you would need to use the\nsame object name to a conflict for most of 
them. So that's my mistake\nhere with 3db72eb. If folks think that we'd better have a revert of\nthis automated query jumbling facility based on this argument, that\nwould be fine for me, as well. The main case I have noticed in this\narea is EXPLAIN, by the way. Note that it is actually easy to move to\nthe ~15 approach of having a query ID depending on the Const node\nvalues for DDLs, by having a custom implementation in\nqueryjumblefuncs.c for Const nodes, where we apply the constant value\nand don't store a location for normalization if a query has a utility\nonce this information is stored in a JumbleState.\n\nThis rule influences various DDLs, as well, once it gets applied\nacross the board, and it's been some work to identify all of them, but\nI think that I have caught them all as the regression database offers\nall the possible patterns:\n- CREATE VIEW, CTAS, CREATE MATERIALIZED VIEW which have Const nodes\ndepending on their attached queries, for various clauses.\n- ALTER TABLE/INDEX/FOREIGN with DEFAULT, SET components.\n- CREATE TABLE with partition bounds.\n- BEGIN and ABORT, with transaction commands getting grouped\ntogether.\n\nThe attached patch set includes a set of regression tests for\npg_stat_statements for *all* the utility queries that have either\nConst or A_Const nodes, so as one can see the effect that all this\nstuff has. This is based on a diff of the contents of\npg_stat_statements on the regression database once all these\nnormalization rules are applied.\n\nCompilation of a Const can also be made depending on the type node.\nHowever, all that makes no sense if applying the same normalization\nrules to all the queries across the board, because all the queries\nwould follow the same rules. That's the critical bit IMO. From what\nI get, avoiding the bloat of pg_stat_statements for all utilities is something\nthat would be helpful for all such queries, still different things\ncould be done on a per-node basis. 
Perhaps this is too aggressive as\nit is and people don't like it, though, so feedback is welcome. I'd\nlike to think that maximizing grouping is nice though, because it\nleads to no actual loss of information on the workload pattern for the\nqueries involved, AFAIU. This sentence may be overoptimistic.\n\nSo, attached is a patch set that does the following:\n- 0001 is a refactoring of the regression tests of\npg_stat_statements by splitting a bit the tests. I bumped into that\nwhile getting confused at how the tests are now when it comes to the\nhandling of utilities and track_planning, where these tests silently\nrely on other parts of the same file with different GUC settings.\nThis refactoring is useful on its own, IMO, and the tests show the\nsame output as previously.\n- 0002 is the addition of tests in pg_stat_statements for all the DDL\nand utility patterns that make use of Const and A_Const nodes. Even\nif query jumbling of utilities is done through their text string or\ntheir nodes, this is also useful.\n- 0003 is the code of the feature, which switches pg_stat_statements to\nproperly normalize utility queries, with a modification to A_Const so\nas normalization can be applied to it. With the generation of the\ncode for query jumbling being automated based on the node definitions,\nthis is straight-forward as a code change, but the changes are\nbasically impossible to track without all the patterns tracked by\n0002.\n\nThoughts and comments are welcome. 0001 and 0002 are useful on their\nown to keep track of utilities that use Const and A_Const after going\nthrough the query jumbling, even if an approach based on query string\nor the automated query jumbling for utilities is used (the query\nstring approach loses a bit of its value). I'll add that to the next commit\nfest.\n\nThanks,\n--\nMichael",
"msg_date": "Wed, 8 Feb 2023 12:05:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Normalization of utility queries in pg_stat_statements"
},
{
"msg_contents": "On Wed, Feb 08, 2023 at 12:05:24PM +0900, Michael Paquier wrote:\n> Thoughts and comments are welcome. 0001 and 0002 are useful on their\n> own to keep track of utilities that use Const and A_Const after going\n> through the query jumbling, even if an approach based on query string\n> or the automated query jumbling for utilities is used (the query\n> string approach a bit its value). I'll add that to the next commit\n> fest.\n\nWhile wondering about this stuff about the last few days and\ndiscussing with bertrand, I have changed my mind on the point that\nthere is no need to be that aggressive yet with the normalization of\nthe A_Const nodes, because the query string normalization of\npg_stat_statements is not prepared yet to handle cases where a A_Const\nvalue uses a non-quoted value with whitespaces. The two cases where I\nsaw an impact is on the commands that can define an isolation level:\nSET TRANSACTION and BEGIN.\n\nFor example, applying normalization to A_Const nodes does the\nfollowing as of HEAD:\n1) BEGIN TRANSACTION READ ONLY, READ WRITE, DEFERRABLE, NOT DEFERRABLE;\nBEGIN TRANSACTION $1 ONLY, $2 WRITE, $3, $4 DEFERRABLE\n2) SET TRANSACTION ISOLATION LEVEL READ COMMITTED;\nSET TRANSACTION ISOLATION LEVEL $1 COMMITTED\n\nOn top of that, specifying a different isolation level may cause these\ncommands to be grouped, which is not really cool. All that could be\ndone incrementally later on, in 17~ or later depending on the\nadjustments that make sense.\n\nAttached is an updated patch set. 0003 is basically the same as v3,\nthat I have kept around for clarity in case one wants to see the\neffect of a A_Const normalization to all the related commands, though\nI am not proposing that for an upstream integration. 0002 has been\ncompleted with a couple of commands to track all the commands with\nA_Const, so as we never lose sight of what happens. 
0004 is what I\nthink could be done for PG16, where normalization affects only Const.\nAt the end of the day, this reflects the following commands that use\nConst nodes because they directly use queries, so the same rules as\nSELECT and DMLs apply to them:\n- DECLARE\n- EXPLAIN\n- CREATE MATERIALIZED VIEW\n- CTAS, SELECT INTO\n\nComments and thoughts welcome.\nThanks,\n--\nMichael",
"msg_date": "Thu, 16 Feb 2023 09:34:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Normalization of utility queries in pg_stat_statements"
},
{
"msg_contents": "Hi,\n\nOn 2/16/23 1:34 AM, Michael Paquier wrote:\n> While wondering about this stuff about the last few days and\n> discussing with bertrand, I have changed my mind on the point that\n> there is no need to be that aggressive yet with the normalization of\n> the A_Const nodes, because the query string normalization of\n> pg_stat_statements is not prepared yet to handle cases where a A_Const\n> value uses a non-quoted value with whitespaces. The two cases where I\n> saw an impact is on the commands that can define an isolation level:\n> SET TRANSACTION and BEGIN.\n> \n> For example, applying normalization to A_Const nodes does the\n> following as of HEAD:\n> 1) BEGIN TRANSACTION READ ONLY, READ WRITE, DEFERRABLE, NOT DEFERRABLE;\n> BEGIN TRANSACTION $1 ONLY, $2 WRITE, $3, $4 DEFERRABLE\n> 2) SET TRANSACTION ISOLATION LEVEL READ COMMITTED;\n> SET TRANSACTION ISOLATION LEVEL $1 COMMITTED\n> \n> On top of that, specifying a different isolation level may cause these\n> commands to be grouped, which is not really cool. All that could be\n> done incrementally later on, in 17~ or later depending on the\n> adjustments that make sense.\n> \n\nThanks for those patches!\n\nYeah, agree about the proposed approach.\n\n\n> 0002 has been\n> completed with a couple of commands to track all the commands with\n> A_Const, so as we never lose sight of what happens. 0004 is what I\n> think could be done for PG16, where normalization affects only Const.\n> At the end of the day, this reflects the following commands that use\n> Const nodes because they use directly queries, so the same rules as\n> SELECT and DMLs apply to them:\n> - DECLARE\n> - EXPLAIN\n> - CREATE MATERIALIZED VIEW\n> - CTAS, SELECT INTO\n> \n\n0001:\n\nI like the idea of splitting the existing tests in dedicated files.\n\nWhat do you think about removing:\n\n\"\nSET pg_stat_statements.track_utility = FALSE;\nSET pg_stat_statements.track_planning = TRUE;\n\"\n\nIn the new pg_stat_statements.sql? 
That way pg_stat_statements.sql would always behave\nwith default values for those (currently we are setting both of them as non default).\n\nThen, with the default values in place, if we feel that some tests are missing we could add them in\nutility.sql or planning.sql accordingly.\n\n0002:\n\nProduces:\nv2-0002-Add-more-test-for-utility-queries-in-pg_stat_stat.patch:834: trailing whitespace.\nCREATE VIEW view_stats_1 AS\nv2-0002-Add-more-test-for-utility-queries-in-pg_stat_stat.patch:838: trailing whitespace.\nCREATE VIEW view_stats_1 AS\nwarning: 2 lines add whitespace errors.\n\n+-- SET TRANSACTION ISOLATION\n+BEGIN;\n+SET TRANSACTION ISOLATION LEVEL READ COMMITTED;\n+SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;\n+SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n\nWhat about adding things like \"SET SESSION CHARACTERISTICS AS TRANSACTION...\" too?\n\n0003 and 0004:\nThanks for keeping 0003 that's useful to see the impact of A_Const normalization.\n\nLooking at the diff they produce, I also do think that 0004 is what\ncould be done for PG16.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Feb 2023 10:55:32 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Normalization of utility queries in pg_stat_statements"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 10:55:32AM +0100, Drouvot, Bertrand wrote:\n> In the new pg_stat_statements.sql? That way pg_stat_statements.sql would always behave\n> with default values for those (currently we are setting both of them as non default).\n> \n> Then, with the default values in place, if we feel that some tests\n> are missing we could add them in > utility.sql or planning.sql\n> accordingly.\n\nI am not sure about this part, TBH, so I have left these as they are.\n\nAnyway, while having a second look at that, I have noticed that it is\npossible to extract as an independent piece all the tests related to\nlevel tracking. Things are worse than I thought initially, actually,\nbecause we had test scenarios mixing planning and level tracking, but\nthe tests don't care about measuring plans at all, see around FETCH\nFORWARD, meaning that queries on the table pg_stat_statements have\njust been copy-pasted around for the last few years. There were more\ntests that used \"test\" for a table name ;)\n\nI have been pondering about this part, and the tracking matters for DO\nblocks and PL functions, so I have moved all these cases into a new,\nseparate file. There is a bit more that can be done, for WAL tracking\nand roles near the end of pg_stat_statements.sql, but I have left that\nout for now. I have checked the output of the tests before and after\nthe refactoring for quite a bit of time, and the outputs match so\nthere is no loss of coverage.\n\n0001 looks quite committable at this stage, and that's independent on\nthe rest. 
At the end this patch creates four new test files that are\nextended in the next patches: utility, planning, track and cleanup.\n\n> 0002:\n> \n> Produces:\n> v2-0002-Add-more-test-for-utility-queries-in-pg_stat_stat.patch:834: trailing whitespace.\n> CREATE VIEW view_stats_1 AS\n> v2-0002-Add-more-test-for-utility-queries-in-pg_stat_stat.patch:838: trailing whitespace.\n> CREATE VIEW view_stats_1 AS\n> warning: 2 lines add whitespace errors.\n\nThanks, fixed.\n\n> +-- SET TRANSACTION ISOLATION\n> +BEGIN;\n> +SET TRANSACTION ISOLATION LEVEL READ COMMITTED;\n> +SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;\n> +SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n> \n> What about adding things like \"SET SESSION CHARACTERISTICS AS\n> TRANSACTION...\" too?\n\nThat's a good idea. It is again one of these fancy cases, better to\nkeep a track of them in the long-term..\n\n> 0003 and 0004:\n> Thanks for keeping 0003 that's useful to see the impact of A_Const normalization.\n> \n> Looking at the diff they produce, I also do think that 0004 is what\n> could be done for PG16.\n\nI am wondering if others have an opinion to share about that, but,\nyes, 0004 seems enough to begin with. We could always study more\nnormalization areas in future releases, taking it slowly.\n--\nMichael",
"msg_date": "Fri, 17 Feb 2023 11:35:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Normalization of utility queries in pg_stat_statements"
},
{
"msg_contents": "Hi,\n\nOn 2/17/23 3:35 AM, Michael Paquier wrote:\n> On Thu, Feb 16, 2023 at 10:55:32AM +0100, Drouvot, Bertrand wrote:\n>> In the new pg_stat_statements.sql? That way pg_stat_statements.sql would always behave\n>> with default values for those (currently we are setting both of them as non default).\n>>\n>> Then, with the default values in place, if we feel that some tests\n>> are missing we could add them in > utility.sql or planning.sql\n>> accordingly.\n> \n> I am not sure about this part, TBH, so I have left these as they are.\n> \n> Anyway, while having a second look at that, I have noticed that it is\n> possible to extract as an independent piece all the tests related to\n> level tracking. Things are worse than I thought initially, actually,\n> because we had test scenarios mixing planning and level tracking, but\n> the tests don't care about measuring plans at all, see around FETCH\n> FORWARD, meaning that queries on the table pg_stat_statements have\n> just been copy-pasted around for the last few years. There were more\n> tests that used \"test\" for a table name ;)\n> \n> I have been pondering about this part, and the tracking matters for DO\n> blocks and PL functions, so I have moved all these cases into a new,\n> separate file. There is a bit more that can be done, for WAL tracking\n> and roles near the end of pg_stat_statements.sql, but I have left that\n> out for now. I have checked the output of the tests before and after\n> the refactoring for quite a bit of time, and the outputs match so\n> there is no loss of coverage.\n> \n> 0001 looks quite committable at this stage, and that's independent on\n> the rest. At the end this patch creates four new test files that are\n> extended in the next patches: utility, planning, track and cleanup.\n> \n\nThanks! 
LGTM.\n\n>> 0002:\n>>\n>> Produces:\n>> v2-0002-Add-more-test-for-utility-queries-in-pg_stat_stat.patch:834: trailing whitespace.\n>> CREATE VIEW view_stats_1 AS\n>> v2-0002-Add-more-test-for-utility-queries-in-pg_stat_stat.patch:838: trailing whitespace.\n>> CREATE VIEW view_stats_1 AS\n>> warning: 2 lines add whitespace errors.\n> \n> Thanks, fixed.\n> \n\nThanks!\n\n>> +-- SET TRANSACTION ISOLATION\n>> +BEGIN;\n>> +SET TRANSACTION ISOLATION LEVEL READ COMMITTED;\n>> +SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;\n>> +SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n>>\n>> What about adding things like \"SET SESSION CHARACTERISTICS AS\n>> TRANSACTION...\" too?\n> \n> That's a good idea. It is again one of these fancy cases, better to\n> keep a track of them in the long-term..\n> \n\nRight.\n\n002 LGTM.\n\n>> 0003 and 0004:\n>> Thanks for keeping 0003 that's useful to see the impact of A_Const normalization.\n>>\n>> Looking at the diff they produce, I also do think that 0004 is what\n>> could be done for PG16.\n> \n> I am wondering if others have an opinion to share about that, but,\n> yes, 0004 seems enough to begin with. We could always study more\n> normalization areas in future releases, taking it slowly.\n\nAgree.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 17 Feb 2023 09:36:27 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Normalization of utility queries in pg_stat_statements"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 09:36:27AM +0100, Drouvot, Bertrand wrote:\n> On 2/17/23 3:35 AM, Michael Paquier wrote:\n>> 0001 looks quite committable at this stage, and that's independent on\n>> the rest. At the end this patch creates four new test files that are\n>> extended in the next patches: utility, planning, track and cleanup.\n> \n> Thanks! LGTM.\n\nThanks. I have applied the set of regression tests in 0001 and 0002.\nNote that I have changed the order of the attributes when querying\npg_stat_statements, to make easier to follow the diffs generated by\nthe normalization. The unaligned mode would be another option, but\nit makes not much sense as long as there are no more than two fields\nwith variable lengths. Some extra notes about that:\n- Should the test for the validation WAL generation metrics be moved\nout? I am not sure that it makes much sense to separate it as it has\na short purpose.\n- Same issue with user activity, which creates a few roles and makes\nsure that their activity is tracked? We don't look at the userid in\nthis case, which does not make much sense to me.\n- Same issue with locking clauses, worth a file of their own?\n\nThe main file is still named pg_stat_statements.sql, perhaps it should\nbe renamed to something more generic, like general.sql? Or perhaps we\ncould just split the main file with a select.sql (with locking\nclauses) and a dml.sql?\n\n>> I am wondering if others have an opinion to share about that, but,\n>> yes, 0004 seems enough to begin with. We could always study more\n>> normalization areas in future releases, taking it slowly.\n> \n> Agree.\n\nThese last ones are staying around for a few more weeks, until the\nmiddle of the next CF, I guess. After all this is done, the final\nchanges are very short, showing the effects of the normalization, as\nof:\n 6 files changed, 45 insertions(+), 35 deletions(-)\n--\nMichael",
"msg_date": "Mon, 20 Feb 2023 11:32:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Normalization of utility queries in pg_stat_statements"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 11:32:23AM +0900, Michael Paquier wrote:\n> These last ones are staying around for a few more weeks, until the\n> middle of the next CF, I guess. After all this is done, the final\n> changes are very short, showing the effects of the normalization, as\n> of:\n> 6 files changed, 45 insertions(+), 35 deletions(-)\n\nWith the patches..\n--\nMichael",
"msg_date": "Mon, 20 Feb 2023 11:34:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Normalization of utility queries in pg_stat_statements"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 11:34:59AM +0900, Michael Paquier wrote:\n> With the patches..\n\nAttached is an updated patch set, where I have done more refactoring\nwork for the regression tests of pg_stat_statements, splitting\npg_stat_statments.sql into the following files:\n- user_activity.sql for the role-level resets.\n- wal.sql for the WAL generation tracking.\n- dml.sql for insert/update/delete/merge and row counts.\n- The main file is renamed to select.sql, as it now only covers SELECT\npatterns.\n\nThere is no change in the code coverage or the patterns tested. And\nwith that, I am rather comfortable with the shape of the regression\ntests moving forward.\n\n0002 and 0003 are equivalent to the previous 0003 and 0004 in v4, that\nswitch pg_stat_statements to apply the normalization to utilities that\nuse Const nodes.\n--\nMichael",
"msg_date": "Wed, 1 Mar 2023 13:47:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Normalization of utility queries in pg_stat_statements"
},
{
"msg_contents": "Hi,\n\nOn 3/1/23 5:47 AM, Michael Paquier wrote:\n> On Mon, Feb 20, 2023 at 11:34:59AM +0900, Michael Paquier wrote:\n>> With the patches..\n> \n> Attached is an updated patch set, where I have done more refactoring\n> work for the regression tests of pg_stat_statements, splitting\n> pg_stat_statments.sql into the following files:\n> - user_activity.sql for the role-level resets.\n> - wal.sql for the WAL generation tracking.\n> - dml.sql for insert/update/delete/merge and row counts.\n> - The main file is renamed to select.sql, as it now only covers SELECT\n> patterns.\n> \n\nThanks!\n\nSplitting even more and removing pg_stat_statements.sql/out does make sense to me,\nso +1 for the patch.\n\nApplying 0001 produces:\n\nApplying: Split more regression tests of pg_stat_statements\n.git/rebase-apply/patch:1735: new blank line at EOF.\n+\n.git/rebase-apply/patch:2264: new blank line at EOF.\n+\nwarning: 2 lines add whitespace errors.\n\n\nNits:\n\n+++ b/contrib/pg_stat_statements/sql/wal.sql\n@@ -0,0 +1,22 @@\n+--\n+-- Validate WAL generation metrics\n+--\n+\n+SET pg_stat_statements.track_utility = FALSE;\n+\n+-- utility \"create table\" should not be shown\n\nThis comment is coming from the previous pg_stat_statements.sql but\nI wonder if it makes sense here as testing utility is not the initial purpose\nof wal.sql.\n\nSame comment for dml.sql:\n\n+-- utility \"create table\" should not be shown\n+CREATE TEMP TABLE pgss_dml_tab (a int, b char(20));\n\nWhat about removing those comments?\n\n> There is no change in the code coverage or the patterns tested.\n\nI had a look (comparing the new .sql files with the old pg_stat_statements.sql content) and I agree.\n\nExcept from the Nits above, 0001 LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 2 Mar 2023 08:12:24 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Normalization of utility queries in pg_stat_statements"
},
{
"msg_contents": "On Thu, Mar 02, 2023 at 08:12:24AM +0100, Drouvot, Bertrand wrote:\n> Applying 0001 produces:\n> \n> Applying: Split more regression tests of pg_stat_statements\n> .git/rebase-apply/patch:1735: new blank line at EOF.\n> +\n> .git/rebase-apply/patch:2264: new blank line at EOF.\n> +\n> warning: 2 lines add whitespace errors.\n\nIndeed, removed.\n\n> What about removing those comments?\n\nRemoving these two as well.\n\n> Except from the Nits above, 0001 LGTM.\n\nThanks for double-checking, applied 0001 to finish this part of the\nwork. I am attaching the remaining bits as of the attached, combined\ninto a single patch. I am going to look at it again at the beginning\nof next week and potentially apply it so as the normalization reflects\nto the reports of pg_stat_statements.\n--\nMichael",
"msg_date": "Fri, 3 Mar 2023 09:37:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Normalization of utility queries in pg_stat_statements"
},
{
"msg_contents": "Hi Michael!\n\nI'm rebasing a patch \"Tracking statements entry timestamp in\npg_stat_statements\" for applying after this patch. I've noted that\ncurrent tests are not quite independent one from another. There is two\nstatements in the end of user_activity.sql test:\n\nDROP ROLE regress_stats_user1;\nDROP ROLE regress_stats_user2;\n\nThose are done after the last pg_stat_statements_reset call in this\ntest file and thus, those are included in checks of wal.out file:\n\n query \n| calls | rows | wal_bytes_generated | wal_records_generated |\nwal_records_ge_rows \n-----------------------------------------------------------------------\n-------+-------+------+---------------------+-----------------------+--\n-------------------\n DELETE FROM pgss_wal_tab WHERE a > $1 \n| 1 | 1 | t | t | t\n DROP ROLE regress_stats_user1 \n| 1 | 0 | t | t | t\n DROP ROLE regress_stats_user2 \n| 1 | 0 | t | t | t\n\nThose statements is not related to any WAL tests. It seems a little bit\nincorrect to me.\n\nAre we need some changes here?\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n",
"msg_date": "Mon, 06 Mar 2023 15:50:55 +0300",
"msg_from": "Andrei Zubkov <zubkov@moonset.ru>",
"msg_from_op": false,
"msg_subject": "Re: Normalization of utility queries in pg_stat_statements"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 03:50:55PM +0300, Andrei Zubkov wrote:\n> Those statements is not related to any WAL tests. It seems a little bit\n> incorrect to me.\n\nThe intention is to have each file run in isolation, so this is\nincorrect as it stands. Thanks for the report!\n--\nMichael",
"msg_date": "Tue, 7 Mar 2023 09:01:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Normalization of utility queries in pg_stat_statements"
},
{
"msg_contents": "On Fri, Mar 03, 2023 at 09:37:27AM +0900, Michael Paquier wrote:\n> Thanks for double-checking, applied 0001 to finish this part of the\n> work. I am attaching the remaining bits as of the attached, combined\n> into a single patch.\n\nDoing so as a single patch was not feeling right as this actually\nfixes issues with the location calculations for the Const node, so I\nhave split that into three commits and finally applied the whole.\n\nAs a bonus, please see attached a patch to apply the normalization to\nCALL statements using the new automated infrastructure. OUT\nparameters can be passed to a procedure, hence I guess that these had\nbetter be silenced as well. This is not aimed at being integrated,\njust for reference.\n--\nMichael",
"msg_date": "Wed, 8 Mar 2023 15:19:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Normalization of utility queries in pg_stat_statements"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 2:19 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Mar 03, 2023 at 09:37:27AM +0900, Michael Paquier wrote:\n> > Thanks for double-checking, applied 0001 to finish this part of the\n> > work. I am attaching the remaining bits as of the attached, combined\n> > into a single patch.\n>\n> Doing so as a single patch was not feeling right as this actually\n> fixes issues with the location calculations for the Const node, so I\n> have split that into three commits and finally applied the whole.\n>\n> As a bonus, please see attached a patch to apply the normalization to\n> CALL statements using the new automated infrastructure. OUT\n> parameters can be passed to a procedure, hence I guess that these had\n> better be silenced as well. This is not aimed at being integrated,\n> just for reference.\n> --\n> Michael\n\nI tested it; everything works fine, with only one corner case:\nset pg_stat_statements.track = 'all';\ndrop table if exists cp_test;\nCREATE TABLE cp_test (a int, b text);\nCREATE or REPLACE PROCEDURE ptest1(x text) LANGUAGE SQL AS $$ INSERT\nINTO cp_test VALUES (1, x); $$;\n\nCREATE or REPLACE PROCEDURE ptest3(y text)\nLANGUAGE SQL\nAS $$\nCALL ptest1(y);\nCALL ptest1($1);\n$$;\nSELECT pg_stat_statements_reset();\n\nCALL ptest3('b');\n\nSELECT calls, toplevel, rows, query FROM pg_stat_statements ORDER BY\nquery COLLATE \"C\";\nreturns:\n calls | toplevel | rows | query\n-------+----------+------+------------------------------------\n 1 | t | 0 | CALL ptest3($1)\n 2 | f | 2 | INSERT INTO cp_test VALUES ($2, x)\n 1 | t | 1 | SELECT pg_stat_statements_reset()\n\nHere, the intermediate CALL part is optimized away. Or should I expect\nCALL ptest1($1) to also show up in pg_stat_statements?\n",
"msg_date": "Wed, 16 Aug 2023 17:11:47 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Normalization of utility queries in pg_stat_statements"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 05:11:47PM +0800, jian he wrote:\n> SELECT calls, toplevel, rows, query FROM pg_stat_statements ORDER BY\n> query COLLATE \"C\";\n> returns:\n> calls | toplevel | rows | query\n> -------+----------+------+------------------------------------\n> 1 | t | 0 | CALL ptest3($1)\n> 2 | f | 2 | INSERT INTO cp_test VALUES ($2, x)\n> 1 | t | 1 | SELECT pg_stat_statements_reset()\n> \n> here, the intermediate CALL part is optimized away. or should I expect\n> CALL ptest1($1) also in pg_stat_statements?\n\nI would have guessed that ptest1() being called as part of ptest3()\nshould show up in the report if you use track = all, as all the nested\nqueries of a function, even if it is pure SQL, ought to show up. Now\nnote that ptest1() not showing up is not a new behavior, ~15 does the\nsame thing by missing it.\n--\nMichael",
"msg_date": "Fri, 18 Aug 2023 08:31:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Normalization of utility queries in pg_stat_statements"
}
] |
[
{
"msg_contents": "\nRight now, ICU locales are not validated:\n\n    initdb ... --locale-provider=icu --icu-locale=anything\n    CREATE COLLATION foo (PROVIDER=icu, LOCALE='anything');\n    CREATE DATABASE anythingdb ICU_LOCALE 'anything';\n\nall succeed.\n\nWe do check that the value is accepted by ICU, but ICU seems to accept\nanything and use some fallback logic. Bogus strings will typically end\nup as the \"root\" locale (spelled \"root\" or \"\").\n\nAt first, I thought this was a bug. The ICU documentation[1] suggests\nthat the fallback logic can result in using the ICU default locale in\nsome cases. The default locale is problematic because it's affected by\nthe environment (LANG, LC_ALL, and strangely LC_MESSAGES; but not\nLC_COLLATE).\n\nFortunately, I didn't find any cases where it actually does fall back\nto the default locale, so I think we're safe, but validation seems wise\nregardless. In different contexts we may want to fail (e.g. initdb\nwith a bogus locale), or warn, issue a notice that we changed the\nstring, or just silently change what the user entered to be in a\nconsistent form. BCP47 [2] seems to be the standard here, and we're\nalready using it when importing the ICU collations.\n\nICU locale validation is not exactly straightforward, though, and I\nsuppose that's why it isn't already done. There's a document[3] that\nexplains canonicalization in terms of \"level 1\" and \"level 2\", and says\nthat uloc_canonicalize() provides level 2 canonicalization, but I am\nnot seeing all of the documented behavior in my tests. For instance,\nthe document says that \"de__PHONEBOOK\" should canonicalize to\n\"de@collation=phonebook\", but instead I see that it remains\n\"de__PHONEBOOK\".
It also says that \"C\" should canonicalize to\n\"en_US_POSIX\", but in my test, it just goes to \"c\".\n\nThe right entry point appears to be uloc_getLanguageTag(), which\ninternally calls uloc_canonicalize, but also converts to BCP47 format,\nand gives the option for strictness. Non-strict mode seems problematic\nbecause for \"de__PHONEBOOK\", it returns a langtag of plain \"de\", which\nis a different actual locale than \"de__PHONEBOOK\". If uloc_canonicalize\nworked as documented, it would have changed it to\n\"de@collation=phonebook\" and the correct language tag \"de-u-co-phonebk\"\nwould be returned, which would find the right collator. I suppose that\nmeans we would need to use strict mode.\n\nAnd then we need to check whether it actually exists; i.e. reject well-\nformed but bogus locales, like \"wx-YZ\". To do that, probably the most\nstraightforward way would be to initialize a UCollator and then query\nit using ucol_getLocaleByType() with ULOC_VALID_LOCALE. If that results\nin the root locale, we could say that it doesn't exist because it\nfailed to find a more suitable match (unless the user explicitly\nrequested the root locale). If it resolves to something else, we could\neither just assume it's fine, or we could try to validate that it\nmatches what we expect in more detail. To be safe, we could double-\ncheck that the resulting BCP 47 locale string loads the same actual\ncollator as what would have been loaded with the original string (also\ncheck attributes?).\n\nThe overall benefit here is that we keep our catalogs consistently\nusing an independent standard format for ICU locale strings, rather\nthan whatever the user specifies.
That makes it less likely that ICU\nneeds to use any fallback logic when trying to open a collator, which\ncould only be bad news.\n\nThoughts?\n\n\n[1] https://unicode-org.github.io/icu/userguide/locale/#fallback\n[2] https://en.wikipedia.org/wiki/IETF_language_tag\n[3]\nhttps://unicode-org.github.io/icu/userguide/locale/#canonicalization\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Tue, 07 Feb 2023 23:59:24 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "ICU locale validation / canonicalization"
},
{
"msg_contents": "On 08.02.23 08:59, Jeff Davis wrote:\n> The overall benefit here is that we keep our catalogs consistently\n> using an independent standard format for ICU locale strings, rather\n> than whatever the user specifies. That makes it less likely that ICU\n> needs to use any fallback logic when trying to open a collator, which\n> could only be bad news.\n\nOne use case is that if a user specifies a locale, say, of 'de-AT', this \nmight canonicalize to 'de' today, but we should still store what the \nuser specified because 1) that documents what the user wanted, and 2) it \nmight not canonicalize to the same thing tomorrow.\n\n\n\n",
"msg_date": "Thu, 9 Feb 2023 15:44:56 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 2:59 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> We do check that the value is accepted by ICU, but ICU seems to accept\n> anything and use some fallback logic. Bogus strings will typically end\n> up as the \"root\" locale (spelled \"root\" or \"\").\n\nI've noticed this, and I think it's really frustrating. There's barely\nany documentation of what strings you're allowed to specify, and the\ndocumentation that does exist is extremely difficult to understand.\nNormally, you could work around that problem to some degree by making\na guess at what you're supposed to be doing and then seeing whether\nthe program accepts it, but here that doesn't work either. It just\naccepts anything you give it and then you have to try to figure out\nwhether the behavior is what you wanted. But there's also no real\ndocumentation of what the behavior of any collation is, so you're\napparently just supposed to magically know what collations exist and\nhow they behave and then you can test whether the string you put in\ngave you the behavior you wanted.\n\nAdding validation and canonicalization wouldn't cure the documentation\nproblems, but it would be a big help. You still wouldn't know what\nstring you were supposed to be passing to ICU, but if you did pass it\na string, you'd find out what it thought that string meant. I think\nthat would be a huge step forward.\n\nUnfortunately, I have no idea whether your specific ideas about how to\nmake that happen are any good or not. But I hope they are, because the\ncurrent situation is pessimal.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 Feb 2023 10:53:38 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Thu, 2023-02-09 at 15:44 +0100, Peter Eisentraut wrote:\n> One use case is that if a user specifies a locale, say, of 'de-AT',\n> this \n> might canonicalize to 'de' today,\n\nCanonicalization should not lose useful information, it should just\nrearrange it, so I don't see a risk here based on what I read and the\nbehavior I saw. In ICU, \"de-AT\" canonicalizes to \"de_AT\" and becomes\nthe language tag \"de-AT\".\n\n> but we should still store what the \n> user specified because 1) that documents what the user wanted, and 2)\n> it \n> might not canonicalize to the same thing tomorrow.\n\nWe don't want to store things with ambiguous interpretations that could\nchange tomorrow; that's a recipe for trouble. That's why most people\nstore timestamps as the offset from some epoch in UTC rather than as\n\"2/9/23\" (Feb 9 or Sept 2? 1923 or 2023?). There are exceptions where\nyou would want to store something like that, but I don't see why they'd\napply in this case, where reinterpretation probably means a corrupted\nindex.\n\nIf the user wants to know how their ad-hoc string was interpreted, they\ncan look at the resulting BCP 47 language tag, and see if it's what\nthey meant. We can try to make this user-friendly by offering a NOTICE,\nWARNING, or helper functions that allow them to explore. We can also\ndouble check that the canonicalized form resolves to the same actual\ncollator to be safe, and maybe even fall back to whatever the user\nspecified if not. I'm open to discuss how strict we want to be and what\nkind of escape hatches we need to offer.\n\nThere is still a risk that the BCP 47 language tag resolves to a\ndifferent specific ICU collator or different collator version tomorrow.\nThat's why we need to be careful about versioning (library versions or\ncollator versions or both), and we've had long discussions about that.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Thu, 09 Feb 2023 13:15:43 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Thu, 2023-02-09 at 10:53 -0500, Robert Haas wrote:\n> Unfortunately, I have no idea whether your specific ideas about how\n> to\n> make that happen are any good or not. But I hope they are, because\n> the\n> current situation is pessimal.\n\nIt feels like BCP 47 is the right catalog representation. We are\nalready using it for the import of initial collations, and it's a\nstandard, and there seems to be good support in ICU.\n\nThere are a couple cases where canonicalization will succeed but\nconversion to a BCP 47 language tag will fail. One is for unsupported\nattributes, like \"en_US@foo=bar\". Another is a bug I found and reported\nhere:\n\nhttps://unicode-org.atlassian.net/browse/ICU-22268\n\nIn both cases, we know that conversion has failed, and we have a choice\nabout how to proceed. We can fail, warn and continue with the user-\nentered representation, or turn off the strictness checking and come up\nwith some BCP 47 tag and see if it resolves to the same collator.\n\nI do like the ICU format locale IDs from a readability standpoint.\n\"en_US@colstrength=primary\" is more meaningful to me than \"en-US-u-ks-\nlevel1\" (the equivalent language tag). And the format is specified[1],\neven though it's not an independent standard. But I think the benefits\nof better validation, an independent standard, and the fact that we're\nalready favoring BCP47 outweigh my subjective opinion.\n\nI also attached a simple test program that I've been using to\nexperiment (not intended for code review).\n\nIt's hard for me to say that I'm sure I'm right. I really just got\ninvolved in this a few months back, and had a few off-list\nconversations with Peter Eisentraut to try to learn more (I believe he\nis aligned with my proposal but I will let him speak for himself).\n\nI should also say that I'm not exactly an expert in languages or\nscripts. 
I assume that ICU and IETF are doing sensible things to\naccommodate the diversity of human language as well as they can (or at\nleast much better than the Postgres project could do on its own).\n\nI'm happy to hear more input or other proposals.\n\n[1]\nhttps://unicode-org.github.io/icu/userguide/locale/#canonicalization\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Thu, 09 Feb 2023 14:09:39 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On 2/9/23 23:09, Jeff Davis wrote:\n> I do like the ICU format locale IDs from a readability standpoint.\n> \"en_US@colstrength=primary\" is more meaningful to me than \"en-US-u-ks-\n> level1\" (the equivalent language tag). And the format is specified[1],\n> even though it's not an independent standard. But I think the benefits\n> of better validation, an independent standard, and the fact that we're\n> already favoring BCP47 outweigh my subjective opinion.\n\nI have the same feeling: one is readable and the other unreadable, but the \nunreadable one is standardized. Hard call.\n\nAnd in general I agree: if we are going to make ICU the default, it needs to \nbe more user-friendly than it is now. Currently there is no nice way to \nunderstand whether you entered the right locale or made a typo in the BCP47 \nsyntax.\n\nAndreas\n",
"msg_date": "Fri, 10 Feb 2023 01:04:57 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Fri, 2023-02-10 at 01:04 +0100, Andreas Karlsson wrote:\n> I have the same feeling one is readable and the other unreadable but\n> the \n> unreadable one is standardized. Hard call.\n> \n> And in general I agree, if we are going to make ICU default it needs\n> to \n> be more user friendly than it is now. Currently there is no nice way\n> to \n> understand if you entered the right locale or made a typo in the\n> BCP47 \n> syntax.\n\nWe will still allow the ICU format locale IDs for input; we would just\nconvert them to BCP47 before storing them in the catalog. And there's\nan inverse function, so it's easy enough to offer a view that shows the\nICU format locale IDs in addition to the BCP 47 tags.\n\nI don't think it's hugely important that we use BCP47; ICU format\nlocale IDs would also make sense. But I do think we should be\nconsistent to simplify things where we can -- collator versioning is\nhard enough without wondering how a user-entered string will be\ninterpreted. And if we're going to be consistent, BCP 47 seems like the\nmost obvious choice.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n",
"msg_date": "Thu, 09 Feb 2023 17:22:47 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On 2/10/23 02:22, Jeff Davis wrote:\n> We will still allow the ICU format locale IDs for input; we would just\n> convert them to BCP47 before storing them in the catalog. And there's\n> an inverse function, so it's easy enough to offer a view that shows the\n> ICU format locale IDs in addition to the BCP 47 tags.\n\nAha, then I misread your mail. Sorry! BCP 47 sounds perfect for storage.\n\nAndreas\n\n\n\n",
"msg_date": "Fri, 10 Feb 2023 03:33:37 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On 09.02.23 22:15, Jeff Davis wrote:\n> On Thu, 2023-02-09 at 15:44 +0100, Peter Eisentraut wrote:\n>> One use case is that if a user specifies a locale, say, of 'de-AT',\n>> this\n>> might canonicalize to 'de' today,\n> Canonicalization should not lose useful information, it should just\n> rearrange it, so I don't see a risk here based on what I read and the\n> behavior I saw. In ICU, \"de-AT\" canonicalizes to \"de_AT\" and becomes\n> the language tag \"de-AT\".\n\nIt turns out that 'de_AT' is actually a distinct collation from 'de' in \nCLDR, so that was not the best example. What behavior do you see for \n'de_CH'?\n\n\n\n",
"msg_date": "Fri, 10 Feb 2023 07:42:31 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Thu, Feb 9, 2023 at 5:09 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I do like the ICU format locale IDs from a readability standpoint.\n> \"en_US@colstrength=primary\" is more meaningful to me than \"en-US-u-ks-\n> level1\" (the equivalent language tag).\n\nSadly, neither of those means a whole lot to me? :-(\n\nHow did you find out that those are equivalent?\n\n> And the format is specified[1],\n> even though it's not an independent standard. But I think the benefits\n> of better validation, an independent standard, and the fact that we're\n> already favoring BCP47 outweigh my subjective opinion.\n\nSee, I'm confused, because that link says \"If a keyword list is\npresent it must be preceded by an at-sign\" which makes it sound like\nit is talking about stuff like en_US@colstrength=primary rather than\nstuff like en-US-u-ks-level1. The examples are all that way too, like\nit gives examples like en_IE@currency=IEP and\nfr@collation=phonebook;calendar=islamic-civil.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 10 Feb 2023 09:43:57 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Fri, 2023-02-10 at 07:42 +0100, Peter Eisentraut wrote:\n> It turns out that 'de_AT' is actually a distinct collation from 'de'\n> in \n> CLDR, so that was not the best example. What behavior do you see for\n> 'de_CH'?\n\nThe canonicalized form is de_CH and the bcp47 tag is de-CH.\n\nuloc_canonicalize() and uloc_getLanguageTag() are declared in uloc.h,\nand they aren't (as far as I can tell) tied to which collations are\nactually defined.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n",
"msg_date": "Fri, 10 Feb 2023 08:14:20 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Fri, 2023-02-10 at 09:43 -0500, Robert Haas wrote:\n> On Thu, Feb 9, 2023 at 5:09 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > I do like the ICU format locale IDs from a readability standpoint.\n> > \"en_US@colstrength=primary\" is more meaningful to me than \"en-US-u-\n> > ks-\n> > level1\" (the equivalent language tag).\n> \n> Sadly, neither of those means a whole lot to me? :-(\n> \n> How did you find out that those are equivalent?\n\nIn our tests you can see colstrength=primary is used to mean \"case\ninsensitive\". That's where I picked up the \"colstrength\" keyword, which\nis also present in the ICU sources, but now that you ask I'm embarrassed\nthat I don't see the keyword itself documented very well. \n\nThis document\nhttps://unicode-org.github.io/icu/userguide/locale/#keywords\nlists keywords, but colstrength is not there. It's easy enough to find\nin the ICU source; I'm probably just missing the document.\n\nHere's the API reference, which tells you that you can set the strength\nof a collator (using the API, not the keyword):\nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/ucol_8h.html#acc801048729e684bcabed328be85f77a\n\nThe more precise definitions of the strengths are here:\nhttps://unicode-org.github.io/icu/userguide/collation/concepts.html#comparison-levels\n\nRegarding the equivalence of the two forms, uloc_toLanguageTag() and\nuloc_forLanguageTag() are inverses. As far as I can tell (a lower degree\nof assurance than you are looking for), if one succeeds, then the other\nwill also succeed and produce the original result.\n\nThere are another couple of documents here (TR35):\nhttp://www.unicode.org/reports/tr35/\nhttps://www.unicode.org/reports/tr35/tr35-collation.html#Setting_Options\nthat seem to cover the \"ks-level1\" and how it maps to the collation\nstrength.\n\nMy examination of these standards is very superficial -- I'm basically\njust checking that they seem to be there.
If I search for a string like\n\"en-US-u-ks-level1\", I only find Postgres-related results, so you could\nalso question whether these standards are actually used.\n\nUsing BCP 47 tags for icu locale strings, and moving to ICU (as\ndiscussed in the other thread) is basically a leap of faith in ICU. The\ndocs aren't perfect, the source is hard to read, and we've found bugs.\nBut it seems like a better place for us than libc for the reasons I\nmentioned in the other thread.\n\n> > And the format is specified[1],\n> > even though it's not an independent standard. But I think the\n> > benefits\n> > of better validation, an independent standard, and the fact that\n> > we're\n> > already favoring BCP47 outweigh my subjective opinion.\n> \n> See, I'm confused, because that link says \"If a keyword list is\n> present it must be preceded by an at-sign\" which makes it sound like\n> it is talking about stuff like en_US@colstrength=primary rather than\n> stuff like en-US-u-ks-level1. The examples are all that way too, like\n> it gives examples like en_IE@currency=IEP and\n> fr@collation=phonebook;calendar=islamic-civil.\n\nMy paragraph was unclear, let me restate the point:\n\nTo represent ICU locale strings in the catalog consistently, we have\ntwo choices, which as far as I can tell are equivalent:\n\n1. ICU format Locale IDs. These are more readable, and still specified\n(albeit non-standard).\n\n2. BCP47 language tags. These are standardized, there's better\nvalidation with \"strict\" mode, and we are already using them.\n\nHonestly I don't think it's hugely important which one we pick. But\nbeing consistent is important, so we need to pick one, and BCP 47 seems\nlike the better option to me.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Fri, 10 Feb 2023 09:53:58 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 12:54 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> In our tests you can see colstrength=primary is used to mean \"case\n> insensitive\". That's where I picked up the \"colstrength\" keyword, which\n> is also present in the ICU sources, but now that you ask I'm embarassed\n> that I don't see the keyword itself documented very well.\n>\n> This document\n> https://unicode-org.github.io/icu/userguide/locale/#keywords\n> lists keywords, but colstrength is not there. It's easy enough to find\n> in the ICU source; I'm probably just missing the document.\n\nThe fact that you're figuring out how it all works from reading the\nsource code does not give me a warm feeling.\n\n> But it seems like a better place for us than libc for the reasons I\n> mentioned in the other thread.\n\nIt may be. But sometimes I feel that's not setting our sights very high. :-(\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 10 Feb 2023 22:50:56 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Fri, 2023-02-10 at 22:50 -0500, Robert Haas wrote:\n> The fact that you're figuring out how it all works from reading the\n> source code does not give me a warm feeling.\n\nRight. On the other hand, the behavior is quite well documented, it was\njust the keyword that was undocumented (or I didn't find it).\n\n> > But it seems like a better place for us than libc for the reasons I\n> > mentioned in the other thread.\n> \n> It may be. But sometimes I feel that's not setting our sights very\n> high. :-(\n\nHow much higher could we set our sights? What would the ideal collation\nprovider look like?\n\nThose are good questions, but please let's take those questions to the\nthread about ICU as a default.\n\nThe topic of this thread is:\n\nGiven that we are already offering ICU support, should we canonicalize\nthe locale string stored in the catalog? If so, should we use the ICU\nformat locale IDs, or BCP 47 language tags?\n\nDo you have an opinion on that topic? If not, do you need additional\ninformation?\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Mon, 13 Feb 2023 10:52:11 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Thu, 2023-02-09 at 14:09 -0800, Jeff Davis wrote:\n> It feels like BCP 47 is the right catalog representation. We are\n> already using it for the import of initial collations, and it's a\n> standard, and there seems to be good support in ICU.\n\nPatch attached.\n\nWe should have been canonicalizing all along -- either with\nuloc_toLanguageTag(), as this patch does, or at least with\nuloc_canonicalize() -- before passing to ucol_open().\n\nucol_open() is documented[1] to work on either language tags or ICU\nformat locale IDs. Anything else is invalid and ends up going through\nsome fallback logic, probably after being mis-parsed. For instance, in\nICU 72, \"fr_CA.UTF-8\" is not a valid ICU format locale ID or a valid\nlanguage tag, and is resolved by ucol_open() to the actual locale\n\"root\"; but if you canonicalize it first (to the ICU format locale ID\n\"fr_CA\" or the language tag \"fr-CA\"), it correctly resolves to the\nactual locale \"fr_CA\".\n\nThe correct thing to do is canonicalize first and then pass to\nucol_open().\n\nBut because we didn't canonicalize in the past, there could be raw\nlocale strings stored in the catalog that resolve to the wrong actual\ncollator, and there could be indexes depending on the wrong collator,\nso we have to be careful during pg_upgrade.\n\nSay someone created two ICU collations, one with locale \"en_US.UTF-8\"\nand one with locale \"fr_CA.UTF-8\" in PG15. When they upgrade to PG16,\nthis patch will check the language tag \"en-US\" and see that it resolves\nto the same locale as \"en_US.UTF-8\", and change to the language tag\nduring upgrade (so \"en-US\" will be in the new catalog). 
But when it\nchecks the language tag \"fr-CA\", it will notice that it resolves to a\ndifferent locale than \"fr_CA.UTF-8\", and keep the latter string even\nthough it's wrong, because some indexes might be dependent on that\nwrong collator.\n\n\n[1]\nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/ucol_8h.html#a3b0bf34733dc208040e4157b0fe5fcd6\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Thu, 16 Feb 2023 23:45:39 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On 10.02.23 18:53, Jeff Davis wrote:\n> To represent ICU locale strings in the catalog consistently, we have\n> two choices, which as far as I can tell are equivalent:\n> \n> 1. ICU format Locale IDs. These are more readable, and still specified\n> (albeit non-standard).\n> \n> 2. BCP47 language tags. These are standardized, there's better\n> validation with \"strict\" mode, and we are already using them.\n> \n> Honestly I don't think it's hugely important which one we pick. But\n> being consistent is important, so we need to pick one, and BCP 47 seems\n> like the better option to me.\n\nI found some discussion about this from when ICU support was first \nadded. See this message as a starting point: \nhttps://www.postgresql.org/message-id/flat/5291804b-169e-3ba9-fdaf-fae8e7d2d959%402ndquadrant.com#96acb7eb9299c2ca64dbabcf58e11a90\n\nThere isn't much detail there, but the discussion and the current code \nseem pretty convinced that\n\na) BCP47 tags are preferred, and\nb) They don't work with ICU versions before 54.\n\nI can't locate the source for claim b) anymore. However, it seems \npretty clear that there is some cutoff, even if it isn't exactly 54.\n\nI would support transitioning this forward somehow, but we would need to \nknow exactly what the impact would be.\n\n\n",
"msg_date": "Mon, 20 Feb 2023 15:46:23 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "New patch attached. The new patch also includes a GUC that (when\nenabled) validates that the collator is actually found.\n\nOn Mon, 2023-02-20 at 15:46 +0100, Peter Eisentraut wrote:\n> a) BCP47 tags are preferred, and\n\nAgreed.\n\n> b) They don't work with ICU versions before 54.\n\nI tried in versions 50 through 53, and the language tags are supported,\nbut I think I know why we don't use them:\n\nPrior to version 54, ICU would not set the collator attributes based on\nthe locale name. That is the same for either language tags or ICU\nformat locale IDs. However, for ICU format locale IDs, we added special\ncode to parse the locale string and set the attributes ourselves. We\ndidn't bother to add the same parsing logic for language tags, so if a\nlanguage tag is found in the catalog, the parts of it that specify\ncollation strength (for example) would be ignored. I don't know if\nthat's an actual problem when importing the system collations, because\nI don't think we use any collator attributes, but it makes sense that\nwe'd not favor language tags in ICU prior to v54.\n\n> I would support transitioning this forward somehow, but we would need\n> to \n> know exactly what the impact would be.\n\nI've done quite a bit of investigation, which I've described upthread.\n\nWe need to transition somehow, because the prior behavior is incorrect\nfor locales like \"fr_CA.UTF-8\". Our tests suggest that's an acceptable\nthing to do, but if we pass that straight to ucol_open(), then it gets\nmisinterpreted as plain \"fr\" because it doesn't understand the \".\" as a\nvalid separator. We must turn it into a language tag (or at least\ncanonicalize it) before passing the string to ucol_open().\n\nThis misbehavior only affects a small number of locales, which resolve\nto a different actual collator than they should. The most problematic\ncase is during pg_upgrade, where a slight behavior change would result\nin corrupt indexes. 
So during binary upgrade, my patch falls back to\nthe original raw string (not the language tag) when it resolves to a\ndifferent actual collator. If we want to be more paranoid, we could\nalso provide a compatibility GUC to preserve the old misbehavior for\nnewly-created collations, too, but I don't think that's necessary.\n\nThere is also some interaction with pg_upgrade's ability to check\nwhether the old and new cluster are compatible. If the catalog\nrepresentation of the locale changes, then it could falsely believe the\nicu locales aren't compatible, because it's doing a simple string\ncomparison. But as we are discussing in the other thread[1], the whole\nidea of checking for compatibility of the initialized cluster is\nstrange: pg_upgrade should be in charge of making a compatible cluster\nto upgrade into (assuming the binaries are at least compatible). I\ndon't see this as a major problem; we'll sort out the other thread\nfirst to allow ICU as the default, and then adapt this patch if\nnecessary.\n\n[1]\nhttps://www.postgresql.org/message-id/20230214175957.idkb7shsqzp5nbll@awork3.anarazel.de\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Mon, 20 Feb 2023 15:23:40 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Mon, 2023-02-20 at 15:23 -0800, Jeff Davis wrote:\n> \n> New patch attached. The new patch also includes a GUC that (when\n> enabled) validates that the collator is actually found.\n\nNew patch attached.\n\nNow it always preserves the exact locale string during pg_upgrade, and\ndoes not attempt to canonicalize it. Before it was trying to be clever\nby determining if the language tag was finding the same collator as the\noriginal string -- I didn't find a problem with that, but it just\nseemed a bit too clever. So, only newly-created locales and databases\nhave the ICU locale string canonicalized to a language tag.\n\nAlso, I added a SQL function pg_icu_language_tag() that can convert\nlocale strings to language tags, and check whether they exist or not.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Mon, 27 Feb 2023 21:57:26 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On 28.02.23 06:57, Jeff Davis wrote:\n> On Mon, 2023-02-20 at 15:23 -0800, Jeff Davis wrote:\n>>\n>> New patch attached. The new patch also includes a GUC that (when\n>> enabled) validates that the collator is actually found.\n> \n> New patch attached.\n> \n> Now it always preserves the exact locale string during pg_upgrade, and\n> does not attempt to canonicalize it. Before it was trying to be clever\n> by determining if the language tag was finding the same collator as the\n> original string -- I didn't find a problem with that, but it just\n> seemed a bit too clever. So, only newly-created locales and databases\n> have the ICU locale string canonicalized to a language tag.\n> \n> Also, I added a SQL function pg_icu_language_tag() that can convert\n> locale strings to language tags, and check whether they exist or not.\n\nThis patch appears to do about three things at once, and it's not clear \nexactly where the boundaries are between them and which ones we might \nactually want. And I think the terminology also gets mixed up a bit, \nwhich makes following this harder.\n\n1. Canonicalizing the locale string. This is presumably what \nuloc_canonicalize() does, which the patch doesn't actually use. What \nare examples of what this does? Does the patch actually do this?\n\n2. Converting the locale string to BCP 47 format. This converts \n'de@collation=phonebook' to 'de-u-co-phonebk'. This is what \nuloc_getLanguageTag() does.\n\n3. Validating the locale string, to reject faulty input.\n\nWhat are the relationships between these?\n\nI don't understand how the validation actually happens in your patch. \nDoes uloc_getLanguageTag() do the validation also?\n\nCan you do canonicalization without converting to language tag?\n\nCan you do validation of un-canonicalized locale names?\n\nWhat is the guidance for the use of the icu_locale_validation GUC?\n\nThe description throws in yet another term: \"validates that ICU locale \nstrings are well-formed\". 
What is \"well-formed\"? How does that relate \nto the other concepts?\n\nPersonally, I'm not on board with this behavior:\n\n=> CREATE COLLATION test (provider = icu, locale = \n'de@collation=phonebook');\nNOTICE: 00000: using language tag \"de-u-co-phonebk\" for locale \n\"de@collation=phonebook\"\n\nI mean, maybe that is a thing we want to do somehow sometime, to migrate \npeople to the \"new\" spellings, but the old ones aren't wrong. So this \nshould be a separate consideration, with an option, and it would require \nvarious updates in the documentation. It also doesn't appear to address \nhow to handle ICU before version 54.\n\nBut, see earlier questions, are these three things all connected somehow?\n\n\n\n",
"msg_date": "Thu, 9 Mar 2023 09:46:46 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Thu, 2023-03-09 at 09:46 +0100, Peter Eisentraut wrote:\n> This patch appears to do about three things at once, and it's not\n> clear \n> exactly where the boundaries are between them and which ones we might\n> actually want. And I think the terminology also gets mixed up a bit,\n> which makes following this harder.\n> \n> 1. Canonicalizing the locale string. This is presumably what \n> uloc_canonicalize() does, which the patch doesn't actually use. What\n> are examples of what this does? Does the patch actually do this?\n\nBoth uloc_canonicalize() and uloc_getLanguageTag() do Level 2\nCanonicalization, which is described here:\n\nhttps://unicode-org.github.io/icu/userguide/locale/#canonicalization\n\n> 2. Converting the locale string to BCP 47 format. This converts \n> 'de@collation=phonebook' to 'de-u-co-phonebk'. This is what \n> uloc_getLanguageTag() does.\n\nYes, though uloc_getLanguageTag() also canonicalizes. I consider\nconverting to the language tag a part of \"canonicalization\", because\nit's the canonical form we agreed on in this thread.\n\n> 3. Validating the locale string, to reject faulty input.\n\nCanonicalization doesn't make sure the locale actually exists in ICU,\nso it's easy to make a typo like \"jp_JP\" instead of \"ja_JP\". After\ncanonicalizing to a language tag, the former is \"jp-JP\" (resolving to\nthe collator with valid locale \"root\") and the latter is \"ja-JP\"\n(resolving to the collator with valid locale \"ja\"). The former is\nclearly a mistake, and I call catching that mistake \"validation\".\n\nIf the user specifies something other than the root locale (i.e. not\n\"root\", \"und\", or \"\"), and the locale resolves to a collator with a\nvalid locale of \"root\", then this patch considers that to be a mistake\nand issues a WARNING (upgraded to ERROR if the GUC\nicu_locale_validation is true).\n\n> What are the relationships between these?\n\n1 & 2 are closely related. 
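(For concreteness, the two candidate canonical forms encode the same information. A toy Python mapping for the keywords that come up in this thread, nowhere near a complete converter, with a made-up lookup table:)

```python
# Old-style ICU locale IDs spell keywords out after '@'; BCP 47 tags
# abbreviate them under the '-u-' extension. A tiny, incomplete table:
BCP47_KEYS = {'collation': 'co', 'colstrength': 'ks', 'colnumeric': 'kn'}
BCP47_VALUES = {'phonebook': 'phonebk', 'traditional': 'trad'}

def to_language_tag(icu_locale_id):
    lang, _, keywords = icu_locale_id.partition('@')
    parts = [lang.replace('_', '-')]
    if keywords:
        parts.append('u')
        for kv in keywords.split(';'):
            key, _, value = kv.partition('=')
            parts.append(BCP47_KEYS[key.lower()])
            parts.append(BCP47_VALUES.get(value.lower(), value.lower()))
    return '-'.join(parts)

assert to_language_tag('de@collation=phonebook') == 'de-u-co-phonebk'
assert to_language_tag('fr_CA') == 'fr-CA'
```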
If we canonicalize, we need to pick one\ncanonical form: either BCP 47 or ICU format locale IDs.\n\n3 is related, but can be seen as an independent change.\n\n> I don't understand how the validation actually happens in your patch.\n> Does uloc_getLanguageTag() do the validation also?\n\nUsing the above definition of \"validation\" it happens inside\nicu_collator_exists().\n\n> Can you do canonicalization without converting to language tag?\n\nIf we used uloc_canonicalize(), it would give us ICU format locale IDs,\nand that would be a valid thing to do; and we could switch the\ncanonical form from ICU format locale IDs to BCP 47 in a separate\npatch. I don't have a strong opinion, but if we're going to\ncanonicalize, I think it makes sense to go straight to language tags.\n\n> Can you do validation of un-canonicalized locale names?\n\nYes, though I feel like an un-canonicalized name is less stable in\nmeaning, and so validation on that name may also be less stable.\n\nFor instance, if we don't canonicalize \"fr_CA.UTF-8\", it resolves to\nplain \"fr\"; but if we do canonicalize it first, it resolves to \"fr-CA\".\nWill the uncanonicalized name always resolve to \"fr\"? I'm not sure,\nbecause the documentation says that ucol_open() expects either an ICU\nformat locale ID or, preferably, a language tag.\n\nSo they are technically independently useful changes, but I would\nrecommend that canonicalization goes in first.\n\n> What is the guidance for the use of the icu_locale_validation GUC?\n\nIf an error when creating a new collation or database due to a bad\nlocale name would be highly disruptive, leave it false. If such an\nerror would be helpful to make sure you get the locale you expect, then\nturn it on. In practice, existing important production systems would\nleave it off; new systems could turn it on to help avoid\nmisconfigurations/mistakes.\n\n> The description throws in yet another term: \"validates that ICU\n> locale \n> strings are well-formed\". 
What is \"well-formed\"? How does that\n> relate \n> to the other concepts?\n\nGood point, I don't think I need to redefine \"validation\". Maybe I\nshould just describe it as elevating canonicalization or validation\nproblems from WARNING to ERROR.\n\n> Personally, I'm not on board with this behavior:\n> \n> => CREATE COLLATION test (provider = icu, locale = \n> 'de@collation=phonebook');\n> NOTICE: 00000: using language tag \"de-u-co-phonebk\" for locale \n> \"de@collation=phonebook\"\n> \n> I mean, maybe that is a thing we want to do somehow sometime, to\n> migrate \n> people to the \"new\" spellings, but the old ones aren't wrong.\n\nI see what you mean; I'm not sure the best thing to do here. We are\nadjusting the string passed by the user, and it feels like some users\nmight want to know that. It's a NOTICE, not a WARNING, so it's not\nmeant to imply that it's wrong.\n\nBut at the same time I can see it being annoying or confusing. If it's\nconfusing, perhaps a wording change and documentation would improve it?\nIf it's annoying, we might need to have an option and/or a different\nlog level?\n\n> It also doesn't appear to address \n> how to handle ICU before version 54.\n\nDo you have a specific concern here?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 09 Mar 2023 12:17:53 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On 09.03.23 21:17, Jeff Davis wrote:\n>> Personally, I'm not on board with this behavior:\n>>\n>> => CREATE COLLATION test (provider = icu, locale =\n>> 'de@collation=phonebook');\n>> NOTICE: 00000: using language tag \"de-u-co-phonebk\" for locale\n>> \"de@collation=phonebook\"\n>>\n>> I mean, maybe that is a thing we want to do somehow sometime, to\n>> migrate\n>> people to the \"new\" spellings, but the old ones aren't wrong.\n> \n> I see what you mean; I'm not sure the best thing to do here. We are\n> adjusting the string passed by the user, and it feels like some users\n> might want to know that. It's a NOTICE, not a WARNING, so it's not\n> meant to imply that it's wrong.\n\nFor clarification, I wasn't complaining about the notice, but about the \nautomatic conversion from old-style ICU locale ID to language tag.\n\n>> It also doesn't appear to address\n>> how to handle ICU before version 54.\n> \n> Do you have a specific concern here?\n\nWhat we had discussed a while ago in one of these threads is that ICU \nbefore version 54 do not support keyword lists, and we have custom code \nto do that parsing ourselves, but we don't have code to do the same for \nlanguage tags. Therefore, if I understand this right, if we \nautomatically convert ICU locale IDs to language tags, as shown above, \nthen we break support for such locales in those older ICU versions.\n\n\n",
"msg_date": "Mon, 13 Mar 2023 08:25:46 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Mon, 2023-03-13 at 08:25 +0100, Peter Eisentraut wrote:\n> For clarification, I wasn't complaining about the notice, but about\n> the \n> automatic conversion from old-style ICU locale ID to language tag.\n\nCanonicalization means that we pick one format, and automatically\nconvert to it, right?\n\n> What we had discussed a while ago in one of these threads is that ICU\n> before version 54 do not support keyword lists, and we have custom\n> code \n> to do that parsing ourselves, but we don't have code to do the same\n> for \n> language tags. Therefore, if I understand this right, if we \n> automatically convert ICU locale IDs to language tags, as shown\n> above, \n> then we break support for such locales in those older ICU versions.\n\nRight. In versions 53 and earlier, and during pg_upgrade, we would just\npreserve the locale string as entered.\n\nAlternatively, we could canonicalize to the ICU format locale IDs. Or\nadd something to parse out the attributes from a language tag.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 13 Mar 2023 08:31:46 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On 13.03.23 16:31, Jeff Davis wrote:\n>> What we had discussed a while ago in one of these threads is that ICU\n>> before version 54 do not support keyword lists, and we have custom\n>> code\n>> to do that parsing ourselves, but we don't have code to do the same\n>> for\n>> language tags. Therefore, if I understand this right, if we\n>> automatically convert ICU locale IDs to language tags, as shown\n>> above,\n>> then we break support for such locales in those older ICU versions.\n> \n> Right. In versions 53 and earlier, and during pg_upgrade, we would just\n> preserve the locale string as entered.\n\nAnother issue that came to mind: Right now, you can, say, develop SQL \nschemas on a newer ICU version, say, your laptop, and then deploy them \non a server running an older ICU version. If we have a cutoff beyond \nwhich we convert ICU locale IDs to language tags, then this won't work \nanymore for certain combinations. And RHEL/CentOS 7 is still pretty \npopular.\n\n\n\n",
"msg_date": "Tue, 14 Mar 2023 08:08:55 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Tue, 2023-03-14 at 08:08 +0100, Peter Eisentraut wrote:\n> Another issue that came to mind: Right now, you can, say, develop\n> SQL \n> schemas on a newer ICU version, say, your laptop, and then deploy\n> them \n> on a server running an older ICU version. If we have a cutoff beyond\n> which we convert ICU locale IDs to language tags, then this won't\n> work \n> anymore for certain combinations. And RHEL/CentOS 7 is still pretty \n> popular.\n\nIf we just uloc_canonicalize() in icu_set_collation_attributes() then\nversions 50-53 can support language tags. Patch attached.\n\nOne loose end is that we really should support language tags like \"und\"\nin those older versions (54 and earlier). Your commit d72900bded\navoided the problem, but perhaps we should fix it by looking for \"und\"\nand replacing it with \"root\" while opening, or something.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Tue, 14 Mar 2023 10:10:42 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Tue, 2023-03-14 at 10:10 -0700, Jeff Davis wrote:\n> One loose end is that we really should support language tags like\n> \"und\"\n> in those older versions (54 and earlier). Your commit d72900bded\n> avoided the problem, but perhaps we should fix it by looking for\n> \"und\"\n> and replacing it with \"root\" while opening, or something.\n\nAttached are a few patches to implement this idea.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Tue, 14 Mar 2023 23:47:42 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Tue, 2023-03-14 at 23:47 -0700, Jeff Davis wrote:\n> On Tue, 2023-03-14 at 10:10 -0700, Jeff Davis wrote:\n> > One loose end is that we really should support language tags like\n> > \"und\"\n> > in those older versions (54 and earlier). Your commit d72900bded\n> > avoided the problem, but perhaps we should fix it by looking for\n> > \"und\"\n> > and replacing it with \"root\" while opening, or something.\n> \n> Attached are a few patches to implement this idea.\n\nHere is an updated patch series that includes these earlier fixes for\nolder ICU versions, with the canonicalization patch last (0005).\n\nI left out the validation patch for now, and I'm evaluating a different\napproach that will attempt to match to the locales retrieved with\nuloc_countAvailable()/uloc_getAvailable().\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Wed, 15 Mar 2023 15:18:05 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Wed, 2023-03-15 at 15:18 -0700, Jeff Davis wrote:\n> I left out the validation patch for now, and I'm evaluating a\n> different\n> approach that will attempt to match to the locales retrieved with\n> uloc_countAvailable()/uloc_getAvailable().\n\nI like this approach, attached new patch series with that included as\n0006.\n\nThe first 3 patches are essentially bugfixes -- should they be\nbackported?\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Fri, 17 Mar 2023 10:55:56 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On 17.03.23 18:55, Jeff Davis wrote:\n> On Wed, 2023-03-15 at 15:18 -0700, Jeff Davis wrote:\n>> I left out the validation patch for now, and I'm evaluating a\n>> different\n>> approach that will attempt to match to the locales retrieved with\n>> uloc_countAvailable()/uloc_getAvailable().\n> \n> I like this approach, attached new patch series with that included as\n> 0006.\n\nI have looked at the first three patches. I think we want what those \npatches do.\n\n\n[PATCH v6 1/6] Support language tags in older ICU versions (53 and\n earlier).\n\nIn pg_import_system_collations(), this is now redundant and can be \nsimplified:\n\n-\t\tif (!pg_is_ascii(langtag) || !pg_is_ascii(iculocstr))\n+\t\tif (!pg_is_ascii(langtag) || !pg_is_ascii(langtag))\n\nicu_set_collation_attributes() needs more commenting about what is going \non. My guess is that uloc_canonicalize() converts from language tag to \nICU locale ID, and then the existing logic to parse that apart would \napply. Is that how it works?\n\n\n[PATCH v6 2/6] Wrap ICU ucol_open().\n\nIt makes sense to try to unify some of this. But I find the naming \nconfusing. If I see pg_ucol_open(), then I would expect that all calls \nto ucol_open() would be replaced by this. But here it's only a few, \nwithout explanation. (pg_ucol_open() has no explanation at all AFAICT.)\n\nI have in my notes that check_icu_locale() and make_icu_collator() \nshould be combined into a single function. I think that would be a \nbetter way to slice it.\n\nBtw., I had intentionally not written code like this\n\n+#if U_ICU_VERSION_MAJOR_NUM < 54\n+\ticu_set_collation_attributes(collator, loc_str);\n+#endif\n\nThe disadvantage of doing it that way is that you then need to dig out \nan old version of ICU in order to check whether the code compiles at \nall. 
With the current code, you can be sure that that code compiles if \nyou make changes elsewhere.\n\n\n[PATCH v6 3/6] Handle the \"und\" locale in ICU versions 54 and older.\n\nThis makes sense, but the same comment about not #if'ing out code for \nold ICU versions applies here.\n\nThe\n\n+#ifdef USE_ICU\n+\n\nbefore pg_ucol_open() probably belongs in patch 2.\n\n\n",
"msg_date": "Tue, 21 Mar 2023 10:35:56 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Tue, 2023-03-21 at 10:35 +0100, Peter Eisentraut wrote:\n> [PATCH v6 1/6] Support language tags in older ICU versions (53 and\n> earlier).\n> \n> In pg_import_system_collations(), this is now redundant and can be \n> simplified:\n> \n> - if (!pg_is_ascii(langtag) || !pg_is_ascii(iculocstr))\n> + if (!pg_is_ascii(langtag) || !pg_is_ascii(langtag))\n> \n> icu_set_collation_attributes() needs more commenting about what is\n> going \n> on. My guess is that uloc_canonicalize() converts from language tag\n> to \n> ICU locale ID, and then the existing logic to parse that apart would \n> apply. Is that how it works?\n\nFixed the redundancy, added some comments, and committed 0001.\n\n> [PATCH v6 2/6] Wrap ICU ucol_open().\n> \n> It makes sense to try to unify some of this. But I find the naming \n> confusing. If I see pg_ucol_open(), then I would expect that all\n> calls \n> to ucol_open() would be replaced by this. But here it's only a few, \n> without explanation. (pg_ucol_open() has no explanation at all\n> AFAICT.)\n\nThe remaining callsite which doesn't use the wrapper is in initdb.c,\nwhich can't call into pg_locale.c, and has different intentions. initdb\nuses ucol_open to get the default locale if icu_locale is not\nspecified; and it also uses ucol_open to verify that the locale can be\nopened (whether specified or the default). (Aside: I created a tiny\n0004 patch which makes this difference more clear and adds a nice\ncomment.)\n\nThere's no reason to use a wrapper when getting the default locale,\nbecause it's just passing NULL anyway.\n\nWhen verifying that the locale can be opened, ucol_open() doesn't catch\nmany problems anyway, so I'm not sure it's worth a lot of effort to\ncopy these extra checks that the wrapper does into initdb.c. For\ninstance, what's the value in replacing \"und\" with \"root\" if opening\neither will succeed? 
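(The replacement in question is mechanical; roughly the following, sketched in Python rather than the real C in pg_ucol_open(), and keeping any attribute suffix intact:)

```python
def und_to_root(locale_str):
    # Older ICU versions (54 and earlier) do not recognize the
    # language tag 'und' for the root locale, so rewrite it while
    # preserving any attributes that follow the '@' separator.
    lang, sep, attrs = locale_str.partition('@')
    if lang.lower() == 'und':
        lang = 'root'
    return lang + sep + attrs

assert und_to_root('und') == 'root'
assert und_to_root('und@colNumeric=lower') == 'root@colNumeric=lower'
assert und_to_root('ja-JP') == 'ja-JP'
```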
Parsing the attributes can potentially catch\nproblems, but the later patch 0006 will check the attributes when\nconverting to a language tag at initdb time.\n\nSo I'm inclined to just leave initdb alone in patches 0002 and 0003.\n\n> I have in my notes that check_icu_locale() and make_icu_collator() \n> should be combined into a single function. I think that would be a \n> better way to slice it.\n\nThat would leave out get_collation_actual_version(), which should\nhandle the same fixups for attributes and the \"und\" locale.\n\n> Btw., I had intentionally not written code like this\n> \n> +#if U_ICU_VERSION_MAJOR_NUM < 54\n> + icu_set_collation_attributes(collator, loc_str);\n> +#endif\n> \n> The disadvantage of doing it that way is that you then need to dig\n> out \n> an old version of ICU in order to check whether the code compiles at \n> all. With the current code, you can be sure that that code compiles\n> if \n> you make changes elsewhere.\n\nI was wondering about that -- thank you, I changed it back to use \"if\"\nrather than \"#ifdef\".\n\n\nNew series attached (starting at 0002 to better correspond to the\nprevious series).\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Wed, 22 Mar 2023 11:05:31 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On 22.03.23 19:05, Jeff Davis wrote:\n> On Tue, 2023-03-21 at 10:35 +0100, Peter Eisentraut wrote:\n>> [PATCH v6 1/6] Support language tags in older ICU versions (53 and\n>> earlier).\n>>\n>> In pg_import_system_collations(), this is now redundant and can be\n>> simplified:\n>>\n>> - if (!pg_is_ascii(langtag) || !pg_is_ascii(iculocstr))\n>> + if (!pg_is_ascii(langtag) || !pg_is_ascii(langtag))\n>>\n>> icu_set_collation_attributes() needs more commenting about what is\n>> going\n>> on. My guess is that uloc_canonicalize() converts from language tag\n>> to\n>> ICU locale ID, and then the existing logic to parse that apart would\n>> apply. Is that how it works?\n> \n> Fixed the redundancy, added some comments, and committed 0001.\n\nSo, does uloc_canonicalize() always convert to ICU locale IDs? What if \nyou pass a language tag, does it convert it to ICU locale ID as well?\n\n>> [PATCH v6 2/6] Wrap ICU ucol_open().\n\n> So I'm inclined to just leave initdb alone in patches 0002 and 0003.\n\n0002 and 0003 look ok to me now.\n\nIn 0002, the error \"opening default collator is not supported\", should \nthat be an assert or an elog? Is it reachable by the user?\n\nYou might want to check the declarations at the top of pg_ucol_open(). \n0003 reformats them after they were just added in 0002. Maybe check \nthat they are pgindent'ed in 0002 properly.\n\nI don't understand patch 0004. It seems to do two things, handle \nC/POSIX locale specifications and add an SQL-callable function. Are \nthose connected?\n\n\n\n",
"msg_date": "Thu, 23 Mar 2023 07:27:43 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Thu, 2023-03-23 at 07:27 +0100, Peter Eisentraut wrote:\n> So, does uloc_canonicalize() always convert to ICU locale IDs? What\n> if \n> you pass a language tag, does it convert it to ICU locale ID as well?\n\nYes.\n\nThe documentation is not clear on that point, but my testing shows that\nit does. And this is only for old versions of the code, so we don't\nneed to worry about later versions of ICU changing that.\n\nI thought about using uloc_forLanguageTag(), but the documentation for\nthat is not clear what formats it accepts as an input, so it doesn't\nseem like a win. If we wanted to be paranoid we could use\nuloc_toLanguageTag() followed by uloc_forLanguageTag(), but that seemed\nexcessive.\n\n> > \n> 0002 and 0003 look ok to me now.\n\nThank you, committed 0002 and 0003.\n\n> In 0002, the error \"opening default collator is not supported\",\n> should \n> that be an assert or an elog? Is it reachable by the user?\n\nIt's not reachable by the user, but could catch a bug if we\naccidentally read a NULL field from the catalog or something like that.\nIt seemed a worthwhile check to leave in production builds.\n\n> You might want to check the declarations at the top of\n> pg_ucol_open(). \n> 0003 reformats them after they were just added in 0002. Maybe check \n> that they are pgindent'ed in 0002 properly.\n\nThey seem to be pgindented fine in 0002, it was unnecessarily\nreindented in 0003 and I fixed that.\n\nI use emacs \"align-current\" and generally that does the right thing,\nbut I'll rely more on pgindent in the future.\n\n> I don't understand patch 0004. It seems to do two things, handle \n> C/POSIX locale specifications and add an SQL-callable function. Are \n> those connected?\n\nIt's hard to test (or even exercise) the former without the latter.\n\nI could get rid of the SQL-callable function and move the rest of the\nchanges into 0006. 
I'll see if that arrangement works better, and that\nway we can add the SQL-callable function later (or perhaps not at all\nif it's not desired).\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 23 Mar 2023 10:16:45 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Thu, 2023-03-23 at 10:16 -0700, Jeff Davis wrote:\n> I could get rid of the SQL-callable function and move the rest of the\n> changes into 0006. I'll see if that arrangement works better, and\n> that\n> way we can add the SQL-callable function later (or perhaps not at all\n> if it's not desired).\n\nAttached a new series that doesn't include the SQL-callable function.\nIt's probably better to just wait and see what functions seem actually\nuseful to users.\n\nI included a new small patch to fix a potential UCollator leak and make\nthe errors more consistent.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Thu, 23 Mar 2023 23:39:12 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On 23.03.23 18:16, Jeff Davis wrote:\n>> In 0002, the error \"opening default collator is not supported\",\n>> should\n>> that be an assert or an elog? Is it reachable by the user?\n> It's not reachable by the user, but could catch a bug if we\n> accidentally read a NULL field from the catalog or something like that.\n> It seemed a worthwhile check to leave in production builds.\n\nThen it ought to be an elog().\n\n\n",
"msg_date": "Fri, 24 Mar 2023 08:50:57 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On 24.03.23 07:39, Jeff Davis wrote:\n> On Thu, 2023-03-23 at 10:16 -0700, Jeff Davis wrote:\n>> I could get rid of the SQL-callable function and move the rest of the\n>> changes into 0006. I'll see if that arrangement works better, and\n>> that\n>> way we can add the SQL-callable function later (or perhaps not at all\n>> if it's not desired).\n> \n> Attached a new series that doesn't include the SQL-callable function.\n> It's probably better to just wait and see what functions seem actually\n> useful to users.\n> \n> I included a new small patch to fix a potential UCollator leak and make\n> the errors more consistent.\n\n[PATCH v8 1/4] Avoid potential UCollator leak for older ICU versions.\n\nCouldn't we do this in a simpler way by just freeing the collator before \nthe ereport() calls. Or wrap a PG_TRY/PG_FINALLY around the whole thing?\n\nIt would be nicer to not make the callers of \nicu_set_collation_attributes() responsible for catching and reporting \nthe errors.\n\n\n[PATCH v8 2/4] initdb: emit message when using default ICU locale.\n\nI'm not able to make initdb print this message. Under what \ncircumstances am I supposed to see this? Do you have some examples?\n\nThe function check_icu_locale() has now gotten a lot more functionality \nthan its name suggests. Maybe the part that assigns the default ICU \nlocale should be moved up one level to setlocales(), which has a better \nname and does something similar for the libc locale realm.\n\n\n[PATCH v8 3/4] Canonicalize ICU locale names to language tags.\n\nI'm still on the fence about whether we actually want to do this, but \nI'm warming up to it, now that the issues with pre-54 versions are fixed.\n\nBut if we do this, the documentation needs to be updated. There is a \nbunch of text there that says, like, you can do this format or that \nformat, whatever you like. 
At least the guidance should be changed there.\n\n\n[PATCH v8 4/4] Validate ICU locales.\n\nI would make icu_locale_validation true by default.\n\nOr maybe it should be a log-level type option, so you can set it to \nerror, warning, and also completely off?\n\n\n",
"msg_date": "Fri, 24 Mar 2023 10:10:35 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Fri, 2023-03-24 at 10:10 +0100, Peter Eisentraut wrote:\n> Couldn't we do this in a simpler way by just freeing the collator\n> before \n> the ereport() calls.\n\nI committed a tiny patch to do this.\n\nWe still need to address the error inconsistency though. The problem is\nthat, in older ICU versions, if the fixup for \"und@colNumeric=lower\" ->\n\"root@colNumeric=lower\" is applied, then icu_set_collation_attributes()\nwill throw an error reporting \"root@colNumeric=lower\", which is not\nwhat the user typed.\n\nWe could fix that directly by passing the original string to\nicu_set_collation_attributes() instead, or perhaps as an extra\nparameter used only for the ereport().\n\nI like the minor refactoring I did better, though. It puts the\nereports() close to each other, so any differences are more obvious.\nAnd it seems cleaner to me for pg_ucol_open to close the UCollator\nbecause it's the one that opened it. I don't have a strong opinion, but\nthat's my reasoning.\n\n> Or wrap a PG_TRY/PG_FINALLY around the whole thing?\n\nI generally avoid PG_TRY/FINALLY unless it avoids some major\nawkwardness or other problem.\n\n> It would be nicer to not make the callers of \n> icu_set_collation_attributes() responsible for catching and reporting\n> the errors.\n\nThere's only one caller now: pg_ucol_open().\n\n> [PATCH v8 2/4] initdb: emit message when using default ICU locale.\n> \n> I'm not able to make initdb print this message. Under what \n> circumstances am I supposed to see this? Do you have some examples?\n\nIt happens when you don't specify --icu-locale. 
It is slightly\nredundant with \"ICU locale\", but it lets you see that it came from the\nenvironment rather than the command line:\n\n-------------\n$ initdb -D data \nThe files belonging to this database system will be owned by user\n\"someone\".\nThis user must also own the server process.\n\nUsing default ICU locale \"en_US_POSIX\".\nThe database cluster will be initialized with this locale\nconfiguration:\n provider: icu\n ICU locale: en_US_POSIX\n...\n-------------\n\nThat seems fairly useful for testing, etc., where initdb.log doesn't\nshow the command line options.\n\n> The function check_icu_locale() has now gotten a lot more\n> functionality \n> than its name suggests. Maybe the part that assigns the default ICU \n> locale should be moved up one level to setlocales(), which has a\n> better \n> name and does something similar for the libc locale realm.\n\nAgreed, done.\n\nIn fact, initdb.c:check_icu_locale() is completely unnecessary in that\npatch, because as the comment points out, the backend will try to open\nit during post-bootstrap initialization. I think it was simply a\nmistake to try to do this validation in commit 27b62377b4.\n\nThe later validation patch does do some better validation at initdb\ntime to make sure the language can be found.\n\n> [PATCH v8 3/4] Canonicalize ICU locale names to language tags.\n> \n> I'm still on the fence about whether we actually want to do this, but\n> I'm warming up to it, now that the issues with pre-54 versions are\n> fixed.\n> \n> But if we do this, the documentation needs to be updated. There is a\n> bunch of text there that says, like, you can do this format or that \n> format, whatever you like. At least the guidance should be changed\n> there.\n> \n> \n> [PATCH v8 4/4] Validate ICU locales.\n> \n> I would make icu_locale_validation true by default.\n\nAgreed. 
I considered also not having a GUC, but it seems like some kind\nof escape hatch is wise, at least for now.\n\n> Or maybe it should be a log-level type option, so you can set it to \n> error, warning, and also completely off?\n\nAs the validation patch seems closer to acceptance, I changed it to be\nbefore the canonicalization patch. New series attached.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Fri, 24 Mar 2023 16:28:24 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "[PATCH v9 1/5] Fix error inconsistency in older ICU versions.\n\nok\n\n\n[PATCH v9 2/5] initdb: replace check_icu_locale() with\n default_icu_locale().\n\nI would keep the #ifdef USE_ICU inside the lower-level function \ndefault_icu_locale(), like it was before, so that the higher-level \nsetlocales() doesn't need to know about it.\n\nOtherwise ok.\n\n\n[PATCH v9 3/5] initdb: emit message when using default ICU locale.\n\nok\n\n\n[PATCH v9 4/5] Validate ICU locales.\n\nAlso here, let's keep the #ifdef USE_ICU in the lower-level function and \nmove more logic in there. Otherwise you have to repeat various things \nin DefineCollation() and createdb().\n\nI'm not sure we need the IsBinaryUpgrade checks. Users can set \nicu_validation_level on the target instance if they don't want that.\n\n\n\n",
"msg_date": "Tue, 28 Mar 2023 08:41:24 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Tue, 2023-03-28 at 08:41 +0200, Peter Eisentraut wrote:\n> [PATCH v9 1/5] Fix error inconsistency in older ICU versions.\n> \n> ok\n\nCommitted 0001.\n\n> [PATCH v9 2/5] initdb: replace check_icu_locale() with\n> default_icu_locale().\n> \n> I would keep the #ifdef USE_ICU inside the lower-level function \n> default_icu_locale(), like it was before, so that the higher-level \n> setlocales() doesn't need to know about it.\n> \n> Otherwise ok.\n\nDone and committed 0002.\n\n> \n> [PATCH v9 3/5] initdb: emit message when using default ICU locale.\n\nDone and committed 0003.\n\n> [PATCH v9 4/5] Validate ICU locales.\n> \n> Also here, let's keep the #ifdef USE_ICU in the lower-level function\n> and \n> move more logic in there. Otherwise you have to repeat various\n> things \n> in DefineCollation() and createdb().\n\nDone.\n\n> I'm not sure we need the IsBinaryUpgrade checks. Users can set \n> icu_validation_level on the target instance if they don't want that.\n\nI committed a version that still performs the checks during binary\nupgrade, but degrades the message to a WARNING if it's set higher than\nthat. I tried some upgrades with invalid locales, and getting an error\ndeep in the logs after the upgrade actually starts is not very user-\nfriendly. We could add something during the --check phase, which would\nbe more helpful, but I didn't do that for this patch.\n\n\nAttached is a new version of the final patch, which performs\ncanonicalization. I'm not 100% sure that it's wanted, but it still\nseems like a good idea to get the locales into a standard format in the\ncatalogs, and if a lot more people start using ICU in v16 (because it's\nthe default), then it would be a good time to do it. But perhaps there\nare risks?\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Wed, 29 Mar 2023 19:33:57 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On 30.03.23 04:33, Jeff Davis wrote:\n> Attached is a new version of the final patch, which performs\n> canonicalization. I'm not 100% sure that it's wanted, but it still\n> seems like a good idea to get the locales into a standard format in the\n> catalogs, and if a lot more people start using ICU in v16 (because it's\n> the default), then it would be a good time to do it. But perhaps there\n> are risks?\n\nI say, let's do it.\n\n\nI don't think we should show the notice when the canonicalization \ndoesn't change anything. This is not useful:\n\n+NOTICE: using language tag \"und-u-kf-upper\" for locale \"und-u-kf-upper\"\n\nAlso, the message should be phrased more from the perspective of the \nuser instead of using ICU jargon, like\n\nNOTICE: using canonicalized form \"%s\" for locale specification \"%s\"\n\n(Still too many big words?)\n\n\nI don't think the special handling of IsBinaryUpgrade is needed or \nwanted. I would hope that with this feature, all old-style locale IDs \nwould go away, but this way we would keep them forever. If we believe \nthat canonicalization is safe, then I don't see why we cannot apply it \nduring binary upgrade.\n\n\nNeeds documentation updates in doc/src/sgml/charset.sgml.\n\n\n\n",
"msg_date": "Thu, 30 Mar 2023 08:59:41 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Thu, 2023-03-30 at 08:59 +0200, Peter Eisentraut wrote:\n\n> I don't think the special handling of IsBinaryUpgrade is needed or \n> wanted. I would hope that with this feature, all old-style locale\n> IDs \n> would go away, but this way we would keep them forever. If we\n> believe \n> that canonicalization is safe, then I don't see why we cannot apply\n> it \n> during binary upgrade.\n\nThere are two issues:\n\n1. Failures can occur. For instance, if an invalid attribute is used,\nlike '@collStrength=primary', then we can't canonicalize it (or if we\ndo, it could end up being not what the user intended).\n\n2. Version 15 and earlier have a subtle bug: it passes the raw locale\nstraight to ucol_open(), and if the locale is \"fr_CA.UTF-8\" ucol_open()\nmis-parses it to have language \"fr\" with no region. If you canonicalize\nfirst, it properly parses the locale and produces \"fr-CA\", which\nresults in a different collator. The 15 behavior is wrong, and this\ncanonicalization patch will fix it, but it doesn't do so during\npg_upgrade because that could change the collator and corrupt an index.\n\nThe current patch deals with these problems by simply preserving the\nlocale (valid or not) during pg_upgrade, and only canonicalizing new\ncollations and databases (so #2 is only fixed for new\ncollations/databases). I think that's a good trade-off because a lot\nmore users will be on ICU now that it's the default, so let's avoid\ncreating more of the problem cases for those new users.\n\nTo get to perfectly-canonicalized catalogs for upgrades from earlier\nversions:\n\n* We need a way to detect #2, which I posted some code for in an\nuncommitted revision[1] of this patch series.\n\n* We need a way to detect #1 and #2 during the pg_upgrade --check\nphase.\n\n* We need actions that the user can take to correct the problems. 
I\nhave some ideas but they could use some discussion.\n\nI'm not sure all of those will be ready for v16, though.\n\nRegards,\n\tJeff Davis\n\n[1] See check_equivalent_icu_locales() and calling code here:\nhttps://www.postgresql.org/message-id/8c7af6820aed94dc7bc259d2aa7f9663518e6137.camel@j-davis.com\n\n\n\n\n",
"msg_date": "Thu, 30 Mar 2023 14:15:29 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Thu, 2023-03-30 at 08:59 +0200, Peter Eisentraut wrote:\n> I don't think we should show the notice when the canonicalization \n> doesn't change anything. This is not useful:\n> \n> +NOTICE: using language tag \"und-u-kf-upper\" for locale \"und-u-kf-\n> upper\"\n\nDone.\n\n> Also, the message should be phrased more from the perspective of the \n> user instead of using ICU jargon, like\n> \n> NOTICE: using canonicalized form \"%s\" for locale specification \"%s\"\n> \n> (Still too many big words?)\n\nChanged to:\n\n NOTICE: using standard form \"%s\" for locale \"%s\"\n\n> Needs documentation updates in doc/src/sgml/charset.sgml.\n\nI made a very minor update. Do you have something more specific in\nmind?\n\nRegards,\n\tJeff Davis",
"msg_date": "Fri, 31 Mar 2023 03:11:56 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On 31.03.23 12:11, Jeff Davis wrote:\n> On Thu, 2023-03-30 at 08:59 +0200, Peter Eisentraut wrote:\n>> I don't think we should show the notice when the canonicalization\n>> doesn't change anything. This is not useful:\n>>\n>> +NOTICE: using language tag \"und-u-kf-upper\" for locale \"und-u-kf-\n>> upper\"\n> \n> Done.\n> \n>> Also, the message should be phrased more from the perspective of the\n>> user instead of using ICU jargon, like\n>>\n>> NOTICE: using canonicalized form \"%s\" for locale specification \"%s\"\n>>\n>> (Still too many big words?)\n> \n> Changed to:\n> \n> NOTICE: using standard form \"%s\" for locale \"%s\"\n> \n>> Needs documentation updates in doc/src/sgml/charset.sgml.\n> \n> I made a very minor update. Do you have something more specific in\n> mind?\n\nThis all looks good to me.\n\n\n\n",
"msg_date": "Tue, 4 Apr 2023 15:13:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "MSVC now says this on master:\n\n[17:48:12.446] c:\\cirrus\\src\\backend\\utils\\adt\\pg_locale.c(2912) :\nwarning C4715: 'icu_language_tag': not all control paths return a\nvalue\n\nCI doesn't currently fail for MSVC warnings, so it's a bit hidden.\nFWIW cfbot does show this with a ⚠ sign with its new system for\ngrovelling through logs, which will now show up on every entry now\nthat this warning is in master.\n\n\n",
"msg_date": "Wed, 5 Apr 2023 08:42:41 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 08:59:41AM +0200, Peter Eisentraut wrote:\n> On 30.03.23 04:33, Jeff Davis wrote:\n> >Attached is a new version of the final patch, which performs\n> >canonicalization. I'm not 100% sure that it's wanted, but it still\n> >seems like a good idea to get the locales into a standard format in the\n> >catalogs, and if a lot more people start using ICU in v16 (because it's\n> >the default), then it would be a good time to do it. But perhaps there\n> >are risks?\n> \n> I say, let's do it.\n\nThe following is not cause for postgresql.git changes at this time, but I'm\nsharing it in case it saves someone else the study effort. Commit ea1db8a\n(\"Canonicalize ICU locale names to language tags.\") slowed buildfarm member\nhoverfly, but that disappears if I drop debug_parallel_query from its config.\nTypical end-to-end duration rose from 2h5m to 2h55m. Most-affected were\ninstallcheck runs, which rose from 11m to 19m. (The \"check\" stage uses\nNO_LOCALE=1, so it changed less.) From profiles, my theory is that each of\nthe many parallel workers burns notable CPU and I/O opening its ICU collator\nfor the first time. debug_parallel_query, by design, pursues parallelism\nindependent of cost, so this is working as intended. If it ever matters in\nnon-debug configurations, we might raise the default parallel_setup_cost or\npre-load ICU collators in the postmaster.\n\n\n",
"msg_date": "Tue, 2 May 2023 07:29:38 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Tue, 2023-05-02 at 07:29 -0700, Noah Misch wrote:\n> On Thu, Mar 30, 2023 at 08:59:41AM +0200, Peter Eisentraut wrote:\n> > On 30.03.23 04:33, Jeff Davis wrote:\n> > > Attached is a new version of the final patch, which performs\n> > > canonicalization. I'm not 100% sure that it's wanted, but it\n> > > still\n> > > seems like a good idea to get the locales into a standard format\n> > > in the\n> > > catalogs, and if a lot more people start using ICU in v16\n> > > (because it's\n> > > the default), then it would be a good time to do it. But perhaps\n> > > there\n> > > are risks?\n> > \n> > I say, let's do it.\n> \n> The following is not cause for postgresql.git changes at this time,\n> but I'm\n> sharing it in case it saves someone else the study effort. Commit\n> ea1db8a\n> (\"Canonicalize ICU locale names to language tags.\") slowed buildfarm\n> member\n> hoverfly, but that disappears if I drop debug_parallel_query from its\n> config.\n> Typical end-to-end duration rose from 2h5m to 2h55m. Most-affected\n> were\n> installcheck runs, which rose from 11m to 19m. (The \"check\" stage\n> uses\n> NO_LOCALE=1, so it changed less.) From profiles, my theory is that\n> each of\n> the many parallel workers burns notable CPU and I/O opening its ICU\n> collator\n> for the first time.\n\nI didn't repro the overall test timings (mine is ~1m40s compared to\n~11-19m on hoverfly) but I think a microbenchmark on the ICU calls\nshowed a possible cause.\n\nI ran open in a loop 10M times on the requested locale. The root locale\n(\"und\"[1], \"root\" and \"\") take about 1.3s to open 10M times; simple\nlocales like 'en' and 'fr-CA' and 'de-DE' are all a little shower at\n3.3s.\n\nUnrecognized locales like \"xyz\" take about 10 times as long: 13s to\nopen 10M times, presumably to perform the fallback logic that\nultimately opens the root locale. 
Not sure if 10X slower in the open\npath is enough to explain the overall test slowdown.\n\nMy guess is that the ICU locale for these tests is not recognized, or\nis some other locale that opens slowly. Can you tell me the actual\ndaticulocale?\n\nRegards,\n\tJeff Davis\n\n[1] It appears that \"und\" is also slow to open in ICU < 64. Hoverfly is\non v58, so it's possible that's the problem if daticulocale=und.\n\n\n\n",
"msg_date": "Sat, 20 May 2023 10:19:30 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Sat, May 20, 2023 at 10:19:30AM -0700, Jeff Davis wrote:\n> On Tue, 2023-05-02 at 07:29 -0700, Noah Misch wrote:\n> > On Thu, Mar 30, 2023 at 08:59:41AM +0200, Peter Eisentraut wrote:\n> > > On 30.03.23 04:33, Jeff Davis wrote:\n> > > > Attached is a new version of the final patch, which performs\n> > > > canonicalization. I'm not 100% sure that it's wanted, but it\n> > > > still\n> > > > seems like a good idea to get the locales into a standard format\n> > > > in the\n> > > > catalogs, and if a lot more people start using ICU in v16\n> > > > (because it's\n> > > > the default), then it would be a good time to do it. But perhaps\n> > > > there\n> > > > are risks?\n> > > \n> > > I say, let's do it.\n> > \n> > The following is not cause for postgresql.git changes at this time,\n> > but I'm\n> > sharing it in case it saves someone else the study effort.� Commit\n> > ea1db8a\n> > (\"Canonicalize ICU locale names to language tags.\") slowed buildfarm\n> > member\n> > hoverfly, but that disappears if I drop debug_parallel_query from its\n> > config.\n> > Typical end-to-end duration rose from 2h5m to 2h55m.� Most-affected\n> > were\n> > installcheck runs, which rose from 11m to 19m.� (The \"check\" stage\n> > uses\n> > NO_LOCALE=1, so it changed less.)� From profiles, my theory is that\n> > each of\n> > the many parallel workers burns notable CPU and I/O opening its ICU\n> > collator\n> > for the first time.\n> \n> I didn't repro the overall test timings (mine is ~1m40s compared to\n> ~11-19m on hoverfly) but I think a microbenchmark on the ICU calls\n> showed a possible cause.\n> \n> I ran open in a loop 10M times on the requested locale. 
The root locale\n> (\"und\"[1], \"root\" and \"\") take about 1.3s to open 10M times; simple\n> locales like 'en' and 'fr-CA' and 'de-DE' are all a little slower at\n> 3.3s.\n> \n> Unrecognized locales like \"xyz\" take about 10 times as long: 13s to\n> open 10M times, presumably to perform the fallback logic that\n> ultimately opens the root locale. Not sure if 10X slower in the open\n> path is enough to explain the overall test slowdown.\n> \n> My guess is that the ICU locale for these tests is not recognized, or\n> is some other locale that opens slowly. Can you tell me the actual\n> daticulocale?\n\nAs of commit b8c3f6d, InstallCheck-C got daticulocale=en-US-u-va-posix. Check\ngot daticulocale=NULL.\n\n(The machine in question was unusable for PostgreSQL from 2023-05-12 to\n2023-06-30, due to https://stackoverflow.com/q/76369660/16371536. That\ndelayed my response.)\n\n\n",
"msg_date": "Sat, 1 Jul 2023 10:31:32 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: ICU locale validation / canonicalization"
},
{
"msg_contents": "On Sat, 2023-07-01 at 10:31 -0700, Noah Misch wrote:\n> As of commit b8c3f6d, InstallCheck-C got daticulocale=en-US-u-va-\n> posix. Check\n> got daticulocale=NULL.\n\nWith the same test setup, that locale takes about 8.6 seconds (opening\nit 10M times), about 2.5X slower than \"en-US\" and about 7X slower than\n\"und\". I think that explains it.\n\nThe locale \"en-US-u-va-posix\" normally happens when passing a locale\nbeginning with \"C\" to ICU. After 2535c74b1a we don't get ICU locales\nfrom the environment anywhere, so that should be rare (and probably\nindicates a user mistake). I don't think this is a practical problem\nany more.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 07 Jul 2023 09:13:35 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU locale validation / canonicalization"
}
] |
[
{
"msg_contents": "Greetings, everyone!\n\nWhile working on an extension my colleague and I have found an \ninteresting case;\n\nWhen you try to execute next SQL statements on master branch of \nPostgreSQL:\n\nCREATE TABLE parted_fk_naming (\n id bigint NOT NULL default 1,\n id_abc bigint,\n CONSTRAINT dummy_constr FOREIGN KEY (id_abc)\n REFERENCES parted_fk_naming (id),\n PRIMARY KEY (id)\n )\n PARTITION BY LIST (id);\n\nCREATE TABLE parted_fk_naming_1 (\n id bigint NOT NULL default 1,\n id_abc bigint,\n PRIMARY KEY (id),\n CONSTRAINT dummy_constr CHECK (true)\n );\n\nALTER TABLE parted_fk_naming ATTACH PARTITION parted_fk_naming_1 FOR \nVALUES IN ('1');\n\nseemingly nothing suspicious happens.\nBut if you debug function ExecCheckPermissions and look into what is \npassed to function (contents of rangeTable and rteperminfos to be \nexact),\nyou'll see some strange behaviour:\n\n(\n {RANGETBLENTRY\n :alias <>\n :eref <>\n :rtekind 0\n :relid 16395\n :relkind r\n :rellockmode 1\n :tablesample <>\n :perminfoindex 0\n :lateral false\n :inh false\n :inFromCl false\n :securityQuals <>\n }\n {RANGETBLENTRY\n :alias <>\n :eref <>\n :rtekind 0\n :relid 16384\n :relkind p\n :rellockmode 1\n :tablesample <>\n :perminfoindex 0\n :lateral false\n :inh false\n :inFromCl false\n :securityQuals <>\n }\n)\n\n(\n {RTEPERMISSIONINFO\n :relid 16395\n :inh false\n :requiredPerms 2\n :checkAsUser 0\n :selectedCols (b 9)\n :insertedCols (b)\n :updatedCols (b)\n }\n {RTEPERMISSIONINFO\n :relid 16384\n :inh false\n :requiredPerms 2\n :checkAsUser 0\n :selectedCols (b 8)\n :insertedCols (b)\n :updatedCols (b)\n }\n)\n\nBoth of RangeTableEntries have a perminfoindex of 0 and simultaneously \nhave a RTEPERMISSIONINFO entry for them!\n\nRight now this behaviour isn't affecting anything, but in future should \nsomeone want to use ExecutorCheckPerms_hook from \n/src/backend/executor/execMain.c, its input parameters\nwon't correspond to each other since members of rangeTable will have \nincorrect 
perminfoindex.\n\nTo fix this, we're setting fk's index to 1 and pk's index to 2 in \n/src/backend/utils/adt/ri_triggers.c so that list being passed to \nExecCheckPermissions and its hook\nhas indexes for corresponding rteperminfos entries. 1 and 2 are chosen \nbecause perminfoindex is 1-based and fk is passed to list_make2 first;\n\nWe are eager to hear some thoughts from the community!\n\nRegards,\n\nOleg Tselebrovskii",
"msg_date": "Wed, 08 Feb 2023 15:21:03 +0700",
"msg_from": "o.tselebrovskiy@postgrespro.ru",
"msg_from_op": true,
"msg_subject": "A bug with ExecCheckPermissions"
},
{
"msg_contents": "On 2023-Feb-08, o.tselebrovskiy@postgrespro.ru wrote:\n\n> But if you debug function ExecCheckPermissions and look into what is passed\n> to function (contents of rangeTable and rteperminfos to be exact),\n> you'll see some strange behaviour:\n\n> Both of RangeTableEntries have a perminfoindex of 0 and simultaneously have\n> a RTEPERMISSIONINFO entry for them!\n\nOuch. Yeah, that's not great. As you say, it doesn't really affect\nanything, and we know full well that these RTEs are ad-hoc\nmanufactured. But as we claim that we still pass the RTEs for the\nbenefit of hooks, then we should at least make them match.\n\nI think we should also patch ExecCheckPermissions to use forboth(),\nscanning the RTEs as it goes over the perminfos, and make sure that the\nentries are consistent.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 8 Feb 2023 11:49:00 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: A bug with ExecCheckPermissions"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 16:19 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2023-Feb-08, o.tselebrovskiy@postgrespro.ru wrote:\n>\n> > But if you debug function ExecCheckPermissions and look into what is\n> passed\n> > to function (contents of rangeTable and rteperminfos to be exact),\n> > you'll see some strange behaviour:\n>\n> > Both of RangeTableEntries have a perminfoindex of 0 and simultaneously\n> have\n> > a RTEPERMISSIONINFO entry for them!\n>\n> Ouch. Yeah, that's not great. As you say, it doesn't really affect\n> anything, and we know full well that these RTEs are ad-hoc\n> manufactured. But as we claim that we still pass the RTEs for the\n> benefit of hooks, then we should at least make them match.\n\n\n+1. We don’t have anything in this (core) code path that would try to use\nperminfoindex for these RTEs, but there might well be in the future.\n\nI think we should also patch ExecCheckPermissions to use forboth(),\n> scanning the RTEs as it goes over the perminfos, and make sure that the\n> entries are consistent.\n\n\nHmm, we can’t use forboth here, because not all RTEs have the corresponding\nRTEPermissionInfo, inheritance children RTEs, for example. Also, it\ndoesn’t make much sense to reinstate the original loop over range table and\nfetch the RTEPermissionInfo for the RTEs with non-0 perminfoindex, because\nthe main goal of the patch was to make ExecCheckPermissions() independent\nof range table length.\n\n> --\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\nOn Wed, Feb 8, 2023 at 16:19 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:On 2023-Feb-08, o.tselebrovskiy@postgrespro.ru wrote:\n\n> But if you debug function ExecCheckPermissions and look into what is passed\n> to function (contents of rangeTable and rteperminfos to be exact),\n> you'll see some strange behaviour:\n\n> Both of RangeTableEntries have a perminfoindex of 0 and simultaneously have\n> a RTEPERMISSIONINFO entry for them!\n\nOuch. 
Yeah, that's not great. As you say, it doesn't really affect\nanything, and we know full well that these RTEs are ad-hoc\nmanufactured. But as we claim that we still pass the RTEs for the\nbenefit of hooks, then we should at least make them match.+1. We don’t have anything in this (core) code path that would try to use perminfoindex for these RTEs, but there might well be in the future.\nI think we should also patch ExecCheckPermissions to use forboth(),\nscanning the RTEs as it goes over the perminfos, and make sure that the\nentries are consistent.Hmm, we can’t use forboth here, because not all RTEs have the corresponding RTEPermissionInfo, inheritance children RTEs, for example. Also, it doesn’t make much sense to reinstate the original loop over range table and fetch the RTEPermissionInfo for the RTEs with non-0 perminfoindex, because the main goal of the patch was to make ExecCheckPermissions() independent of range table length.-- Thanks, Amit LangoteEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 8 Feb 2023 16:39:38 +0530",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A bug with ExecCheckPermissions"
},
{
"msg_contents": "On 2023-Feb-08, Amit Langote wrote:\n\n> On Wed, Feb 8, 2023 at 16:19 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > I think we should also patch ExecCheckPermissions to use forboth(),\n> > scanning the RTEs as it goes over the perminfos, and make sure that the\n> > entries are consistent.\n> \n> Hmm, we can’t use forboth here, because not all RTEs have the corresponding\n> RTEPermissionInfo, inheritance children RTEs, for example.\n\nDoh, of course.\n\n> Also, it doesn’t make much sense to reinstate the original loop over\n> range table and fetch the RTEPermissionInfo for the RTEs with non-0\n> perminfoindex, because the main goal of the patch was to make\n> ExecCheckPermissions() independent of range table length.\n\nYeah, I'm thinking in a mechanism that would allow us to detect bugs in\ndevelopment builds — no need to have it run in production builds.\nHowever, I can't see any useful way to implement it.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Estoy de acuerdo contigo en que la verdad absoluta no existe...\nEl problema es que la mentira sí existe y tu estás mintiendo\" (G. Lama)\n\n\n",
"msg_date": "Wed, 8 Feb 2023 19:23:18 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: A bug with ExecCheckPermissions"
},
{
"msg_contents": "On 08.02.2023 21:23, Alvaro Herrera wrote:\n> On 2023-Feb-08, Amit Langote wrote:\n> \n>> On Wed, Feb 8, 2023 at 16:19 Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n>>> I think we should also patch ExecCheckPermissions to use forboth(),\n>>> scanning the RTEs as it goes over the perminfos, and make sure that the\n>>> entries are consistent.\n>>\n>> Hmm, we can’t use forboth here, because not all RTEs have the corresponding\n>> RTEPermissionInfo, inheritance children RTEs, for example.\n> \n> Doh, of course.\n> \n>> Also, it doesn’t make much sense to reinstate the original loop over\n>> range table and fetch the RTEPermissionInfo for the RTEs with non-0\n>> perminfoindex, because the main goal of the patch was to make\n>> ExecCheckPermissions() independent of range table length.\n> \n> Yeah, I'm thinking in a mechanism that would allow us to detect bugs in\n> development builds — no need to have it run in production builds.\n> However, I can't see any useful way to implement it.\n>\n\n\nMaybe something like the attached would do?\n\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/",
"msg_date": "Thu, 9 Feb 2023 12:14:44 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: A bug with ExecCheckPermissions"
},
{
"msg_contents": "Hi,\n\nOn Thu, Feb 9, 2023 at 14:44 Sergey Shinderuk <s.shinderuk@postgrespro.ru>\nwrote:\n\n> On 08.02.2023 21:23, Alvaro Herrera wrote:\n> > On 2023-Feb-08, Amit Langote wrote:\n> >\n> >> On Wed, Feb 8, 2023 at 16:19 Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> >\n> >>> I think we should also patch ExecCheckPermissions to use forboth(),\n> >>> scanning the RTEs as it goes over the perminfos, and make sure that the\n> >>> entries are consistent.\n> >>\n> >> Hmm, we can’t use forboth here, because not all RTEs have the\n> corresponding\n> >> RTEPermissionInfo, inheritance children RTEs, for example.\n> >\n> > Doh, of course.\n> >\n> >> Also, it doesn’t make much sense to reinstate the original loop over\n> >> range table and fetch the RTEPermissionInfo for the RTEs with non-0\n> >> perminfoindex, because the main goal of the patch was to make\n> >> ExecCheckPermissions() independent of range table length.\n> >\n> > Yeah, I'm thinking in a mechanism that would allow us to detect bugs in\n> > development builds — no need to have it run in production builds.\n> > However, I can't see any useful way to implement it.\n> >\n>\n>\n> Maybe something like the attached would do?\n\n\nThanks for the patch. 
Something like this makes sense to me.\n\n> --\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 9 Feb 2023 15:00:15 +0530",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A bug with ExecCheckPermissions"
},
{
"msg_contents": "I didn't like very much this business of setting the perminfoindex\ndirectly to '2' and '1'. It looks ugly with no explanation. What do\nyou think of creating the as we go along and set each index\ncorrespondingly, as in the attached?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Ellos andaban todos desnudos como su madre los parió, y también las mujeres,\naunque no vi más que una, harto moza, y todos los que yo vi eran todos\nmancebos, que ninguno vi de edad de más de XXX años\" (Cristóbal Colón)",
"msg_date": "Thu, 9 Mar 2023 11:39:38 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: A bug with ExecCheckPermissions"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Thu, Mar 9, 2023 at 7:39 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I didn't like very much this business of setting the perminfoindex\n> directly to '2' and '1'. It looks ugly with no explanation. What do\n> you think of creating the as we go along and set each index\n> correspondingly, as in the attached?\n\nAgree it looks cleaner and self-explanatory that way. Thanks.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 Mar 2023 19:56:58 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A bug with ExecCheckPermissions"
},
{
"msg_contents": "On 2023-Mar-09, Amit Langote wrote:\n\n> On Thu, Mar 9, 2023 at 7:39 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > I didn't like very much this business of setting the perminfoindex\n> > directly to '2' and '1'. It looks ugly with no explanation. What do\n> > you think of creating the as we go along and set each index\n> > correspondingly, as in the attached?\n> \n> Agree it looks cleaner and self-explanatory that way. Thanks.\n\nThanks for looking! I have pushed it now. And many thanks to Oleg for\nnoticing and reporting it.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 4 May 2023 19:59:17 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: A bug with ExecCheckPermissions"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Thanks for looking! I have pushed it now. And many thanks to Oleg for\n> noticing and reporting it.\n\nIt looks like this patch caused a change in the order of output from\nthe sepgsql tests [1]. If you expected it to re-order permissions\nchecking then this is probably fine, and we should just update the\nexpected output.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2023-05-04%2018%3A52%3A12\n\n\n",
"msg_date": "Thu, 04 May 2023 18:11:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A bug with ExecCheckPermissions"
},
{
"msg_contents": "On 2023-May-04, Tom Lane wrote:\n\n> It looks like this patch caused a change in the order of output from\n> the sepgsql tests [1]. If you expected it to re-order permissions\n> checking then this is probably fine, and we should just update the\n> expected output.\n\nYeah, looks correct. Fix pushed.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Every machine is a smoke machine if you operate it wrong enough.\"\nhttps://twitter.com/libseybieda/status/1541673325781196801\n\n\n",
"msg_date": "Fri, 5 May 2023 11:15:36 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: A bug with ExecCheckPermissions"
}
] |
[
{
"msg_contents": "Hi hackers,\n In evaluate_function(), I find codes as shown below:\n\n /*\n * Ordinarily we are only allowed to simplify immutable functions. But for\n * purposes of estimation, we consider it okay to simplify functions that\n * are merely stable; the risk that the result might change from planning\n * time to execution time is worth taking in preference to not being able\n * to estimate the value at all.\n */\nif (funcform->provolatile == PROVOLATILE_IMMUTABLE)\n /* okay */ ;\nelse if (context->estimate && funcform->provolatile == PROVOLATILE_STABLE)\n /* okay */ ;\nelse\n return NULL;\n\nThe codes say that stable function can not be simplified here(e.g. planning\nphase).\nI want to know the reason why stable function can not be simplified in\nplanning phase.\nMaybe show me a example that it will be incorrect for a query if simplify\nstable function in\nplanning phases.\n\nWith kindest regards, tender wang\n\nHi hackers, In evaluate_function(), I find codes as shown below: /* * Ordinarily we are only allowed to simplify immutable functions. But for * purposes of estimation, we consider it okay to simplify functions that * are merely stable; the risk that the result might change from planning * time to execution time is worth taking in preference to not being able * to estimate the value at all. */\tif (funcform->provolatile == PROVOLATILE_IMMUTABLE) /* okay */ ;\telse if (context->estimate && funcform->provolatile == PROVOLATILE_STABLE) /* okay */ ;\telse return NULL;The codes say that stable function can not be simplified here(e.g. planning phase). I want to know the reason why stable function can not be simplified in planning phase.Maybe show me a example that it will be incorrect for a query if simplify stable function in planning phases.With kindest regards, tender wang",
"msg_date": "Wed, 8 Feb 2023 16:59:29 +0800",
"msg_from": "tender wang <tndrwang@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why cann't simplify stable function in planning phase?"
},
{
"msg_contents": "On Wed, 2023-02-08 at 16:59 +0800, tender wang wrote:\n> In evaluate_function(), I find codes as shown below:\n> \n> /*\n> * Ordinarily we are only allowed to simplify immutable functions. But for\n> * purposes of estimation, we consider it okay to simplify functions that\n> * are merely stable; the risk that the result might change from planning\n> * time to execution time is worth taking in preference to not being able\n> * to estimate the value at all.\n> */\n> if (funcform->provolatile == PROVOLATILE_IMMUTABLE)\n> /* okay */ ;\n> else if (context->estimate && funcform->provolatile == PROVOLATILE_STABLE)\n> /* okay */ ;\n> else\n> return NULL;\n> \n> The codes say that stable function can not be simplified here(e.g. planning phase). \n> I want to know the reason why stable function can not be simplified in planning phase.\n> Maybe show me a example that it will be incorrect for a query if simplify stable function in \n> planning phases.\n\nQuery planning and query execution can happen at different times and using\ndifferent snapshots, so the result of a stable function can change in the\nmeantime. Think of prepared statements using a generic plan.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 08 Feb 2023 11:24:33 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Why cann't simplify stable function in planning phase?"
},
{
"msg_contents": "\n\nOn 2/8/23 09:59, tender wang wrote:\n> Hi hackers,\n> In evaluate_function(), I find codes as shown below:\n> \n> /*\n> * Ordinarily we are only allowed to simplify immutable functions. But for\n> * purposes of estimation, we consider it okay to simplify functions that\n> * are merely stable; the risk that the result might change from planning\n> * time to execution time is worth taking in preference to not being able\n> * to estimate the value at all.\n> */\n> if (funcform->provolatile == PROVOLATILE_IMMUTABLE)\n> /* okay */ ;\n> else if (context->estimate && funcform->provolatile == PROVOLATILE_STABLE)\n> /* okay */ ;\n> else\n> return NULL;\n> \n> The codes say that stable function can not be simplified here(e.g.\n> planning phase). \n> I want to know the reason why stable function can not be simplified in\n> planning phase.\n> Maybe show me a example that it will be incorrect for a query if\n> simplify stable function in \n> planning phases.\n> \n\nA function is \"stable\" only within a particular execution - if you run a\nquery with a stable function twice, the function is allowed to return\ndifferent results.\n\nIf you consider parse analysis / planning as a separate query, this\nexplains why we can't simply evaluate the function in parse analysis and\nthen use the value in actual execution. See analyze_requires_snapshot()\nreferences in postgres.c.\n\nNote: To be precise this is not about \"executions\" but about snapshots,\nand we could probably simplify the function call with isolation levels\nthat maintain a single snapshot (e.g. REPEATABLE READ). But we don't.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 Feb 2023 11:27:20 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Why cann't simplify stable function in planning phase?"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> Note: To be precise this is not about \"executions\" but about snapshots,\n> and we could probably simplify the function call with isolation levels\n> that maintain a single snapshot (e.g. REPEATABLE READ). But we don't.\n\nWe don't do that because, in fact, execution is *never* done with the same\nsnapshot used for planning. See comment in postgres.c:\n\n * While it looks promising to reuse the same snapshot for query\n * execution (at least for simple protocol), unfortunately it causes\n * execution to use a snapshot that has been acquired before locking\n * any of the tables mentioned in the query. This creates user-\n * visible anomalies, so refrain. Refer to\n * https://postgr.es/m/flat/5075D8DF.6050500@fuzzy.cz for details.\n\nI'm not entirely sure that that locking argument still holds, but having\nbeen burned once I'm pretty hesitant to try that again.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Feb 2023 09:57:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why cann't simplify stable function in planning phase?"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 09:57:04 -0500, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> > Note: To be precise this is not about \"executions\" but about snapshots,\n> > and we could probably simplify the function call with isolation levels\n> > that maintain a single snapshot (e.g. REPEATABLE READ). But we don't.\n> \n> We don't do that because, in fact, execution is *never* done with the same\n> snapshot used for planning. See comment in postgres.c:\n> \n> * While it looks promising to reuse the same snapshot for query\n> * execution (at least for simple protocol), unfortunately it causes\n> * execution to use a snapshot that has been acquired before locking\n> * any of the tables mentioned in the query. This creates user-\n> * visible anomalies, so refrain. Refer to\n> * https://postgr.es/m/flat/5075D8DF.6050500@fuzzy.cz for details.\n> \n> I'm not entirely sure that that locking argument still holds, but having\n> been burned once I'm pretty hesitant to try that again.\n\nBecause we now avoid re-computing snapshots, if there weren't any concurrent\ncommits/aborts, the gain would likely not be all that high anyway.\n\nWe should work on gettting rid of the ProcArrayLock acquisition in case we can\nreuse the snapshot, though. I think it's doable safely, but when working on\nit, I didn't succeed at writing a concise description as to why it's sfae, so\nI decided that the rest of the wins are big enough to not focus on it then and\nthere.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Feb 2023 08:15:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why cann't simplify stable function in planning phase?"
}
] |
[
{
"msg_contents": "PostgreSQL 16 Dev apt-based Linux, unable to install ....\nmake sure all dependencies are resolved, like libpq5 (or higher) for \ntesting ...\n\n\" postgresql-client-16 : Prerequisites: libpq5 (>= 16~~devel) but \n15.1-1.pgdg+1+b1 will be installed \"\n\n\n-- \n\n______________________________________________________________________________________\nMy Twitter Page:\ntwitter.com OpenSimFan <http://twitter.com/OpenSimFan>\n\nMy Instagram page:\ninstagram.com dutchglory <http://instagram.com/dutchglory>\n\nMy Facebook page (be my friend, please)\nfacebook.com André Verwijs <http://www.facebook.com/andre.verwijs>\n\n\n\n\n\n\n\n\nPostgreSQL 16 Dev apt-based Linux, unable to install ....\n make sure all dependencies are resolved, like libpq5 (or higher) \n for testing ... \n\n\" postgresql-client-16 : Prerequisites: libpq5 (>= 16~~devel)\n but 15.1-1.pgdg+1+b1 will be installed \" \n\n\n\n-- \n\n______________________________________________________________________________________\n My Twitter Page:\ntwitter.com OpenSimFan\n\n My Instagram page:\ninstagram.com\n dutchglory\n\n My Facebook page (be my friend, please)\nfacebook.com\n André Verwijs",
"msg_date": "Wed, 8 Feb 2023 10:46:42 +0100",
"msg_from": "=?UTF-8?Q?Andr=c3=a9_Verwijs?= <dutchgigalo@gmail.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 16 Dev apt-based Linux unable to install"
},
{
"msg_contents": "On Wed, Feb 08, 2023 at 10:46:42AM +0100, Andr� Verwijs wrote:\n> \n> PostgreSQL 16 Dev� apt-based Linux,� unable to install� ....\n> make sure all dependencies are resolved, like libpq5 (or higher) for testing\n> ...\n> \n> \" postgresql-client-16 : Prerequisites: libpq5 (>= 16~~devel) but\n> 15.1-1.pgdg+1+b1 will be installed \"\n\nFew things:\n\nYou're always going to want to show the command that you ran in addition\nto the error that you got.\n\nThis has to do with the debian packages, and not to postgres itself, so\nthis other list is a better place to ask than -hackers:\nhttps://www.postgresql.org/list/pgsql-pkg-debian/\n\nI think you'll need to use a command like\nsudo apt-get install postgresql-16 -t buster-pgdg-snapshot\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 8 Feb 2023 08:23:22 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 Dev apt-based Linux unable to install"
}
] |
[
{
"msg_contents": "Most meson options (meson_options.txt) that enable an external \ndependency (e.g., icu, ldap) are of type 'feature'. Most of these have \na default value of 'auto', which means they are pulled in automatically \nif found. Some have a default value of 'disabled' for specific reasons \n(e.g., selinux). This is all good.\n\nTwo options deviate from this in annoying ways:\n\noption('ssl', type : 'combo', choices : ['none', 'openssl'],\n value : 'none',\n description: 'use LIB for SSL/TLS support (openssl)')\n\noption('uuid', type : 'combo', choices : ['none', 'bsd', 'e2fs', 'ossp'],\n value : 'none',\n description: 'build contrib/uuid-ossp using LIB')\n\nThese were moved over from configure like that.\n\nThe problem is that these features now cannot be automatically enabled \nand behave annoyingly different from other feature options.\n\nFor the 'ssl' option, we have deprecated the --with-openssl option in \nconfigure and replaced it with --with-ssl, in anticipation of other SSL \nimplementations. None of that ever happened or is currently planned \nAFAICT. So I suggest that we semi-revert this, so that we can make \n'openssl' an auto option in meson.\n\nFor the 'uuid' option, I'm not sure what the best way to address this \nwould. We could establish a search order of libraries that is used if \nno specific one is set (similar to libreadline, libedit, in a way). So \nwe'd have one option 'uuid' that is of type feature with default 'auto' \nand another option, say, 'uuid-library' of type 'combo'.\n\nThoughts?\n\n\n",
"msg_date": "Wed, 8 Feb 2023 11:45:05 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\n\nOn 2/8/23 13:45, Peter Eisentraut wrote:\n>\n> The problem is that these features now cannot be automatically enabled \n> and behave annoyingly different from other feature options.\n\nAgreed.\n\n\n> For the 'ssl' option, we have deprecated the --with-openssl option in \n> configure and replaced it with --with-ssl, in anticipation of other \n> SSL implementations. None of that ever happened or is currently \n> planned AFAICT. So I suggest that we semi-revert this, so that we can \n> make 'openssl' an auto option in meson.\n\n+1\n\n\n> For the 'uuid' option, I'm not sure what the best way to address this \n> would. We could establish a search order of libraries that is used if \n> no specific one is set (similar to libreadline, libedit, in a way). \n> So we'd have one option 'uuid' that is of type feature with default \n> 'auto' and another option, say, 'uuid-library' of type 'combo'.\n>\n\nYour suggestion looks good and TCL already has a similar implementation \nwith what you suggested:\n\noption('pltcl', type : 'feature', value: 'auto',\n description: 'build with TCL support')\n\noption('tcl_version', type : 'string', value : 'tcl',\n description: 'specify TCL version')\n\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n\n",
"msg_date": "Wed, 8 Feb 2023 14:48:24 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 11:45:05 +0100, Peter Eisentraut wrote:\n> Most meson options (meson_options.txt) that enable an external dependency\n> (e.g., icu, ldap) are of type 'feature'. Most of these have a default value\n> of 'auto', which means they are pulled in automatically if found. Some have\n> a default value of 'disabled' for specific reasons (e.g., selinux). This is\n> all good.\n> \n> Two options deviate from this in annoying ways:\n> \n> option('ssl', type : 'combo', choices : ['none', 'openssl'],\n> value : 'none',\n> description: 'use LIB for SSL/TLS support (openssl)')\n> \n> option('uuid', type : 'combo', choices : ['none', 'bsd', 'e2fs', 'ossp'],\n> value : 'none',\n> description: 'build contrib/uuid-ossp using LIB')\n> \n> These were moved over from configure like that.\n>\n> The problem is that these features now cannot be automatically enabled and\n> behave annoyingly different from other feature options.\n\nOh, yes, this has been bothering me too.\n\n\n> For the 'ssl' option, we have deprecated the --with-openssl option in\n> configure and replaced it with --with-ssl, in anticipation of other SSL\n> implementations. None of that ever happened or is currently planned AFAICT.\n> So I suggest that we semi-revert this, so that we can make 'openssl' an auto\n> option in meson.\n\nHm. I'm inclined to leave it there - I do think it's somewhat likely that\nwe'll eventually end up with some platform native library. I think it's likely\nthe NSS patch isn't going anywhere, but I'm not sure that's true for\ne.g. using the windows encryption library. IIRC Heikki had a patch at some\npoint.\n\nI'd probably just add a 'auto' option, and manually make it behave like a\nfeature option.\n\n\n> For the 'uuid' option, I'm not sure what the best way to address this would.\n> We could establish a search order of libraries that is used if no specific\n> one is set (similar to libreadline, libedit, in a way). 
So we'd have one\n> option 'uuid' that is of type feature with default 'auto' and another\n> option, say, 'uuid-library' of type 'combo'.\n\nOr add 'auto' as a combo option, and handle the value of the auto_features\noption ourselves?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Feb 2023 08:23:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\n\nOn 2/8/23 19:23, Andres Freund wrote:\n>> For the 'uuid' option, I'm not sure what the best way to address this would.\n>> We could establish a search order of libraries that is used if no specific\n>> one is set (similar to libreadline, libedit, in a way). So we'd have one\n>> option 'uuid' that is of type feature with default 'auto' and another\n>> option, say, 'uuid-library' of type 'combo'.\n> Or add 'auto' as a combo option, and handle the value of the auto_features\n> option ourselves?\n\nIf we do it like this, meson's --auto-features option won't work for \nuuid. Is this something we want to consider?\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n\n\n",
"msg_date": "Tue, 14 Feb 2023 16:52:46 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nI added SSL and UUID patches. UUID patch has two different fixes:\n\n1 - v1-0002-meson-Refactor-UUID-option.patch: Adding 'auto' choice to \n'uuid' combo option.\n\n2 - v1-0002-meson-Refactor-UUID-option-with-uuid_library.patch: Making \n'uuid' feature option and adding new 'uuid_library' combo option with \n['auto', 'bsd', 'e2fs', 'ossp'] choices. If 'uuid_library' is set other \nthan 'auto' and it can't be found, build throws an error.\n\nWhat do you think?\n\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Mon, 20 Feb 2023 15:33:29 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "On 20.02.23 13:33, Nazir Bilal Yavuz wrote:\n> I added SSL and UUID patches. UUID patch has two different fixes:\n> \n> 1 - v1-0002-meson-Refactor-UUID-option.patch: Adding 'auto' choice to \n> 'uuid' combo option.\n> \n> 2 - v1-0002-meson-Refactor-UUID-option-with-uuid_library.patch: Making \n> 'uuid' feature option and adding new 'uuid_library' combo option with \n> ['auto', 'bsd', 'e2fs', 'ossp'] choices. If 'uuid_library' is set other \n> than 'auto' and it can't be found, build throws an error.\n> \n> What do you think?\n\nI like the second approach, with a 'uuid' feature option. As you wrote \nearlier, adding an 'auto' choice to a combo option doesn't work fully \nlike a real feature option.\n\nBut what does uuid_library=auto do? Which one does it pick? This is \nnot a behavior we currently have, is it?\n\nI would rename the ssl_type variable to ssl_library, so that if we ever \nexpose that as an option, it would be consistent with uuid_library.\n\n\n\n",
"msg_date": "Mon, 20 Feb 2023 19:53:53 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn Mon, 20 Feb 2023 at 21:53, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> But what does uuid_library=auto do? Which one does it pick? This is\n> not a behavior we currently have, is it?\n\nYes, we didn't have that behavior before. It checks uuid libs by the\norder of 'e2fs', 'bsd' and 'ossp'. It uses the first one it finds and\ndoesn't try to find the rest but the build doesn't fail if it can't\nfind any library.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Mon, 20 Feb 2023 22:10:17 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "> On 20 Feb 2023, at 19:53, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> I would rename the ssl_type variable to ssl_library, so that if we ever expose that as an option, it would be consistent with uuid_library.\n\n+1, ssl_library is a better name.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 20 Feb 2023 20:39:01 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-20 19:53:53 +0100, Peter Eisentraut wrote:\n> On 20.02.23 13:33, Nazir Bilal Yavuz wrote:\n> > I added SSL and UUID patches. UUID patch has two different fixes:\n> > \n> > 1 - v1-0002-meson-Refactor-UUID-option.patch: Adding 'auto' choice to\n> > 'uuid' combo option.\n> > \n> > 2 - v1-0002-meson-Refactor-UUID-option-with-uuid_library.patch: Making\n> > 'uuid' feature option and adding new 'uuid_library' combo option with\n> > ['auto', 'bsd', 'e2fs', 'ossp'] choices. If 'uuid_library' is set other\n> > than 'auto' and it can't be found, build throws an error.\n> > \n> > What do you think?\n> \n> I like the second approach, with a 'uuid' feature option. As you wrote\n> earlier, adding an 'auto' choice to a combo option doesn't work fully like a\n> real feature option.\n\nBut we can make it behave exactly like one, by checking the auto_features\noption.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Feb 2023 11:42:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn Mon, 20 Feb 2023 at 22:42, Andres Freund <andres@anarazel.de> wrote:\n> On 2023-02-20 19:53:53 +0100, Peter Eisentraut wrote:\n> > On 20.02.23 13:33, Nazir Bilal Yavuz wrote:\n> > > I added SSL and UUID patches. UUID patch has two different fixes:\n> > >\n> > > 1 - v1-0002-meson-Refactor-UUID-option.patch: Adding 'auto' choice to\n> > > 'uuid' combo option.\n> > >\n> > > 2 - v1-0002-meson-Refactor-UUID-option-with-uuid_library.patch: Making\n> > > 'uuid' feature option and adding new 'uuid_library' combo option with\n> > > ['auto', 'bsd', 'e2fs', 'ossp'] choices. If 'uuid_library' is set other\n> > > than 'auto' and it can't be found, build throws an error.\n> > >\n> > > What do you think?\n> >\n> > I like the second approach, with a 'uuid' feature option. As you wrote\n> > earlier, adding an 'auto' choice to a combo option doesn't work fully like a\n> > real feature option.\n>\n> But we can make it behave exactly like one, by checking the auto_features\n> option.\n\nYes, we can set it like `uuidopt = get_option('auto_features')`.\nHowever, if someone wants to set 'auto_features' to 'disabled' but\n'uuid' to 'enabled'(to find at least one working uuid library); this\nwon't be possible. We can add 'enabled', 'disabled and 'auto' choices\nto 'uuid' combo option to make all behaviours possible but adding\n'uuid' feature option and 'uuid_library' combo option seems better to\nme.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Tue, 21 Feb 2023 19:32:10 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "On 21.02.23 17:32, Nazir Bilal Yavuz wrote:\n>>> I like the second approach, with a 'uuid' feature option. As you wrote\n>>> earlier, adding an 'auto' choice to a combo option doesn't work fully like a\n>>> real feature option.\n>> But we can make it behave exactly like one, by checking the auto_features\n>> option.\n> Yes, we can set it like `uuidopt = get_option('auto_features')`.\n> However, if someone wants to set 'auto_features' to 'disabled' but\n> 'uuid' to 'enabled'(to find at least one working uuid library); this\n> won't be possible. We can add 'enabled', 'disabled and 'auto' choices\n> to 'uuid' combo option to make all behaviours possible but adding\n> 'uuid' feature option and 'uuid_library' combo option seems better to\n> me.\n\nI think the uuid side of this is making this way too complicated. I'm \ncontent leaving this as a manual option for now.\n\nThere is much more value in making the ssl option work automatically. \nSo I would welcome a patch that just makes -Dssl=auto work smoothly, \nperhaps using the \"trick\" that Andres described.\n\n\n\n",
"msg_date": "Wed, 22 Feb 2023 10:14:05 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn Wed, 22 Feb 2023 at 12:14, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 21.02.23 17:32, Nazir Bilal Yavuz wrote:\n> >>> I like the second approach, with a 'uuid' feature option. As you wrote\n> >>> earlier, adding an 'auto' choice to a combo option doesn't work fully like a\n> >>> real feature option.\n> >> But we can make it behave exactly like one, by checking the auto_features\n> >> option.\n> > Yes, we can set it like `uuidopt = get_option('auto_features')`.\n> > However, if someone wants to set 'auto_features' to 'disabled' but\n> > 'uuid' to 'enabled'(to find at least one working uuid library); this\n> > won't be possible. We can add 'enabled', 'disabled and 'auto' choices\n> > to 'uuid' combo option to make all behaviours possible but adding\n> > 'uuid' feature option and 'uuid_library' combo option seems better to\n> > me.\n>\n> I think the uuid side of this is making this way too complicated. I'm\n> content leaving this as a manual option for now.\n>\n> There is much more value in making the ssl option work automatically.\n> So I would welcome a patch that just makes -Dssl=auto work smoothly,\n> perhaps using the \"trick\" that Andres described.\n>\n\nThanks for the feedback. I updated the ssl patch and if you like\nchanges, I can apply the same logic to uuid.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Fri, 24 Feb 2023 16:01:29 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "On 24.02.23 14:01, Nazir Bilal Yavuz wrote:\n> Thanks for the feedback. I updated the ssl patch and if you like\n> changes, I can apply the same logic to uuid.\n\nMaybe we can make some of the logic less nested. Right now there is\n\n if sslopt != 'none'\n\n if not ssl.found() and sslopt in ['auto', 'openssl']\n\nI think at that point, ssl.found() is never true, so it can be removed. \nAnd the two checks for sslopt are nearly redundant.\n\nAt the end of the block, there is\n\n # At least one SSL library must be found, otherwise throw an error\n if sslopt == 'auto' and auto_features.enabled()\n error('SSL Library could not be found')\n endif\n endif\n\nwhich also implies sslopt != 'none'. So I think the whole thing could be\n\n if sslopt in ['auto', 'openssl']\n\n ...\n\n endif\n\n if sslopt == 'auto' and auto_features.enabled()\n error('SSL Library could not be found')\n endif\n\nboth at the top level.\n\nAnother issue, I think this is incorrect:\n\n+ openssl_required ? error('openssl function @0@ is \nrequired'.format(func)) : \\\n+ message('openssl function @0@ is \nrequired'.format(func))\n\nWe don't want to issue a message like this when a non-required function \nis missing.\n\n\n",
"msg_date": "Wed, 1 Mar 2023 16:52:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nThanks for the review.\n\nOn Wed, 1 Mar 2023 at 18:52, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Maybe we can make some of the logic less nested. Right now there is\n>\n> if sslopt != 'none'\n>\n> if not ssl.found() and sslopt in ['auto', 'openssl']\n>\n> I think at that point, ssl.found() is never true, so it can be removed.\n\nI agree, ssl.found() can be removed.\n\n> And the two checks for sslopt are nearly redundant.\n>\n> At the end of the block, there is\n>\n> # At least one SSL library must be found, otherwise throw an error\n> if sslopt == 'auto' and auto_features.enabled()\n> error('SSL Library could not be found')\n> endif\n> endif\n>\n> which also implies sslopt != 'none'. So I think the whole thing could be\n>\n> if sslopt in ['auto', 'openssl']\n>\n> ...\n>\n> endif\n>\n> if sslopt == 'auto' and auto_features.enabled()\n> error('SSL Library could not be found')\n> endif\n>\n> both at the top level.\n>\n\nI am kind of confused. I added these checks for considering other SSL\nimplementations in the future, for this reason I have two nested if\nchecks. The top one is for checking if we need to search an SSL\nlibrary and the nested one is for checking if we need to search this\nspecific SSL library. What do you think?\n\nThe other thing is(which I forgot before) I need to add \"and not\nssl.found()\" condition to the \"if sslopt == 'auto' and\nauto_features.enabled()\" check.\n\n> Another issue, I think this is incorrect:\n>\n> + openssl_required ? error('openssl function @0@ is\n> required'.format(func)) : \\\n> + message('openssl function @0@ is\n> required'.format(func))\n>\n> We don't want to issue a message like this when a non-required function\n> is missing.\n\nI agree, the message part can be removed.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 2 Mar 2023 13:41:58 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "On 02.03.23 11:41, Nazir Bilal Yavuz wrote:\n> I am kind of confused. I added these checks for considering other SSL\n> implementations in the future, for this reason I have two nested if\n> checks. The top one is for checking if we need to search an SSL\n> library and the nested one is for checking if we need to search this\n> specific SSL library. What do you think?\n\nI suppose that depends on how you envision integrating other SSL \nlibraries into this logic. It's not that important right now; if the \nstructure makes sense to you, that's fine.\n\nPlease send an updated patch with the small changes that have been \nmentioned.\n\n\n\n",
"msg_date": "Fri, 3 Mar 2023 10:16:09 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn Fri, 3 Mar 2023 at 12:16, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 02.03.23 11:41, Nazir Bilal Yavuz wrote:\n> > I am kind of confused. I added these checks for considering other SSL\n> > implementations in the future, for this reason I have two nested if\n> > checks. The top one is for checking if we need to search an SSL\n> > library and the nested one is for checking if we need to search this\n> > specific SSL library. What do you think?\n>\n> I suppose that depends on how you envision integrating other SSL\n> libraries into this logic. It's not that important right now; if the\n> structure makes sense to you, that's fine.\n>\n> Please send an updated patch with the small changes that have been\n> mentioned.\n>\n\nThe updated patch is attached.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Fri, 3 Mar 2023 13:01:00 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "On 03.03.23 11:01, Nazir Bilal Yavuz wrote:\n> On Fri, 3 Mar 2023 at 12:16, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 02.03.23 11:41, Nazir Bilal Yavuz wrote:\n>>> I am kind of confused. I added these checks for considering other SSL\n>>> implementations in the future, for this reason I have two nested if\n>>> checks. The top one is for checking if we need to search an SSL\n>>> library and the nested one is for checking if we need to search this\n>>> specific SSL library. What do you think?\n>>\n>> I suppose that depends on how you envision integrating other SSL\n>> libraries into this logic. It's not that important right now; if the\n>> structure makes sense to you, that's fine.\n>>\n>> Please send an updated patch with the small changes that have been\n>> mentioned.\n>>\n> \n> The updated patch is attached.\n\nThis seems to work well.\n\nOne flaw, the \"External libraries\" summary shows something like\n\n ssl : YES 3.0.7\n\nIt would be nice if it showed \"openssl\".\n\nHow about we just hardcode \"openssl\" here instead? We could build that \narray dynamically, of course, but maybe we leave that until we actually \nhave a need?\n\n\n\n",
"msg_date": "Thu, 9 Mar 2023 14:45:24 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "> On 9 Mar 2023, at 14:45, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> How about we just hardcode \"openssl\" here instead? We could build that array dynamically, of course, but maybe we leave that until we actually have a need?\n\nAt least for 16 keeping it hardcoded is an entirely safe bet so +1 for leaving\nadditional complexity for when needed.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 9 Mar 2023 14:54:53 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn Thu, 9 Mar 2023 at 16:54, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 9 Mar 2023, at 14:45, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>\n> > How about we just hardcode \"openssl\" here instead? We could build that array dynamically, of course, but maybe we leave that until we actually have a need?\n>\n> At least for 16 keeping it hardcoded is an entirely safe bet so +1 for leaving\n> additional complexity for when needed.\n\nWe already have the 'ssl_library' variable. Can't we use that instead\nof hardcoding 'openssl'? e.g:\n\nsummary(\n {\n 'ssl': ssl.found() ? [ssl, '(@0@)'.format(ssl_library)] : ssl,\n },\n section: 'External libraries',\n list_sep: ', ',\n)\n\nAnd it will output:\nssl : YES 3.0.8, (openssl)\n\nI don't think that using 'ssl_library' will increase the complexity.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 9 Mar 2023 17:12:26 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "> On 9 Mar 2023, at 15:12, Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:\n> \n> Hi,\n> \n> On Thu, 9 Mar 2023 at 16:54, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 9 Mar 2023, at 14:45, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>> \n>>> How about we just hardcode \"openssl\" here instead? We could build that array dynamically, of course, but maybe we leave that until we actually have a need?\n>> \n>> At least for 16 keeping it hardcoded is an entirely safe bet so +1 for leaving\n>> additional complexity for when needed.\n> \n> We already have the 'ssl_library' variable. Can't we use that instead\n> of hardcoding 'openssl'? e.g:\n> \n> summary(\n> {\n> 'ssl': ssl.found() ? [ssl, '(@0@)'.format(ssl_library)] : ssl,\n> },\n> section: 'External libraries',\n> list_sep: ', ',\n> )\n> \n> And it will output:\n> ssl : YES 3.0.8, (openssl)\n> \n> I don't think that using 'ssl_library' will increase the complexity.\n\nThat seems like a good idea.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 9 Mar 2023 15:15:49 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "On 09.03.23 15:12, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> On Thu, 9 Mar 2023 at 16:54, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>\n>>> On 9 Mar 2023, at 14:45, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>>> How about we just hardcode \"openssl\" here instead? We could build that array dynamically, of course, but maybe we leave that until we actually have a need?\n>>\n>> At least for 16 keeping it hardcoded is an entirely safe bet so +1 for leaving\n>> additional complexity for when needed.\n> \n> We already have the 'ssl_library' variable. Can't we use that instead\n> of hardcoding 'openssl'? e.g:\n> \n> summary(\n> {\n> 'ssl': ssl.found() ? [ssl, '(@0@)'.format(ssl_library)] : ssl,\n> },\n> section: 'External libraries',\n> list_sep: ', ',\n> )\n> \n> And it will output:\n> ssl : YES 3.0.8, (openssl)\n> \n> I don't think that using 'ssl_library' will increase the complexity.\n\nThen we might as well use ssl_library as the key, like:\n\n{\n ...\n 'selinux': selinux,\n ssl_library: ssl,\n 'systemd': systemd,\n ...\n}\n\n\n\n",
"msg_date": "Thu, 9 Mar 2023 15:18:07 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn Thu, 9 Mar 2023 at 17:18, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 09.03.23 15:12, Nazir Bilal Yavuz wrote:\n> > Hi,\n> >\n> > On Thu, 9 Mar 2023 at 16:54, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>\n> >>> On 9 Mar 2023, at 14:45, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> >>\n> >>> How about we just hardcode \"openssl\" here instead? We could build that array dynamically, of course, but maybe we leave that until we actually have a need?\n> >>\n> >> At least for 16 keeping it hardcoded is an entirely safe bet so +1 for leaving\n> >> additional complexity for when needed.\n> >\n> > We already have the 'ssl_library' variable. Can't we use that instead\n> > of hardcoding 'openssl'? e.g:\n> >\n> > summary(\n> > {\n> > 'ssl': ssl.found() ? [ssl, '(@0@)'.format(ssl_library)] : ssl,\n> > },\n> > section: 'External libraries',\n> > list_sep: ', ',\n> > )\n> >\n> > And it will output:\n> > ssl : YES 3.0.8, (openssl)\n> >\n> > I don't think that using 'ssl_library' will increase the complexity.\n>\n> Then we might as well use ssl_library as the key, like:\n>\n> {\n> ...\n> 'selinux': selinux,\n> ssl_library: ssl,\n> 'systemd': systemd,\n> ...\n> }\n>\n\nThere will be a problem if ssl is not found. It will output 'none: NO'\nbecause 'ssl_library' is initialized as 'none' for now. We can\ninitialize 'ssl_library' as 'ssl' but I am not sure that is a good\nidea.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 9 Mar 2023 17:40:13 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "On 09.03.23 14:54, Daniel Gustafsson wrote:\n>> On 9 Mar 2023, at 14:45, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n>> How about we just hardcode \"openssl\" here instead? We could build that array dynamically, of course, but maybe we leave that until we actually have a need?\n> \n> At least for 16 keeping it hardcoded is an entirely safe bet so +1 for leaving\n> additional complexity for when needed.\n\nI have committed it like this.\n\nI didn't like the other variants, because they would cause the openssl \nline to stick out for purely implementation reasons (e.g., we don't have \na line \"compression: YES (lz4)\". If we get support for another ssl \nlibrary, we can easily reconsider this.\n\n\n\n",
"msg_date": "Mon, 13 Mar 2023 07:27:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 07:27:18AM +0100, Peter Eisentraut wrote:\n> I have committed it like this.\n\nI noticed that after 6a30027, if you don't have the OpenSSL headers\ninstalled, 'meson setup' will fail:\n\n\tmeson.build:1195:4: ERROR: C header 'openssl/ssl.h' not found\n\nShouldn't \"auto\" cause Postgres to be built without OpenSSL if the required\nheaders are not present?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 13 Mar 2023 11:04:32 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn Mon, 13 Mar 2023 at 21:04, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Mar 13, 2023 at 07:27:18AM +0100, Peter Eisentraut wrote:\n> > I have committed it like this.\n>\n> I noticed that after 6a30027, if you don't have the OpenSSL headers\n> installed, 'meson setup' will fail:\n>\n> meson.build:1195:4: ERROR: C header 'openssl/ssl.h' not found\n>\n> Shouldn't \"auto\" cause Postgres to be built without OpenSSL if the required\n> headers are not present?\n\nYes, I tested again and it is working as expected on my end. It\nshouldn't fail like that unless the 'ssl' option is set to 'openssl'.\nIs it possible that it has been set to 'openssl' without you noticing?\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Mon, 13 Mar 2023 21:57:22 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 09:57:22PM +0300, Nazir Bilal Yavuz wrote:\n> On Mon, 13 Mar 2023 at 21:04, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Shouldn't \"auto\" cause Postgres to be built without OpenSSL if the required\n>> headers are not present?\n> \n> Yes, I tested again and it is working as expected on my end. It\n> shouldn't fail like that unless the 'ssl' option is set to 'openssl'.\n> Is it possible that it has been set to 'openssl' without you noticing?\n\nI do not believe so. For the test in question, here are the build options\nreported in meson-log.txt:\n\n\tBuild Options: -Dlz4=enabled -Dplperl=enabled -Dplpython=enabled -Dpltcl=enabled -Dlibxml=enabled -Duuid=ossp -Dlibxslt=enabled -Ddebug=true -Dcassert=true -Dtap_tests=enabled -Dwerror=True\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 13 Mar 2023 12:04:05 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-13 11:04:32 -0700, Nathan Bossart wrote:\n> On Mon, Mar 13, 2023 at 07:27:18AM +0100, Peter Eisentraut wrote:\n> > I have committed it like this.\n> \n> I noticed that after 6a30027, if you don't have the OpenSSL headers\n> installed, 'meson setup' will fail:\n> \n> \tmeson.build:1195:4: ERROR: C header 'openssl/ssl.h' not found\n> \n> Shouldn't \"auto\" cause Postgres to be built without OpenSSL if the required\n> headers are not present?\n\nYea. I found another thing: When dependency() found something, but the headers\nweren't present, ssl_int wouldn't exist.\n\nMaybe something like the attached?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 13 Mar 2023 13:13:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-13 21:57:22 +0300, Nazir Bilal Yavuz wrote:\n> On Mon, 13 Mar 2023 at 21:04, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >\n> > On Mon, Mar 13, 2023 at 07:27:18AM +0100, Peter Eisentraut wrote:\n> > > I have committed it like this.\n> >\n> > I noticed that after 6a30027, if you don't have the OpenSSL headers\n> > installed, 'meson setup' will fail:\n> >\n> > meson.build:1195:4: ERROR: C header 'openssl/ssl.h' not found\n> >\n> > Shouldn't \"auto\" cause Postgres to be built without OpenSSL if the required\n> > headers are not present?\n> \n> Yes, I tested again and it is working as expected on my end. It\n> shouldn't fail like that unless the 'ssl' option is set to 'openssl'.\n> Is it possible that it has been set to 'openssl' without you noticing?\n\nIt worked for the dependency() path, but not the cc.find_library() path. See\nthe patch I just sent.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Mar 2023 13:14:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 01:13:31PM -0700, Andres Freund wrote:\n> On 2023-03-13 11:04:32 -0700, Nathan Bossart wrote:\n>> I noticed that after 6a30027, if you don't have the OpenSSL headers\n>> installed, 'meson setup' will fail:\n>> \n>> \tmeson.build:1195:4: ERROR: C header 'openssl/ssl.h' not found\n>> \n>> Shouldn't \"auto\" cause Postgres to be built without OpenSSL if the required\n>> headers are not present?\n> \n> Yea. I found another thing: When dependency() found something, but the headers\n> weren't present, ssl_int wouldn't exist.\n> \n> Maybe something like the attached?\n\n> ssl_lib = cc.find_library('ssl',\n> dirs: test_lib_d,\n> header_include_directories: postgres_inc,\n> - has_headers: ['openssl/ssl.h', 'openssl/err.h'])\n> + has_headers: ['openssl/ssl.h', 'openssl/err.h'],\n> + required: openssl_required)\n> crypto_lib = cc.find_library('crypto',\n> dirs: test_lib_d,\n> - header_include_directories: postgres_inc)\n> - ssl_int = [ssl_lib, crypto_lib]\n> -\n> - ssl = declare_dependency(dependencies: ssl_int,\n> - include_directories: postgres_inc)\n> + required: openssl_required)\n> + if ssl_lib.found() and crypto_lib.found()\n> + ssl_int = [ssl_lib, crypto_lib]\n> + ssl = declare_dependency(dependencies: ssl_int, include_directories: postgres_inc)\n> + endif\n\nI was just about to post a patch to set \"required\" like you have for\nssl_lib and crypto_lib. It seemed to work alright without the 'if\nssl_lib.found() and crypto_lib.found()' line, but I haven't worked with\nthese meson files very much, so what you have is probably better form.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 13 Mar 2023 13:25:44 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn Mon, 13 Mar 2023 at 23:14, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-03-13 21:57:22 +0300, Nazir Bilal Yavuz wrote:\n> > On Mon, 13 Mar 2023 at 21:04, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > >\n> > > On Mon, Mar 13, 2023 at 07:27:18AM +0100, Peter Eisentraut wrote:\n> > > > I have committed it like this.\n> > >\n> > > I noticed that after 6a30027, if you don't have the OpenSSL headers\n> > > installed, 'meson setup' will fail:\n> > >\n> > > meson.build:1195:4: ERROR: C header 'openssl/ssl.h' not found\n> > >\n> > > Shouldn't \"auto\" cause Postgres to be built without OpenSSL if the required\n> > > headers are not present?\n> >\n> > Yes, I tested again and it is working as expected on my end. It\n> > shouldn't fail like that unless the 'ssl' option is set to 'openssl'.\n> > Is it possible that it has been set to 'openssl' without you noticing?\n>\n> It worked for the dependency() path, but not the cc.find_library() path. See\n> the patch I just sent.\n\nThanks for the patch, I understand the problem now and your patch fixes this.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Mon, 13 Mar 2023 23:46:41 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-13 23:46:41 +0300, Nazir Bilal Yavuz wrote:\n> Thanks for the patch, I understand the problem now and your patch fixes this.\n\nPushed the patch.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Mar 2023 14:45:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 02:45:29PM -0700, Andres Freund wrote:\n> Pushed the patch.\n\nThanks for the prompt fix.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 13 Mar 2023 15:32:03 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn Wed, 22 Feb 2023 at 12:14, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 21.02.23 17:32, Nazir Bilal Yavuz wrote:\n> >>> I like the second approach, with a 'uuid' feature option. As you wrote\n> >>> earlier, adding an 'auto' choice to a combo option doesn't work fully like a\n> >>> real feature option.\n> >> But we can make it behave exactly like one, by checking the auto_features\n> >> option.\n> > Yes, we can set it like `uuidopt = get_option('auto_features')`.\n> > However, if someone wants to set 'auto_features' to 'disabled' but\n> > 'uuid' to 'enabled'(to find at least one working uuid library); this\n> > won't be possible. We can add 'enabled', 'disabled and 'auto' choices\n> > to 'uuid' combo option to make all behaviours possible but adding\n> > 'uuid' feature option and 'uuid_library' combo option seems better to\n> > me.\n>\n> I think the uuid side of this is making this way too complicated. I'm\n> content leaving this as a manual option for now.\n>\n> There is much more value in making the ssl option work automatically.\n> So I would welcome a patch that just makes -Dssl=auto work smoothly,\n> perhaps using the \"trick\" that Andres described.\n\nI tried to implement what we did for ssl to uuid as well, do you have\nany comments?\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Tue, 14 Mar 2023 17:07:15 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "On 14.03.23 15:07, Nazir Bilal Yavuz wrote:\n>> I think the uuid side of this is making this way too complicated. I'm\n>> content leaving this as a manual option for now.\n>>\n>> There is much more value in making the ssl option work automatically.\n>> So I would welcome a patch that just makes -Dssl=auto work smoothly,\n>> perhaps using the \"trick\" that Andres described.\n> I tried to implement what we did for ssl to uuid as well, do you have\n> any comments?\n\nFor the uuid option, we have three different choices. What should be \nthe search order and why?\n\n\n",
"msg_date": "Wed, 15 Mar 2023 09:12:27 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nOn Wed, 15 Mar 2023 at 11:12, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 14.03.23 15:07, Nazir Bilal Yavuz wrote:\n> >> I think the uuid side of this is making this way too complicated. I'm\n> >> content leaving this as a manual option for now.\n> >>\n> >> There is much more value in making the ssl option work automatically.\n> >> So I would welcome a patch that just makes -Dssl=auto work smoothly,\n> >> perhaps using the \"trick\" that Andres described.\n> > I tried to implement what we did for ssl to uuid as well, do you have\n> > any comments?\n>\n> For the uuid option, we have three different choices. What should be\n> the search order and why?\n\nDocs [1] say that: OSSP uuid library is not well maintained, and is\nbecoming increasingly difficult to port to newer platforms; so we can\nput 'uuid-ossp' to the end. Between 'uuid-e2fs' and 'uuid-bsd', I\nbelieve 'uuid-e2fs' is used more often than 'uuid-bsd'.\nHence, they can be ordered as 'uuid-e2fs', 'uuid-bsd', 'uuid-ossp'.\n\nDoes that make sense?\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n[1] https://www.postgresql.org/docs/current/uuid-ossp.html\n\n\n",
"msg_date": "Wed, 22 Mar 2023 13:16:54 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "On 22.03.23 11:16, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> On Wed, 15 Mar 2023 at 11:12, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 14.03.23 15:07, Nazir Bilal Yavuz wrote:\n>>>> I think the uuid side of this is making this way too complicated. I'm\n>>>> content leaving this as a manual option for now.\n>>>>\n>>>> There is much more value in making the ssl option work automatically.\n>>>> So I would welcome a patch that just makes -Dssl=auto work smoothly,\n>>>> perhaps using the \"trick\" that Andres described.\n>>> I tried to implement what we did for ssl to uuid as well, do you have\n>>> any comments?\n>>\n>> For the uuid option, we have three different choices. What should be\n>> the search order and why?\n> \n> Docs [1] say that: OSSP uuid library is not well maintained, and is\n> becoming increasingly difficult to port to newer platforms; so we can\n> put 'uuid-ossp' to the end. Between 'uuid-e2fs' and 'uuid-bsd', I\n> believe 'uuid-e2fs' is used more often than 'uuid-bsd'.\n> Hence, they can be ordered as 'uuid-e2fs', 'uuid-bsd', 'uuid-ossp'.\n> \n> Does that make sense?\n\nI think that's fair.\n\n\n\n",
"msg_date": "Wed, 22 Mar 2023 16:22:33 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: meson: Non-feature feature options"
},
{
"msg_contents": "Hi,\n\nI was looking at older threads and found that. There were failures\nwhen rebasing v1-uuid patch to master so I updated it and also added\ndocumentation. I attached the v2 of the patch. I am sending v2 patch\nto this thread but should I create a new thread for uuid patch?\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Thu, 17 Aug 2023 14:18:58 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson: Non-feature feature options"
}
]
[
{
"msg_contents": "Hi,\n\nMy colleague Adam realized that when transferring ownership, 'REASSIGN \nOWNED' command doesn't check 'CREATE privilege on the table's schema' on \nnew owner but 'ALTER TABLE OWNER TO' docs state that:\n\nTo alter the owner, you must also be a direct or indirect member of the \nnew owning role, and that role must have CREATE privilege on the table's \nschema. (These restrictions enforce that altering the owner doesn't do \nanything you couldn't do by dropping and recreating the table. However, \na superuser can alter ownership of any table anyway.)\n\nI tested that with:\n\n# Connect as a superuser\n$ psql test\ntest=# CREATE ROLE source_role WITH LOGIN;\nCREATE ROLE\ntest=# CREATE ROLE target_role WITH LOGIN;\nCREATE ROLE\ntest=# GRANT target_role to source_role;\nGRANT ROLE\ntest=# GRANT CREATE on schema public to source_role;\nGRANT\n\n# Connect as a source_role\n$ psql test -U source_role\ntest=> CREATE TABLE test_table();\nCREATE TABLE\n\ntest=> \\dt\n List of relations\n Schema | Name | Type | Owner\n--------+------------+-------+-------------\n public | test_table | table | source_role\n(1 row)\n\n# Alter owner with 'ALTER TABLE OWNER TO'\ntest=> ALTER TABLE test_table owner to target_role;\nERROR: permission denied for schema public\n\n# Alter owner with 'REASSIGN OWNED'\ntest=> REASSIGN OWNED BY source_role to target_role;\nREASSIGN OWNED\n\ntest=> \\dt\n List of relations\n Schema | Name | Type | Owner\n--------+------------+-------+-------------\n public | test_table | table | target_role\n(1 row)\n\nAs you can see, 'ALTER TABLE OWNER TO' checked 'CREATE privilege on the \ntable's schema' on target_role but 'REASSIGN OWNED' didn't check it and \ntransferred ownership of the table. Is this a potential security gap or \nintentional behaviour?\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n\n",
"msg_date": "Wed, 8 Feb 2023 13:49:37 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": true,
"msg_subject": "REASSIGN OWNED vs ALTER TABLE OWNER TO permission inconsistencies"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 5:49 AM Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:\n> My colleague Adam realized that when transferring ownership, 'REASSIGN\n> OWNED' command doesn't check 'CREATE privilege on the table's schema' on\n> new owner but 'ALTER TABLE OWNER TO' docs state that:\n\nWell, that sucks.\n\n> As you can see, 'ALTER TABLE OWNER TO' checked 'CREATE privilege on the\n> table's schema' on target_role but 'REASSIGN OWNED' didn't check it and\n> transferred ownership of the table. Is this a potential security gap or\n> intentional behaviour?\n\nI was looking at this recently and I noticed that for some object\ntypes, ALTER WHATEVER ... OWNER TO requires that the user transferring\nownership possess CREATE on the containing object, which might be a\nschema or database depending on the object type. For other object\ntypes, ALTER WHATEVER ... OWNER TO requires that the user *receiving*\nownership possess CREATE on the containing object, either schema or\ndatabase. That's not very consistent, and I couldn't find anything to\nexplain why it's like that. Now you've discovered that REASSIGN OWNED\nignores this issue altogether. Ugh.\n\nWe probably ought to make this consistent. Either the donor needs\nCREATE permission on the parent object, or the recipient does, or\nboth, or neither, and whatever the rule may be, it should be\nconsistent across all types of objects (except for shared objects\nwhich have no parent object) and across all commands.\n\nI think that requiring the recipient to have CREATE permission on the\nparent object doesn't really make sense. It could make sense if we did\nit consistently, so that there was a hard-and-fast rule that the\ncurrent owner always has CREATE on the parent object, but I think that\nwill never happen. You can be a superuser and thus create objects with\nno explicit privileges on the containing object at all, and if your\nrole is later made NOSUPERUSER, you'll still own those objects. You\ncould have the privilege initially and then later it could be revoked,\nand we would not demand those objects to be dropped or given to a\ndifferent owner or whatever. Changing those behaviors doesn't seem\ndesirable. It would lead to lots of pedantic failures trying to\nexecute REASSIGN OWNED or REVOKE or ALTER USER ... NOSUPERUSER and I\ncan't see what we'd really be gaining.\n\nI think that requiring the donor to have CREATE permission on the\nparent object makes a little bit more sense. I wouldn't mind if we\ntried to standardize on that rule. It would be unlikely to\ninconvenience users trying to execute REASSIGN OWNED because most\nusers running REASSIGNED OWNED are going to be superusers already, or\nat the very least highly privileged. However, I'm not sure how much\nlogical sense it makes. Changing the owner of an object is pretty\ndifferent from creating it. It makes sense to require CREATE\npermission on the parent object if an object is being *renamed*,\nbecause that's a lot like creating a new object: there's now something\nin this schema or database under a name that previously wasn't in use.\nBut ALTER .. OWNER TO does not have that effect, so I think it's kind\nof unclear why we even care about CREATE on the parent object.\n\nI think the important permission checks around ALTER ... OWNER TO are\non the roles involved and their relationship to the object itself. You\nneed to own the object (or inherit those privileges) and, in master,\nyou need to be able to SET ROLE to the new owner. If you have those\npermissions, is that, perhaps, good enough? Maybe checking CREATE on\nthe parent object just isn't really needed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 13 Feb 2023 16:24:09 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: REASSIGN OWNED vs ALTER TABLE OWNER TO permission inconsistencies"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Feb 8, 2023 at 5:49 AM Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:\n> > My colleague Adam realized that when transferring ownership, 'REASSIGN\n> > OWNED' command doesn't check 'CREATE privilege on the table's schema' on\n> > new owner but 'ALTER TABLE OWNER TO' docs state that:\n> \n> Well, that sucks.\n\nYeah, that's not great.\n\n> > As you can see, 'ALTER TABLE OWNER TO' checked 'CREATE privilege on the\n> > table's schema' on target_role but 'REASSIGN OWNED' didn't check it and\n> > transferred ownership of the table. Is this a potential security gap or\n> > intentional behaviour?\n> \n> I was looking at this recently and I noticed that for some object\n> types, ALTER WHATEVER ... OWNER TO requires that the user transferring\n> ownership possess CREATE on the containing object, which might be a\n> schema or database depending on the object type. For other object\n> types, ALTER WHATEVER ... OWNER TO requires that the user *receiving*\n> ownership possess CREATE on the containing object, either schema or\n> database. That's not very consistent, and I couldn't find anything to\n> explain why it's like that. Now you've discovered that REASSIGN OWNED\n> ignores this issue altogether. Ugh.\n\nWhen this was originally done, at least if my memory serves me\ncorrectly, the idea was that it needed to be the receiver who needed\nCREATE rights because, in that case, they could have just created it\nthemselves and there isn't some risk of objects being \"given away\" to\nanother user in a manner that they wouldn't have been able to create\nthose objects in the first place.\n\n> We probably ought to make this consistent. Either the donor needs\n> CREATE permission on the parent object, or the recipient does, or\n> both, or neither, and whatever the rule may be, it should be\n> consistent across all types of objects (except for shared objects\n> which have no parent object) and across all commands.\n\nI agree that being consistent makes sense.\n\n> I think that requiring the recipient to have CREATE permission on the\n> parent object doesn't really make sense. It could make sense if we did\n> it consistently, so that there was a hard-and-fast rule that the\n> current owner always has CREATE on the parent object, but I think that\n> will never happen. You can be a superuser and thus create objects with\n> no explicit privileges on the containing object at all, and if your\n> role is later made NOSUPERUSER, you'll still own those objects. You\n> could have the privilege initially and then later it could be revoked,\n> and we would not demand those objects to be dropped or given to a\n> different owner or whatever. Changing those behaviors doesn't seem\n> desirable. It would lead to lots of pedantic failures trying to\n> execute REASSIGN OWNED or REVOKE or ALTER USER ... NOSUPERUSER and I\n> can't see what we'd really be gaining.\n\nI don't think I really agree that \"because a superuser can arrange for\nit to not be valid\" that it follows that requiring the recipient to have\nCREATE permission on the parent object doesn't make sense. Surely for\nany of these scenarios, whatever rule we come up with (assuming we have\nany rule at all...) a superuser could arrange to make that rule no\nlonger consistent. 
I agree that we probably don't want to go through to\nthe point of what SQL requires which is actually that issuing a REVOKE\nwill end up DROP'ing things simply because that's just a recipe for\npeople ending up mistakenly having tables be DROP'd, but having a rule\nthat prevents users from just giving away their objects to other users,\neven when the recipient couldn't have created that object, is good.\n\n> I think that requiring the donor to have CREATE permission on the\n> parent object makes a little bit more sense. I wouldn't mind if we\n> tried to standardize on that rule. It would be unlikely to\n> inconvenience users trying to execute REASSIGN OWNED because most\n> users running REASSIGNED OWNED are going to be superusers already, or\n> at the very least highly privileged. However, I'm not sure how much\n> logical sense it makes. Changing the owner of an object is pretty\n> different from creating it. It makes sense to require CREATE\n> permission on the parent object if an object is being *renamed*,\n> because that's a lot like creating a new object: there's now something\n> in this schema or database under a name that previously wasn't in use.\n> But ALTER .. OWNER TO does not have that effect, so I think it's kind\n> of unclear why we even care about CREATE on the parent object.\n\nMaybe I'm not remembering it entirely, but don't we also require that\nthe user performing the ownership change have the ability to SET ROLE to\nthe destination role? So if we're checking that the destination role\nhas CREATE rights on the parent object then necessarily the donor also\nhas that right.\n\n> I think the important permission checks around ALTER ... OWNER TO are\n> on the roles involved and their relationship to the object itself. You\n> need to own the object (or inherit those privileges) and, in master,\n> you need to be able to SET ROLE to the new owner. If you have those\n> permissions, is that, perhaps, good enough? 
Maybe checking CREATE on\n> the parent object just isn't really needed.\n\nHrm, didn't we have the requirement for SET ROLE previously? Or maybe\nonly in some of the code paths, but I have a pretty good recollection of\nthat existing before.\n\nI'm not really a fan of just dropping the CREATE check. If we go with\n\"recipient needs CREATE rights\" then at least without superuser\nintervention and excluding cases where REVOKE's or such are happening,\nwe should be able to see that only objects where the owners of those\nobjects have CREATE rights exist in the system. If we drop the CREATE\ncheck entirely then clearly any user who happens to have access to\nmultiple roles can arrange to have objects owned by any of their roles\nin any schema or database they please without any consideration for what\nthe owner of the parent object's wishes are.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 14 Feb 2023 22:31:34 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: REASSIGN OWNED vs ALTER TABLE OWNER TO permission inconsistencies"
},
{
"msg_contents": "On Wed, Feb 15, 2023 at 9:01 AM Stephen Frost <sfrost@snowman.net> wrote:\n> I don't think I really agree that \"because a superuser can arrange for\n> it to not be valid\" that it follows that requiring the recipient to have\n> CREATE permission on the parent object doesn't make sense. Surely for\n> any of these scenarios, whatever rule we come up with (assuming we have\n> any rule at all...) a superuser could arrange to make that rule no\n> longer consistent.\n\nWell .... yes and no.\n\nThe superuser can always hack things by modifying the system catalogs,\nbut we have plenty of integrity constraints that a superuser can't\njust casually violate because they feel like it. For example, a\nsuperuser is no more able to revoke privileges without revoking the\nprivileges that depend upon them than anyone else.\n\n> I'm not really a fan of just dropping the CREATE check. If we go with\n> \"recipient needs CREATE rights\" then at least without superuser\n> intervention and excluding cases where REVOKE's or such are happening,\n> we should be able to see that only objects where the owners of those\n> objects have CREATE rights exist in the system. If we drop the CREATE\n> check entirely then clearly any user who happens to have access to\n> multiple roles can arrange to have objects owned by any of their roles\n> in any schema or database they please without any consideration for what\n> the owner of the parent object's wishes are.\n\nThat's true, and it is a downside of dropping the CREATE check, but\nit's also a bit hard to believe that anyone's really getting a lot of\nvalue out of the current inconsistent checks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Feb 2023 09:59:23 +0530",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: REASSIGN OWNED vs ALTER TABLE OWNER TO permission inconsistencies"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Feb 15, 2023 at 9:01 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > I'm not really a fan of just dropping the CREATE check. If we go with\n> > \"recipient needs CREATE rights\" then at least without superuser\n> > intervention and excluding cases where REVOKE's or such are happening,\n> > we should be able to see that only objects where the owners of those\n> > objects have CREATE rights exist in the system. If we drop the CREATE\n> > check entirely then clearly any user who happens to have access to\n> > multiple roles can arrange to have objects owned by any of their roles\n> > in any schema or database they please without any consideration for what\n> > the owner of the parent object's wishes are.\n> \n> That's true, and it is a downside of dropping to CREATE check, but\n> it's also a bit hard to believe that anyone's really getting a lot of\n> value out of the current inconsistent checks.\n\nI agree that we should be consistent about these checks. I'm just more\ninclined to have that consistent result include the CREATE check than\nhave it be dropped. Not sure that it's a huge thing but being able to\ncontrol what set of owner roles are allowed to have objects in a given\nschema seems useful and was certainly the intent, as I recall anyhow.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 16 Feb 2023 21:24:45 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: REASSIGN OWNED vs ALTER TABLE OWNER TO permission inconsistencies"
}
] |
[
{
"msg_contents": "During data refactoring of our Application I encountered $subject when joining 4 CTEs with left join or inner join.\n\n\n1. Background\n\nPG 15.1 on Windows x64 (the OS seems to have no meaning here)\n\n\nI try to collect data from 4 (analyzed) tables (up,li,in,ou) by grouping certain data (4 CTEs qup,qli,qin,qou).\n\nThe grouping of the data in the CTEs gives estimated row counts of about 1000 (1 tenth of the real value). This is OK for an estimation.\n\n\nThese 4 CTEs are then used to combine the data by joining them.\n\n\n2. Problem\n\nThe 4 CTEs are joined by left joins as shown below:\n\n\nfrom qup\nleft join qli on (qli.curr_season=qup.curr_season and qli.curr_code=qup.curr_code and qli.ibitmask>0 and cardinality(qli.mat_arr) <=8)\nleft join qin on (qin.curr_season=qup.curr_season and qin.curr_code=qup.curr_code and qin.ibitmask>0 and cardinality(qin.mat_arr) <=8)\nleft join qou on (qou.curr_season=qup.curr_season and qou.curr_code=qup.curr_code and qou.ibitmask>0 and cardinality(qou.mat_arr) <=11)\nwhere qup.ibitmask>0 and cardinality(qup.mat_arr) <=21\n\nThe plan first retrieves qup and qli, taking the estimated row counts of 1163 and 1147 respectively.\n\n\nBUT the result is then hashed and the row count is estimated as 33!\n\n\nIn a left join the row count always stays the same as that of the left table (here qup with 1163 rows).\n\n\nThe same algorithm which reduces the row estimation from 1163 to 33 is used in the next step to give an estimation of 1 row.\n\nThis is totally wrong.\n\n\nHere is the execution plan of the query:\n\n(search the plan for rows=33)\n\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Append (cost=13673.81..17463.30 rows=5734 width=104) (actual time=168.307..222.670 rows=9963 loops=1)\n CTE qup\n -> GroupAggregate (cost=5231.22..6303.78 rows=10320 width=80) (actual time=35.466..68.131 rows=10735 loops=1)\n 
Group Key: sa_upper.sup_season, sa_upper.sup_sa_code\n -> Sort (cost=5231.22..5358.64 rows=50969 width=18) (actual time=35.454..36.819 rows=50969 loops=1)\n Sort Key: sa_upper.sup_season, sa_upper.sup_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 4722kB\n -> Hash Left Join (cost=41.71..1246.13 rows=50969 width=18) (actual time=0.148..10.687 rows=50969 loops=1)\n Hash Cond: ((sa_upper.sup_mat_code)::text = upper_target.up_mat_code)\n -> Seq Scan on sa_upper (cost=0.00..884.69 rows=50969 width=16) (actual time=0.005..1.972 rows=50969 loops=1)\n -> Hash (cost=35.53..35.53 rows=495 width=6) (actual time=0.140..0.140 rows=495 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 27kB\n -> Seq Scan on upper_target (cost=0.00..35.53 rows=495 width=6) (actual time=0.007..0.103 rows=495 loops=1)\n Filter: (id_up <= 495)\n Rows Removed by Filter: 1467\n CTE qli\n -> GroupAggregate (cost=1097.31..1486.56 rows=10469 width=80) (actual time=9.446..27.388 rows=10469 loops=1)\n Group Key: sa_lining.sli_season, sa_lining.sli_sa_code\n -> Sort (cost=1097.31..1126.74 rows=11774 width=18) (actual time=9.440..9.811 rows=11774 loops=1)\n Sort Key: sa_lining.sli_season, sa_lining.sli_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1120kB\n -> Hash Left Join (cost=7.34..301.19 rows=11774 width=18) (actual time=0.045..2.438 rows=11774 loops=1)\n Hash Cond: ((sa_lining.sli_mat_code)::text = lining_target.li_mat_code)\n -> Seq Scan on sa_lining (cost=0.00..204.74 rows=11774 width=16) (actual time=0.008..0.470 rows=11774 loops=1)\n -> Hash (cost=5.86..5.86 rows=118 width=6) (actual time=0.034..0.034 rows=119 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Seq Scan on lining_target (cost=0.00..5.86 rows=118 width=6) (actual time=0.008..0.024 rows=119 loops=1)\n Filter: (id_li <= 119)\n Rows Removed by Filter: 190\n CTE qin\n -> GroupAggregate (cost=1427.34..1880.73 rows=10678 width=80) (actual time=11.424..31.508 rows=10678 loops=1)\n Group Key: sa_insole.sin_season, 
sa_insole.sin_sa_code\n -> Sort (cost=1427.34..1465.41 rows=15230 width=18) (actual time=11.416..11.908 rows=15230 loops=1)\n Sort Key: sa_insole.sin_season, sa_insole.sin_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1336kB\n -> Hash Left Join (cost=10.49..369.26 rows=15230 width=18) (actual time=0.051..3.108 rows=15230 loops=1)\n Hash Cond: ((sa_insole.sin_mat_code)::text = insole_target.in_mat_code)\n -> Seq Scan on sa_insole (cost=0.00..264.30 rows=15230 width=16) (actual time=0.006..0.606 rows=15230 loops=1)\n -> Hash (cost=9.01..9.01 rows=118 width=6) (actual time=0.042..0.043 rows=119 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Seq Scan on insole_target (cost=0.00..9.01 rows=118 width=6) (actual time=0.008..0.032 rows=119 loops=1)\n Filter: (id_in <= 119)\n Rows Removed by Filter: 362\n CTE qou\n -> GroupAggregate (cost=2366.22..2986.89 rows=10699 width=80) (actual time=18.198..41.812 rows=10699 loops=1)\n Group Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n -> Sort (cost=2366.22..2428.14 rows=24768 width=18) (actual time=18.187..18.967 rows=24768 loops=1)\n Sort Key: sa_outsole.sou_season, sa_outsole.sou_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 2317kB\n -> Hash Left Join (cost=5.39..558.63 rows=24768 width=18) (actual time=0.046..5.132 rows=24768 loops=1)\n Hash Cond: ((sa_outsole.sou_mat_code)::text = outsole_target.ou_mat_code)\n -> Seq Scan on sa_outsole (cost=0.00..430.68 rows=24768 width=16) (actual time=0.010..1.015 rows=24768 loops=1)\n -> Hash (cost=5.03..5.03 rows=29 width=6) (actual time=0.032..0.032 rows=29 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 10kB\n -> Seq Scan on outsole_target (cost=0.00..5.03 rows=29 width=6) (actual time=0.010..0.025 rows=29 loops=1)\n Filter: (id_ou <= 29)\n Rows Removed by Filter: 213\n -> Hash Join (cost=1015.85..1319.50 rows=1 width=104) (actual time=168.307..215.513 rows=8548 loops=1)\n Hash Cond: ((qou.curr_season = qli.curr_season) AND ((qou.curr_code)::text = 
(qli.curr_code)::text))\n Join Filter: ((((qup.ibitmask | qin.ibitmask) | qli.ibitmask) | qou.ibitmask) IS NOT NULL)\n -> CTE Scan on qou (cost=0.00..294.22 rows=1189 width=76) (actual time=18.200..45.188 rows=10275 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 11))\n Rows Removed by Filter: 424\n -> Hash (cost=1015.83..1015.83 rows=1 width=228) (actual time=150.094..150.095 rows=8845 loops=1)\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1899kB\n -> Hash Join (cost=707.35..1015.83 rows=1 width=228) (actual time=121.898..147.726 rows=8845 loops=1)\n Hash Cond: ((qin.curr_season = qli.curr_season) AND ((qin.curr_code)::text = (qli.curr_code)::text))\n -> CTE Scan on qin (cost=0.00..293.65 rows=1186 width=76) (actual time=11.425..34.674 rows=10197 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 481\n -> Hash (cost=706.86..706.86 rows=33 width=152) (actual time=110.470..110.470 rows=9007 loops=1)\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1473kB\n -> Merge Join (cost=689.20..706.86 rows=33 width=152) (actual time=105.862..108.925 rows=9007 loops=1)\n Merge Cond: ((qup.curr_season = qli.curr_season) AND ((qup.curr_code)::text = (qli.curr_code)::text))\n -> Sort (cost=342.09..344.96 rows=1147 width=76) (actual time=73.419..73.653 rows=9320 loops=1)\n Sort Key: qup.curr_season, qup.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1391kB\n -> CTE Scan on qup (cost=0.00..283.80 rows=1147 width=76) (actual time=35.467..71.904 rows=9320 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 21))\n Rows Removed by Filter: 1415\n -> Sort (cost=347.12..350.02 rows=1163 width=76) (actual time=32.440..32.697 rows=10289 loops=1)\n Sort Key: qli.curr_season, qli.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1349kB\n -> CTE Scan on qli (cost=0.00..287.90 rows=1163 width=76) (actual time=9.447..30.666 rows=10289 loops=1)\n Filter: 
((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 180\n -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104) (actual time=4.597..6.700 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\n -> Merge Left Join (cost=1958.66..2135.28 rows=5733 width=136) (actual time=3.427..3.863 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\n -> Merge Left Join (cost=1293.25..1388.21 rows=5733 width=104) (actual time=2.321..2.556 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\n -> Sort (cost=641.68..656.02 rows=5733 width=72) (actual time=1.286..1.324 rows=1415 loops=1)\n Sort Key: qup_1.curr_season, qup_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 204kB\n -> CTE Scan on qup qup_1 (cost=0.00..283.80 rows=5733 width=72) (actual time=0.009..1.093 rows=1415 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 21))\n Rows Removed by Filter: 9320\n -> Sort (cost=651.57..666.11 rows=5816 width=72) (actual time=1.033..1.038 rows=180 loops=1)\n Sort Key: qli_1.curr_season, qli_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 41kB\n -> CTE Scan on qli qli_1 (cost=0.00..287.90 rows=5816 width=72) (actual time=0.055..1.007 rows=180 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10289\n -> Sort (cost=665.41..680.24 rows=5932 width=72) (actual time=1.104..1.117 rows=481 loops=1)\n Sort Key: qin_1.curr_season, qin_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qin qin_1 (cost=0.00..293.65 rows=5932 width=72) (actual time=0.016..1.038 rows=481 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10197\n -> Sort (cost=666.83..681.69 rows=5944 width=72) (actual 
time=1.163..1.174 rows=417 loops=1)\n Sort Key: qou_1.curr_season, qou_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944 width=72) (actual time=0.029..1.068 rows=424 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 11))\n Rows Removed by Filter: 10275\n Planning Time: 2.297 ms\n Execution Time: 224.759 ms\n(118 Zeilen)\n\n3. Slow query from wrong plan in a similar case with inner joins\n\nWhen the 3 left joins above are changed to inner joins like:\n\nfrom qup\njoin qli on (qli.curr_season=qup.curr_season and qli.curr_code=qup.curr_code and qli.ibitmask>0 and cardinality(qli.mat_arr) <=8)\njoin qin on (qin.curr_season=qup.curr_season and qin.curr_code=qup.curr_code and qin.ibitmask>0 and cardinality(qin.mat_arr) <=8)\njoin qou on (qou.curr_season=qup.curr_season and qou.curr_code=qup.curr_code and qou.ibitmask>0 and cardinality(qou.mat_arr) <=11)\nwhere qup.ibitmask>0 and cardinality(qup.mat_arr) <=21\n\nThe same row estimation takes place as with the left joins, but the planner now decides to use a nested loop for the last join, which results in a 500-fold execution time:\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------\n Append (cost=13365.31..17472.18 rows=5734 width=104) (actual time=139.037..13403.310 rows=9963 loops=1)\n CTE qup\n -> GroupAggregate (cost=5231.22..6303.78 rows=10320 width=80) (actual time=35.399..67.102 rows=10735 loops=1)\n Group Key: sa_upper.sup_season, sa_upper.sup_sa_code\n -> Sort (cost=5231.22..5358.64 rows=50969 width=18) (actual time=35.382..36.743 rows=50969 loops=1)\n Sort Key: sa_upper.sup_season, sa_upper.sup_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 4722kB\n -> Hash Left Join (cost=41.71..1246.13 rows=50969 width=18) (actual time=0.157..10.715 rows=50969 loops=1)\n Hash Cond: ((sa_upper.sup_mat_code)::text = 
upper_target.up_mat_code)\n -> Seq Scan on sa_upper (cost=0.00..884.69 rows=50969 width=16) (actual time=0.008..2.001 rows=50969 loops=1)\n -> Hash (cost=35.53..35.53 rows=495 width=6) (actual time=0.146..0.146 rows=495 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 27kB\n -> Seq Scan on upper_target (cost=0.00..35.53 rows=495 width=6) (actual time=0.006..0.105 rows=495 loops=1)\n Filter: (id_up <= 495)\n Rows Removed by Filter: 1467\n CTE qli\n -> GroupAggregate (cost=1097.31..1486.56 rows=10469 width=80) (actual time=9.541..27.419 rows=10469 loops=1)\n Group Key: sa_lining.sli_season, sa_lining.sli_sa_code\n -> Sort (cost=1097.31..1126.74 rows=11774 width=18) (actual time=9.534..9.908 rows=11774 loops=1)\n Sort Key: sa_lining.sli_season, sa_lining.sli_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1120kB\n -> Hash Left Join (cost=7.34..301.19 rows=11774 width=18) (actual time=0.049..2.451 rows=11774 loops=1)\n Hash Cond: ((sa_lining.sli_mat_code)::text = lining_target.li_mat_code)\n -> Seq Scan on sa_lining (cost=0.00..204.74 rows=11774 width=16) (actual time=0.010..0.462 rows=11774 loops=1)\n -> Hash (cost=5.86..5.86 rows=118 width=6) (actual time=0.035..0.035 rows=119 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Seq Scan on lining_target (cost=0.00..5.86 rows=118 width=6) (actual time=0.008..0.025 rows=119 loops=1)\n Filter: (id_li <= 119)\n Rows Removed by Filter: 190\n CTE qin\n -> GroupAggregate (cost=1427.34..1880.73 rows=10678 width=80) (actual time=11.649..30.910 rows=10678 loops=1)\n Group Key: sa_insole.sin_season, sa_insole.sin_sa_code\n -> Sort (cost=1427.34..1465.41 rows=15230 width=18) (actual time=11.642..12.115 rows=15230 loops=1)\n Sort Key: sa_insole.sin_season, sa_insole.sin_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1336kB\n -> Hash Left Join (cost=10.49..369.26 rows=15230 width=18) (actual time=0.056..3.144 rows=15230 loops=1)\n Hash Cond: ((sa_insole.sin_mat_code)::text = insole_target.in_mat_code)\n -> 
Seq Scan on sa_insole (cost=0.00..264.30 rows=15230 width=16) (actual time=0.008..0.594 rows=15230 loops=1)\n -> Hash (cost=9.01..9.01 rows=118 width=6) (actual time=0.045..0.046 rows=119 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Seq Scan on insole_target (cost=0.00..9.01 rows=118 width=6) (actual time=0.008..0.034 rows=119 loops=1)\n Filter: (id_in <= 119)\n Rows Removed by Filter: 362\n CTE qou\n -> GroupAggregate (cost=2366.22..2986.89 rows=10699 width=80) (actual time=18.163..51.151 rows=10699 loops=1)\n Group Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n -> Sort (cost=2366.22..2428.14 rows=24768 width=18) (actual time=18.150..20.000 rows=24768 loops=1)\n Sort Key: sa_outsole.sou_season, sa_outsole.sou_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 2317kB\n -> Hash Left Join (cost=5.39..558.63 rows=24768 width=18) (actual time=0.036..5.106 rows=24768 loops=1)\n Hash Cond: ((sa_outsole.sou_mat_code)::text = outsole_target.ou_mat_code)\n -> Seq Scan on sa_outsole (cost=0.00..430.68 rows=24768 width=16) (actual time=0.008..1.005 rows=24768 loops=1)\n -> Hash (cost=5.03..5.03 rows=29 width=6) (actual time=0.024..0.024 rows=29 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 10kB\n -> Seq Scan on outsole_target (cost=0.00..5.03 rows=29 width=6) (actual time=0.007..0.018 rows=29 loops=1)\n Filter: (id_ou <= 29)\n Rows Removed by Filter: 213\n -> Nested Loop (cost=707.35..1328.37 rows=1 width=104) (actual time=139.036..13395.820 rows=8548 loops=1)\n Join Filter: ((qli.curr_season = qin.curr_season) AND ((qli.curr_code)::text = (qin.curr_code)::text))\n Rows Removed by Join Filter: 88552397\n -> Hash Join (cost=707.35..1016.45 rows=1 width=216) (actual time=127.374..168.249 rows=8685 loops=1)\n Hash Cond: ((qou.curr_season = qli.curr_season) AND ((qou.curr_code)::text = (qli.curr_code)::text))\n -> CTE Scan on qou (cost=0.00..294.22 rows=1189 width=72) (actual time=18.165..54.968 rows=10275 loops=1)\n Filter: ((ibitmask > 0) AND 
(cardinality(mat_arr) <= 11))\n Rows Removed by Filter: 424\n -> Hash (cost=706.86..706.86 rows=33 width=144) (actual time=109.205..109.207 rows=9007 loops=1)\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1369kB\n -> Merge Join (cost=689.20..706.86 rows=33 width=144) (actual time=104.785..107.748 rows=9007 loops=1)\n Merge Cond: ((qup.curr_season = qli.curr_season) AND ((qup.curr_code)::text = (qli.curr_code)::text))\n -> Sort (cost=342.09..344.96 rows=1147 width=72) (actual time=72.320..72.559 rows=9320 loops=1)\n Sort Key: qup.curr_season, qup.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1357kB\n -> CTE Scan on qup (cost=0.00..283.80 rows=1147 width=72) (actual time=35.401..70.834 rows=9320 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 21))\n Rows Removed by Filter: 1415\n -> Sort (cost=347.12..350.02 rows=1163 width=72) (actual time=32.461..32.719 rows=10289 loops=1)\n Sort Key: qli.curr_season, qli.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1269kB\n -> CTE Scan on qli (cost=0.00..287.90 rows=1163 width=72) (actual time=9.543..30.696 rows=10289 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 180\n -> CTE Scan on qin (cost=0.00..293.65 rows=1186 width=72) (actual time=0.001..1.159 rows=10197 loops=8685)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 481\n -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104) (actual time=4.606..6.733 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\n -> Merge Left Join (cost=1958.66..2135.28 rows=5733 width=136) (actual time=3.479..3.930 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\n -> Merge Left Join (cost=1293.25..1388.21 rows=5733 width=104) (actual time=2.368..2.610 rows=1415 loops=1)\n Merge 
Cond: ((qup_1.curr_season = qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\n -> Sort (cost=641.68..656.02 rows=5733 width=72) (actual time=1.296..1.335 rows=1415 loops=1)\n Sort Key: qup_1.curr_season, qup_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 204kB\n -> CTE Scan on qup qup_1 (cost=0.00..283.80 rows=5733 width=72) (actual time=0.010..1.119 rows=1415 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 21))\n Rows Removed by Filter: 9320\n -> Sort (cost=651.57..666.11 rows=5816 width=72) (actual time=1.069..1.075 rows=180 loops=1)\n Sort Key: qli_1.curr_season, qli_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 41kB\n -> CTE Scan on qli qli_1 (cost=0.00..287.90 rows=5816 width=72) (actual time=0.057..1.026 rows=180 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10289\n -> Sort (cost=665.41..680.24 rows=5932 width=72) (actual time=1.110..1.124 rows=481 loops=1)\n Sort Key: qin_1.curr_season, qin_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qin qin_1 (cost=0.00..293.65 rows=5932 width=72) (actual time=0.016..1.046 rows=481 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10197\n -> Sort (cost=666.83..681.69 rows=5944 width=72) (actual time=1.119..1.128 rows=417 loops=1)\n Sort Key: qou_1.curr_season, qou_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944 width=72) (actual time=0.029..1.056 rows=424 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 11))\n Rows Removed by Filter: 10275\n Planning Time: 1.746 ms\n Execution Time: 13405.503 ms\n(116 Zeilen)\n\nThis case really brought me to detect the problem!\n\nThe original query and data are not shown here, but the principle should be clear from the execution plans.\n\nI think the planner shouldn't change the row estimations on further steps after 
left joins at all, and be a bit more conservative on inner joins.\nThis may be related to the fact that this case has 2 join-conditions (xx_season and xx_code).\n\nThanks for looking\n\nHans Buschmann
<= 29)\n Rows Removed by Filter: 213\n -> Hash Join (cost=1015.85..1319.50 rows=1 width=104) (actual time=168.307..215.513 rows=8548 loops=1)\n Hash Cond: ((qou.curr_season = qli.curr_season) AND ((qou.curr_code)::text = (qli.curr_code)::text))\n Join Filter: ((((qup.ibitmask | qin.ibitmask) | qli.ibitmask) | qou.ibitmask) IS NOT NULL)\n -> CTE Scan on qou (cost=0.00..294.22 rows=1189 width=76) (actual time=18.200..45.188 rows=10275 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 11))\n Rows Removed by Filter: 424\n -> Hash (cost=1015.83..1015.83 rows=1 width=228) (actual time=150.094..150.095 rows=8845 loops=1)\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1899kB\n -> Hash Join (cost=707.35..1015.83 rows=1 width=228) (actual time=121.898..147.726 rows=8845 loops=1)\n Hash Cond: ((qin.curr_season = qli.curr_season) AND ((qin.curr_code)::text = (qli.curr_code)::text))\n -> CTE Scan on qin (cost=0.00..293.65 rows=1186 width=76) (actual time=11.425..34.674 rows=10197 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 481\n -> Hash (cost=706.86..706.86 rows=33 width=152) (actual time=110.470..110.470 rows=9007 loops=1)\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1473kB\n -> Merge Join (cost=689.20..706.86 rows=33 width=152) (actual time=105.862..108.925 rows=9007 loops=1)\n Merge Cond: ((qup.curr_season = qli.curr_season) AND ((qup.curr_code)::text = (qli.curr_code)::text))\n -> Sort (cost=342.09..344.96 rows=1147 width=76) (actual time=73.419..73.653 rows=9320 loops=1)\n Sort Key: qup.curr_season, qup.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1391kB\n -> CTE Scan on qup (cost=0.00..283.80 rows=1147 width=76) (actual time=35.467..71.904 rows=9320 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 21))\n Rows Removed by Filter: 1415\n -> Sort (cost=347.12..350.02 rows=1163 width=76) (actual time=32.440..32.697 rows=10289 
loops=1)\n Sort Key: qli.curr_season, qli.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1349kB\n -> CTE Scan on qli (cost=0.00..287.90 rows=1163 width=76) (actual time=9.447..30.666 rows=10289 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 180\n -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104) (actual time=4.597..6.700 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\n -> Merge Left Join (cost=1958.66..2135.28 rows=5733 width=136) (actual time=3.427..3.863 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\n -> Merge Left Join (cost=1293.25..1388.21 rows=5733 width=104) (actual time=2.321..2.556 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\n -> Sort (cost=641.68..656.02 rows=5733 width=72) (actual time=1.286..1.324 rows=1415 loops=1)\n Sort Key: qup_1.curr_season, qup_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 204kB\n -> CTE Scan on qup qup_1 (cost=0.00..283.80 rows=5733 width=72) (actual time=0.009..1.093 rows=1415 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 21))\n Rows Removed by Filter: 9320\n -> Sort (cost=651.57..666.11 rows=5816 width=72) (actual time=1.033..1.038 rows=180 loops=1)\n Sort Key: qli_1.curr_season, qli_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 41kB\n -> CTE Scan on qli qli_1 (cost=0.00..287.90 rows=5816 width=72) (actual time=0.055..1.007 rows=180 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10289\n -> Sort (cost=665.41..680.24 rows=5932 width=72) (actual time=1.104..1.117 rows=481 loops=1)\n Sort Key: qin_1.curr_season, qin_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qin qin_1 (cost=0.00..293.65 
rows=5932 width=72) (actual time=0.016..1.038 rows=481 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10197\n -> Sort (cost=666.83..681.69 rows=5944 width=72) (actual time=1.163..1.174 rows=417 loops=1)\n Sort Key: qou_1.curr_season, qou_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944 width=72) (actual time=0.029..1.068 rows=424 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 11))\n Rows Removed by Filter: 10275\n Planning Time: 2.297 ms\n Execution Time: 224.759 ms\n(118 Zeilen)\n\n\n3. Slow query from wrong plan as result on similar case with inner join\n\n\nWhen the 3 left joins above are changed to inner joins like:\n\n\n\nfrom qup\njoin qli on (qli.curr_season=qup.curr_season and qli.curr_code=qup.curr_code and qli.ibitmask>0 and cardinality(qli.mat_arr) <=8)\njoin qin on (qin.curr_season=qup.curr_season and qin.curr_code=qup.curr_code and qin.ibitmask>0 and cardinality(qin.mat_arr) <=8)\njoin qou on (qou.curr_season=qup.curr_season and qou.curr_code=qup.curr_code and qou.ibitmask>0 and cardinality(qou.mat_arr) <=11)\nwhere qup.ibitmask>0 and cardinality(qup.mat_arr) <=21\n\n\nThe same rows estimation takes place as with the left joins, but the planner now decides to use a nested loop for the last join, which results in a 500fold execution time:\n\n\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Append (cost=13365.31..17472.18 rows=5734 width=104) (actual time=139.037..13403.310 rows=9963 loops=1)\n CTE qup\n -> GroupAggregate (cost=5231.22..6303.78 rows=10320 width=80) (actual time=35.399..67.102 rows=10735 loops=1)\n Group Key: sa_upper.sup_season, sa_upper.sup_sa_code\n -> Sort (cost=5231.22..5358.64 rows=50969 width=18) (actual time=35.382..36.743 rows=50969 loops=1)\n Sort Key: sa_upper.sup_season, 
sa_upper.sup_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 4722kB\n -> Hash Left Join (cost=41.71..1246.13 rows=50969 width=18) (actual time=0.157..10.715 rows=50969 loops=1)\n Hash Cond: ((sa_upper.sup_mat_code)::text = upper_target.up_mat_code)\n -> Seq Scan on sa_upper (cost=0.00..884.69 rows=50969 width=16) (actual time=0.008..2.001 rows=50969 loops=1)\n -> Hash (cost=35.53..35.53 rows=495 width=6) (actual time=0.146..0.146 rows=495 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 27kB\n -> Seq Scan on upper_target (cost=0.00..35.53 rows=495 width=6) (actual time=0.006..0.105 rows=495 loops=1)\n Filter: (id_up <= 495)\n Rows Removed by Filter: 1467\n CTE qli\n -> GroupAggregate (cost=1097.31..1486.56 rows=10469 width=80) (actual time=9.541..27.419 rows=10469 loops=1)\n Group Key: sa_lining.sli_season, sa_lining.sli_sa_code\n -> Sort (cost=1097.31..1126.74 rows=11774 width=18) (actual time=9.534..9.908 rows=11774 loops=1)\n Sort Key: sa_lining.sli_season, sa_lining.sli_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1120kB\n -> Hash Left Join (cost=7.34..301.19 rows=11774 width=18) (actual time=0.049..2.451 rows=11774 loops=1)\n Hash Cond: ((sa_lining.sli_mat_code)::text = lining_target.li_mat_code)\n -> Seq Scan on sa_lining (cost=0.00..204.74 rows=11774 width=16) (actual time=0.010..0.462 rows=11774 loops=1)\n -> Hash (cost=5.86..5.86 rows=118 width=6) (actual time=0.035..0.035 rows=119 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Seq Scan on lining_target (cost=0.00..5.86 rows=118 width=6) (actual time=0.008..0.025 rows=119 loops=1)\n Filter: (id_li <= 119)\n Rows Removed by Filter: 190\n CTE qin\n -> GroupAggregate (cost=1427.34..1880.73 rows=10678 width=80) (actual time=11.649..30.910 rows=10678 loops=1)\n Group Key: sa_insole.sin_season, sa_insole.sin_sa_code\n -> Sort (cost=1427.34..1465.41 rows=15230 width=18) (actual time=11.642..12.115 rows=15230 loops=1)\n Sort Key: sa_insole.sin_season, sa_insole.sin_sa_code COLLATE 
\"C\"\n Sort Method: quicksort Memory: 1336kB\n -> Hash Left Join (cost=10.49..369.26 rows=15230 width=18) (actual time=0.056..3.144 rows=15230 loops=1)\n Hash Cond: ((sa_insole.sin_mat_code)::text = insole_target.in_mat_code)\n -> Seq Scan on sa_insole (cost=0.00..264.30 rows=15230 width=16) (actual time=0.008..0.594 rows=15230 loops=1)\n -> Hash (cost=9.01..9.01 rows=118 width=6) (actual time=0.045..0.046 rows=119 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Seq Scan on insole_target (cost=0.00..9.01 rows=118 width=6) (actual time=0.008..0.034 rows=119 loops=1)\n Filter: (id_in <= 119)\n Rows Removed by Filter: 362\n CTE qou\n -> GroupAggregate (cost=2366.22..2986.89 rows=10699 width=80) (actual time=18.163..51.151 rows=10699 loops=1)\n Group Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n -> Sort (cost=2366.22..2428.14 rows=24768 width=18) (actual time=18.150..20.000 rows=24768 loops=1)\n Sort Key: sa_outsole.sou_season, sa_outsole.sou_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 2317kB\n -> Hash Left Join (cost=5.39..558.63 rows=24768 width=18) (actual time=0.036..5.106 rows=24768 loops=1)\n Hash Cond: ((sa_outsole.sou_mat_code)::text = outsole_target.ou_mat_code)\n -> Seq Scan on sa_outsole (cost=0.00..430.68 rows=24768 width=16) (actual time=0.008..1.005 rows=24768 loops=1)\n -> Hash (cost=5.03..5.03 rows=29 width=6) (actual time=0.024..0.024 rows=29 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 10kB\n -> Seq Scan on outsole_target (cost=0.00..5.03 rows=29 width=6) (actual time=0.007..0.018 rows=29 loops=1)\n Filter: (id_ou <= 29)\n Rows Removed by Filter: 213\n -> Nested Loop (cost=707.35..1328.37 rows=1 width=104) (actual time=139.036..13395.820 rows=8548 loops=1)\n Join Filter: ((qli.curr_season = qin.curr_season) AND ((qli.curr_code)::text = (qin.curr_code)::text))\n Rows Removed by Join Filter: 88552397\n -> Hash Join (cost=707.35..1016.45 rows=1 width=216) (actual time=127.374..168.249 rows=8685 loops=1)\n Hash Cond: 
((qou.curr_season = qli.curr_season) AND ((qou.curr_code)::text = (qli.curr_code)::text))\n -> CTE Scan on qou (cost=0.00..294.22 rows=1189 width=72) (actual time=18.165..54.968 rows=10275 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 11))\n Rows Removed by Filter: 424\n -> Hash (cost=706.86..706.86 rows=33 width=144) (actual time=109.205..109.207 rows=9007 loops=1)\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1369kB\n -> Merge Join (cost=689.20..706.86 rows=33 width=144) (actual time=104.785..107.748 rows=9007 loops=1)\n Merge Cond: ((qup.curr_season = qli.curr_season) AND ((qup.curr_code)::text = (qli.curr_code)::text))\n -> Sort (cost=342.09..344.96 rows=1147 width=72) (actual time=72.320..72.559 rows=9320 loops=1)\n Sort Key: qup.curr_season, qup.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1357kB\n -> CTE Scan on qup (cost=0.00..283.80 rows=1147 width=72) (actual time=35.401..70.834 rows=9320 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 21))\n Rows Removed by Filter: 1415\n -> Sort (cost=347.12..350.02 rows=1163 width=72) (actual time=32.461..32.719 rows=10289 loops=1)\n Sort Key: qli.curr_season, qli.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1269kB\n -> CTE Scan on qli (cost=0.00..287.90 rows=1163 width=72) (actual time=9.543..30.696 rows=10289 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 180\n -> CTE Scan on qin (cost=0.00..293.65 rows=1186 width=72) (actual time=0.001..1.159 rows=10197 loops=8685)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 481\n -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104) (actual time=4.606..6.733 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\n -> Merge Left Join (cost=1958.66..2135.28 rows=5733 width=136) (actual time=3.479..3.930 rows=1415 loops=1)\n 
Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\n -> Merge Left Join (cost=1293.25..1388.21 rows=5733 width=104) (actual time=2.368..2.610 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\n -> Sort (cost=641.68..656.02 rows=5733 width=72) (actual time=1.296..1.335 rows=1415 loops=1)\n Sort Key: qup_1.curr_season, qup_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 204kB\n -> CTE Scan on qup qup_1 (cost=0.00..283.80 rows=5733 width=72) (actual time=0.010..1.119 rows=1415 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 21))\n Rows Removed by Filter: 9320\n -> Sort (cost=651.57..666.11 rows=5816 width=72) (actual time=1.069..1.075 rows=180 loops=1)\n Sort Key: qli_1.curr_season, qli_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 41kB\n -> CTE Scan on qli qli_1 (cost=0.00..287.90 rows=5816 width=72) (actual time=0.057..1.026 rows=180 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10289\n -> Sort (cost=665.41..680.24 rows=5932 width=72) (actual time=1.110..1.124 rows=481 loops=1)\n Sort Key: qin_1.curr_season, qin_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qin qin_1 (cost=0.00..293.65 rows=5932 width=72) (actual time=0.016..1.046 rows=481 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10197\n -> Sort (cost=666.83..681.69 rows=5944 width=72) (actual time=1.119..1.128 rows=417 loops=1)\n Sort Key: qou_1.curr_season, qou_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944 width=72) (actual time=0.029..1.056 rows=424 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 11))\n Rows Removed by Filter: 10275\n Planning Time: 1.746 ms\n Execution Time: 13405.503 ms\n(116 Zeilen)\n\n\nThis case 
really brought me to detect the problem!\n\n\nThe original query and data are not shown here, but the principle should be clear from the execution plans.\n\n\nI think the planner shouldn't change the row estimations on further steps after left joins at all, and be a bit more conservative on inner joins.\nThis may be related to the fact that this case has 2 join-conditions (xx_season and xx_code).\n\n\nThanks for looking\n\n\nHans Buschmann",
"msg_date": "Wed, 8 Feb 2023 13:55:07 +0000",
"msg_from": "Hans Buschmann <buschmann@nidsa.net>",
"msg_from_op": true,
"msg_subject": "Wrong rows estimations with joins of CTEs slows queries by more than\n factor 500"
},
{
"msg_contents": "On 2/8/23 14:55, Hans Buschmann wrote:\n> During data refactoring of our Application I encountered $subject when\n> joining 4 CTEs with left join or inner join.\n> \n> \n> 1. Background\n> \n> PG 15.1 on Windows x64 (OS seems no to have no meening here)\n> \n> \n> I try to collect data from 4 (analyzed) tables (up,li,in,ou) by grouping\n> certain data (4 CTEs qup,qli,qin,qou)\n> \n> The grouping of the data in the CTEs gives estimated row counts of about\n> 1000 (1 tenth of the real value) This is OK for estimation.\n> \n> \n> These 4 CTEs are then used to combine the data by joining them.\n> \n> \n> 2. Problem\n> \n> The 4 CTEs are joined by left joins as shown below:\n>\n...\n> \n> This case really brought me to detect the problem!\n> \n> The original query and data are not shown here, but the principle should\n> be clear from the execution plans.\n> \n> I think the planner shouldn't change the row estimations on further\n> steps after left joins at all, and be a bit more conservative on inner\n> joins.\n\nBut the code should already do exactly that, see:\n\nhttps://github.com/postgres/postgres/blob/dbe8a1726cfd5a09cf1ef99e76f5f89e2efada71/src/backend/optimizer/path/costsize.c#L5212\n\nAnd in fact, the second part of the plans shows it's doing the trick:\n\n -> Merge Left Join (cost=1293.25..1388.21 rows=5733 width=104)\n(actual time=2.321..2.556 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qli_1.curr_season) AND\n ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\n -> Sort (cost=641.68..656.02 rows=5733 width=72)\n -> Sort (cost=651.57..666.11 rows=5816 width=72)\n\nBut notice the first join (with rows=33) doesn't say \"Left\". 
And I see\nthere's Append on top, so presumably the query is much more complex, and\nthere's a regular join of these CTEs in some other part.\n\nWe'll need to see the whole query, not just one chunk of it.\n\nFWIW it seems you're using materialized CTEs - that's likely pretty bad\nfor the estimates, because we don't propagate statistics from the CTE.\nSo a join on CTEs can't see statistics from the underlying tables, and\nthat can easily produce really bad estimates.\n\nI'm assuming you're not using AS MATERIALIZED explicitly, so I'd bet\nthis happens because the \"cardinality\" function is marked as volatile.\nPerhaps it can be redefined as stable/immutable.\n\n> This may be related to the fact that this case has 2 join-conditions\n> (xx_season an xx_code).\n\nThat shouldn't affect outer join estimates this way (but as I explained\nabove, the join does not seem to be \"left\" per the explain).\nMulti-column joins can cause issues, no doubt about it - but CTEs make\nit worse because we can't e.g. see foreign keys.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 Feb 2023 22:27:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "Hello Tomas,\r\n\r\n\r\nThank you for looking at.\r\n\r\nFirst, I miscalculated the factor which should be about 50, not 500. Sorry.\r\n\r\nThen I want to show you the table definitions (simple, very similar, ommited child_tables and additional indexes, here using always \"ONLY\"):\r\n\r\ncpsdb_matcol=# \\d sa_upper;\r\n Tabelle ╗public.sa_upper½\r\n Spalte | Typ | Sortierfolge | NULL erlaubt? | Vorgabewert\r\n--------------+-----------------------+--------------+---------------+----------------------------------\r\n id_sup | integer | | not null | generated by default as identity\r\n sup_season | smallint | | |\r\n sup_sa_code | character varying(10) | C | |\r\n sup_mat_code | character varying(4) | C | |\r\n sup_clr_code | character varying(3) | C | |\r\nIndexe:\r\n \"sa_upper_active_pkey\" PRIMARY KEY, btree (id_sup)\r\n\r\n\r\ncpsdb_matcol=# \\d sa_lining+;\r\n Tabelle ╗public.sa_lining½\r\n Spalte | Typ | Sortierfolge | NULL erlaubt? | Vorgabewert\r\n--------------+-----------------------+--------------+---------------+----------------------------------\r\n id_sli | integer | | not null | generated by default as identity\r\n sli_season | smallint | | |\r\n sli_sa_code | character varying(10) | C | |\r\n sli_mat_code | character varying(4) | C | |\r\n sli_clr_code | character varying(3) | C | |\r\nIndexe:\r\n \"sa_lining_active_pkey\" PRIMARY KEY, btree (id_sli)\r\n\r\n\r\ncpsdb_matcol=# \\d sa_insole;\r\n Tabelle ╗public.sa_insole½\r\n Spalte | Typ | Sortierfolge | NULL erlaubt? 
| Vorgabewert\r\n--------------+-----------------------+--------------+---------------+----------------------------------\r\n id_sin | integer | | not null | generated by default as identity\r\n sin_season | smallint | | |\r\n sin_sa_code | character varying(10) | C | |\r\n sin_mat_code | character varying(4) | C | |\r\n sin_clr_code | character varying(3) | C | |\r\nIndexe:\r\n \"sa_insole_active_pkey\" PRIMARY KEY, btree (id_sin)\r\n\r\n\r\ncpsdb_matcol=# \\d sa_outsole;\r\n Tabelle ╗public.sa_outsole½\r\n Spalte | Typ | Sortierfolge | NULL erlaubt? | Vorgabewert\r\n--------------+-----------------------+--------------+---------------+----------------------------------\r\n id_sou | integer | | not null | generated by default as identity\r\n sou_season | smallint | | |\r\n sou_sa_code | character varying(10) | C | |\r\n sou_mat_code | character varying(4) | C | |\r\n sou_clr_code | character varying(3) | C | |\r\nIndexe:\r\n \"sa_outsole_active_pkey\" PRIMARY KEY, btree (id_sou)\r\n\r\nThe xxx_target tables are very similiar, here the upper one as an example:\r\nThey are count_aggregates of the whole dataset, where up_mat_code=sup_mat_code etc.\r\n\r\ncpsdb_matcol=# \\d upper_target\r\n Tabelle ╗admin.upper_target½\r\n Spalte | Typ | Sortierfolge | NULL erlaubt? | Vorgabewert\r\n-------------+----------+--------------+---------------+-------------\r\n id_up | smallint | | |\r\n nup | integer | | |\r\n up_mat_code | text | C | |\r\n\r\n\r\n\r\nI have reworked the two queries to show their complete explain plans:\r\n\r\n1. 
query with left join in the qupd CTE:\r\n\r\n\\set only 'ONLY'\r\n\r\ncpsdb_matcol=# explain analyze -- explain analyze verbose -- explain -- select * from ( -- select count(*) from ( -- select length(sel) from (\r\ncpsdb_matcol-# with\r\ncpsdb_matcol-# qup as (\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# curr_season -- all xxx_seasosn are always smallint\r\ncpsdb_matcol(# ,curr_code-- all xx_code are always varchar(10)\r\ncpsdb_matcol(# ,array_agg(id_up order by id_up)||array_fill(0::smallint,array[10]) as mat_arr\r\ncpsdb_matcol(# ,array_agg(curr_mat_code order by id_up) as matcode_arr\r\ncpsdb_matcol(# ,bit_or(imask) as ibitmask\r\ncpsdb_matcol(# from(\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# sup_season as curr_season\r\ncpsdb_matcol(# ,sup_sa_code as curr_code\r\ncpsdb_matcol(# ,sup_mat_code as curr_mat_code\r\ncpsdb_matcol(# ,sup_clr_code as curr_clr_code\r\ncpsdb_matcol(# ,id_up\r\ncpsdb_matcol(# ,coalesce(id_up,-1) as imask\r\ncpsdb_matcol(# from :only sa_upper\r\ncpsdb_matcol(# left join upper_target on up_mat_code=sup_mat_code and id_up <= (512-1-16)\r\ncpsdb_matcol(# )qr\r\ncpsdb_matcol(# group by 1,2\r\ncpsdb_matcol(# )\r\ncpsdb_matcol-# ,qli as (\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# curr_season\r\ncpsdb_matcol(# ,curr_code\r\ncpsdb_matcol(# ,array_agg(id_li order by id_li)||array_fill(0::smallint,array[4]) as mat_arr\r\ncpsdb_matcol(# ,array_agg(curr_mat_code order by id_li) as matcode_arr\r\ncpsdb_matcol(# ,bit_or(imask) as ibitmask\r\ncpsdb_matcol(# from(\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# sli_season as curr_season\r\ncpsdb_matcol(# ,sli_sa_code as curr_code\r\ncpsdb_matcol(# ,sli_mat_code as curr_mat_code\r\ncpsdb_matcol(# ,sli_clr_code as curr_clr_code\r\ncpsdb_matcol(# ,id_li\r\ncpsdb_matcol(# ,coalesce(id_li,-1) as imask\r\ncpsdb_matcol(# from :only sa_lining\r\ncpsdb_matcol(# left join lining_target on li_mat_code=sli_mat_code and id_li <= (128-1-8)\r\ncpsdb_matcol(# )qr\r\ncpsdb_matcol(# group by 1,2\r\ncpsdb_matcol(# 
)\r\ncpsdb_matcol-# ,qin as (\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# curr_season\r\ncpsdb_matcol(# ,curr_code\r\ncpsdb_matcol(# ,array_agg(id_in order by id_in)||array_fill(0::smallint,array[4]) as mat_arr\r\ncpsdb_matcol(# ,array_agg(curr_mat_code order by id_in) as matcode_arr\r\ncpsdb_matcol(# ,bit_or(imask) as ibitmask\r\ncpsdb_matcol(# from(\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# sin_season as curr_season\r\ncpsdb_matcol(# ,sin_sa_code as curr_code\r\ncpsdb_matcol(# ,sin_mat_code as curr_mat_code\r\ncpsdb_matcol(# ,sin_clr_code as curr_clr_code\r\ncpsdb_matcol(# ,id_in\r\ncpsdb_matcol(# ,coalesce(id_in,-1) as imask\r\ncpsdb_matcol(# from :only sa_insole\r\ncpsdb_matcol(# left join insole_target on in_mat_code=sin_mat_code and id_in <= (128-1-8)\r\ncpsdb_matcol(# )qr\r\ncpsdb_matcol(# group by 1,2\r\ncpsdb_matcol(# )\r\ncpsdb_matcol-# ,qou as (\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# curr_season\r\ncpsdb_matcol(# ,curr_code\r\ncpsdb_matcol(# ,array_agg(id_ou order by id_ou)||array_fill(0::smallint,array[6]) as mat_arr\r\ncpsdb_matcol(# ,array_agg(curr_mat_code order by id_ou) as matcode_arr\r\ncpsdb_matcol(# ,bit_or(imask) as ibitmask\r\ncpsdb_matcol(# from(\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# sou_season as curr_season\r\ncpsdb_matcol(# ,sou_sa_code as curr_code\r\ncpsdb_matcol(# ,sou_mat_code as curr_mat_code\r\ncpsdb_matcol(# ,sou_clr_code as curr_clr_code\r\ncpsdb_matcol(# ,id_ou\r\ncpsdb_matcol(# ,coalesce(id_ou,-1) as imask\r\ncpsdb_matcol(# from :only sa_outsole\r\ncpsdb_matcol(# left join outsole_target on ou_mat_code=sou_mat_code and id_ou <= (32-1-2)\r\ncpsdb_matcol(# )qr\r\ncpsdb_matcol(# group by 1,2\r\ncpsdb_matcol(# )\r\ncpsdb_matcol-# ,qupd as (\r\ncpsdb_matcol(# select * from (\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# qup.curr_season\r\ncpsdb_matcol(# ,qup.curr_code\r\ncpsdb_matcol(# ,qup.ibitmask|qin.ibitmask|qli.ibitmask|qou.ibitmask as ibitmask\r\ncpsdb_matcol(# -- the calculations of new_mat_x are simplified 
here\r\ncpsdb_matcol(# -- in the production version they are a more complex combination of bit masks, bit shifts and bit or of different elements of the arrays\r\ncpsdb_matcol(# ,(qup.mat_arr[1]|qli.mat_arr[1]|qin.mat_arr[1]|qou.mat_arr[1])::bigint as new_mat_1\r\ncpsdb_matcol(#\r\ncpsdb_matcol(# ,(qup.mat_arr[2]|qli.mat_arr[2]|qin.mat_arr[2]|qou.mat_arr[2])::bigint as new_mat_2\r\ncpsdb_matcol(#\r\ncpsdb_matcol(# ,(qup.mat_arr[3]|qli.mat_arr[3]|qin.mat_arr[3]|qou.mat_arr[3])::bigint as new_mat_3\r\ncpsdb_matcol(#\r\ncpsdb_matcol(# from qup\r\ncpsdb_matcol(# left join qli on (qli.curr_season=qup.curr_season and qli.curr_code=qup.curr_code and qli.ibitmask>0 and cardinality(qli.mat_arr) <=8)\r\ncpsdb_matcol(# left join qin on (qin.curr_season=qup.curr_season and qin.curr_code=qup.curr_code and qin.ibitmask>0 and cardinality(qin.mat_arr) <=8)\r\ncpsdb_matcol(# left join qou on (qou.curr_season=qup.curr_season and qou.curr_code=qup.curr_code and qou.ibitmask>0 and cardinality(qou.mat_arr) <=11)\r\ncpsdb_matcol(# where qup.ibitmask>0 and cardinality(qup.mat_arr) <=21\r\ncpsdb_matcol(# )qj\r\ncpsdb_matcol(# where ibitmask is not null\r\ncpsdb_matcol(# )\r\ncpsdb_matcol-# ,qupda as (\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# qup.curr_season\r\ncpsdb_matcol(# ,qup.curr_code\r\ncpsdb_matcol(# ,repeat('0',64)||\r\ncpsdb_matcol(# repeat('11',coalesce(cardinality(qou.matcode_arr),0))||repeat('10',coalesce(cardinality(qin.matcode_arr),0))||\r\ncpsdb_matcol(# repeat('01',coalesce(cardinality(qou.matcode_arr),0))||repeat('00',coalesce(cardinality(qup.matcode_arr),0))||\r\ncpsdb_matcol(# '00' as curr_mattype_bitmask\r\ncpsdb_matcol(# ,qup.matcode_arr||qli.matcode_arr||qin.matcode_arr||qou.matcode_arr as curr_matcode_arr\r\ncpsdb_matcol(# from qup\r\ncpsdb_matcol(# left join qli on qli.curr_season=qup.curr_season and qli.curr_code=qup.curr_code and (qli.ibitmask<0 or cardinality(qli.mat_arr) >8)\r\ncpsdb_matcol(# left join qin on qin.curr_season=qup.curr_season and 
qin.curr_code=qup.curr_code and (qin.ibitmask<0 or cardinality(qin.mat_arr) >8)\r\ncpsdb_matcol(# left join qou on qou.curr_season=qup.curr_season and qou.curr_code=qup.curr_code and (qou.ibitmask<0 or cardinality(qou.mat_arr) >11)\r\ncpsdb_matcol(# where qup.ibitmask<0 or cardinality(qup.mat_arr) >21\r\ncpsdb_matcol(# )\r\ncpsdb_matcol-# select\r\ncpsdb_matcol-# curr_season\r\ncpsdb_matcol-# ,curr_code\r\ncpsdb_matcol-# ,new_mat_1\r\ncpsdb_matcol-# ,new_mat_2\r\ncpsdb_matcol-# ,new_mat_3\r\ncpsdb_matcol-# ,NULL::bigint as new_mattype_bitmask\r\ncpsdb_matcol-# ,NULL as new_mat_codes\r\ncpsdb_matcol-# from qupd\r\ncpsdb_matcol-# union all\r\ncpsdb_matcol-# select\r\ncpsdb_matcol-# curr_season\r\ncpsdb_matcol-# ,curr_code\r\ncpsdb_matcol-# ,NULL::bigint as new_mat_1\r\ncpsdb_matcol-# ,NULL::bigint as new_mat_2\r\ncpsdb_matcol-# ,NULL::bigint as new_mat_3\r\ncpsdb_matcol-# ,substr(curr_mattype_bitmask,length(curr_mattype_bitmask)-63)::bit(64)::bigint as new_mattype_bitmask\r\ncpsdb_matcol-# ,curr_matcode_arr as new_mat_codes\r\ncpsdb_matcol-# from qupda\r\ncpsdb_matcol-# ;\r\n QUERY PLAN\r\n--------------------------------------------------------------------------------------------------------------------------------------------------\r\n Append (cost=13673.81..17462.84 rows=5734 width=104) (actual time=169.382..210.799 rows=9963 loops=1)\r\n CTE qup\r\n -> GroupAggregate (cost=5231.22..6303.78 rows=10320 width=80) (actual time=35.064..68.308 rows=10735 loops=1)\r\n Group Key: sa_upper.sup_season, sa_upper.sup_sa_code\r\n -> Sort (cost=5231.22..5358.64 rows=50969 width=18) (actual time=35.053..36.412 rows=50969 loops=1)\r\n Sort Key: sa_upper.sup_season, sa_upper.sup_sa_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 4722kB\r\n -> Hash Left Join (cost=41.71..1246.13 rows=50969 width=18) (actual time=0.165..10.562 rows=50969 loops=1)\r\n Hash Cond: ((sa_upper.sup_mat_code)::text = upper_target.up_mat_code)\r\n -> Seq Scan on sa_upper (cost=0.00..884.69 rows=50969 
width=16) (actual time=0.006..1.990 rows=50969 loops=1)\r\n -> Hash (cost=35.53..35.53 rows=495 width=6) (actual time=0.157..0.157 rows=495 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 27kB\r\n -> Seq Scan on upper_target (cost=0.00..35.53 rows=495 width=6) (actual time=0.006..0.115 rows=495 loops=1)\r\n Filter: (id_up <= 495)\r\n Rows Removed by Filter: 1467\r\n CTE qli\r\n -> GroupAggregate (cost=1097.31..1486.56 rows=10469 width=80) (actual time=9.354..28.199 rows=10469 loops=1)\r\n Group Key: sa_lining.sli_season, sa_lining.sli_sa_code\r\n -> Sort (cost=1097.31..1126.74 rows=11774 width=18) (actual time=9.347..9.711 rows=11774 loops=1)\r\n Sort Key: sa_lining.sli_season, sa_lining.sli_sa_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 1120kB\r\n -> Hash Left Join (cost=7.34..301.19 rows=11774 width=18) (actual time=0.049..2.397 rows=11774 loops=1)\r\n Hash Cond: ((sa_lining.sli_mat_code)::text = lining_target.li_mat_code)\r\n -> Seq Scan on sa_lining (cost=0.00..204.74 rows=11774 width=16) (actual time=0.009..0.469 rows=11774 loops=1)\r\n -> Hash (cost=5.86..5.86 rows=118 width=6) (actual time=0.037..0.037 rows=119 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\r\n -> Seq Scan on lining_target (cost=0.00..5.86 rows=118 width=6) (actual time=0.008..0.025 rows=119 loops=1)\r\n Filter: (id_li <= 119)\r\n Rows Removed by Filter: 190\r\n CTE qin\r\n -> GroupAggregate (cost=1427.34..1880.73 rows=10678 width=80) (actual time=11.453..32.317 rows=10678 loops=1)\r\n Group Key: sa_insole.sin_season, sa_insole.sin_sa_code\r\n -> Sort (cost=1427.34..1465.41 rows=15230 width=18) (actual time=11.444..11.943 rows=15230 loops=1)\r\n Sort Key: sa_insole.sin_season, sa_insole.sin_sa_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 1336kB\r\n -> Hash Left Join (cost=10.49..369.26 rows=15230 width=18) (actual time=0.051..3.098 rows=15230 loops=1)\r\n Hash Cond: ((sa_insole.sin_mat_code)::text = insole_target.in_mat_code)\r\n -> Seq Scan on sa_insole 
(cost=0.00..264.30 rows=15230 width=16) (actual time=0.007..0.608 rows=15230 loops=1)\r\n -> Hash (cost=9.01..9.01 rows=118 width=6) (actual time=0.041..0.041 rows=119 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\r\n -> Seq Scan on insole_target (cost=0.00..9.01 rows=118 width=6) (actual time=0.007..0.031 rows=119 loops=1)\r\n Filter: (id_in <= 119)\r\n Rows Removed by Filter: 362\r\n CTE qou\r\n -> GroupAggregate (cost=2366.22..2986.89 rows=10699 width=80) (actual time=18.055..42.079 rows=10699 loops=1)\r\n Group Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\r\n -> Sort (cost=2366.22..2428.14 rows=24768 width=18) (actual time=18.043..18.798 rows=24768 loops=1)\r\n Sort Key: sa_outsole.sou_season, sa_outsole.sou_sa_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 2317kB\r\n -> Hash Left Join (cost=5.39..558.63 rows=24768 width=18) (actual time=0.037..5.017 rows=24768 loops=1)\r\n Hash Cond: ((sa_outsole.sou_mat_code)::text = outsole_target.ou_mat_code)\r\n -> Seq Scan on sa_outsole (cost=0.00..430.68 rows=24768 width=16) (actual time=0.008..0.998 rows=24768 loops=1)\r\n -> Hash (cost=5.03..5.03 rows=29 width=6) (actual time=0.025..0.025 rows=29 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 10kB\r\n -> Seq Scan on outsole_target (cost=0.00..5.03 rows=29 width=6) (actual time=0.009..0.020 rows=29 loops=1)\r\n Filter: (id_ou <= 29)\r\n Rows Removed by Filter: 213\r\n -> Hash Join (cost=1015.85..1319.04 rows=1 width=104) (actual time=169.382..203.707 rows=8548 loops=1)\r\n Hash Cond: ((qou.curr_season = qli.curr_season) AND ((qou.curr_code)::text = (qli.curr_code)::text))\r\n Join Filter: ((((qup.ibitmask | qin.ibitmask) | qli.ibitmask) | qou.ibitmask) IS NOT NULL)\r\n -> CTE Scan on qou (cost=0.00..294.22 rows=1189 width=76) (actual time=18.057..45.448 rows=10275 loops=1)\r\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 11))\r\n Rows Removed by Filter: 424\r\n -> Hash (cost=1015.83..1015.83 rows=1 width=228) (actual 
time=151.316..151.317 rows=8845 loops=1)\r\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1899kB\r\n -> Hash Join (cost=707.35..1015.83 rows=1 width=228) (actual time=122.483..149.030 rows=8845 loops=1)\r\n Hash Cond: ((qin.curr_season = qli.curr_season) AND ((qin.curr_code)::text = (qli.curr_code)::text))\r\n -> CTE Scan on qin (cost=0.00..293.65 rows=1186 width=76) (actual time=11.454..35.456 rows=10197 loops=1)\r\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\r\n Rows Removed by Filter: 481\r\n -> Hash (cost=706.86..706.86 rows=33 width=152) (actual time=111.026..111.027 rows=9007 loops=1)\r\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1473kB\r\n -> Merge Join (cost=689.20..706.86 rows=33 width=152) (actual time=106.441..109.505 rows=9007 loops=1)\r\n Merge Cond: ((qup.curr_season = qli.curr_season) AND ((qup.curr_code)::text = (qli.curr_code)::text))\r\n -> Sort (cost=342.09..344.96 rows=1147 width=76) (actual time=73.200..73.429 rows=9320 loops=1)\r\n Sort Key: qup.curr_season, qup.curr_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 1391kB\r\n -> CTE Scan on qup (cost=0.00..283.80 rows=1147 width=76) (actual time=35.067..71.872 rows=9320 loops=1)\r\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 21))\r\n Rows Removed by Filter: 1415\r\n -> Sort (cost=347.12..350.02 rows=1163 width=76) (actual time=33.239..33.490 rows=10289 loops=1)\r\n Sort Key: qli.curr_season, qli.curr_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 1349kB\r\n -> CTE Scan on qli (cost=0.00..287.90 rows=1163 width=76) (actual time=9.355..31.457 rows=10289 loops=1)\r\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\r\n Rows Removed by Filter: 180\r\n -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104) (actual time=4.529..6.645 rows=1415 loops=1)\r\n Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\r\n -> Merge Left Join 
(cost=1958.66..2135.28 rows=5733 width=136) (actual time=3.388..3.833 rows=1415 loops=1)\r\n Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\r\n -> Merge Left Join (cost=1293.25..1388.21 rows=5733 width=104) (actual time=2.297..2.534 rows=1415 loops=1)\r\n Merge Cond: ((qup_1.curr_season = qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\r\n -> Sort (cost=641.68..656.02 rows=5733 width=72) (actual time=1.278..1.315 rows=1415 loops=1)\r\n Sort Key: qup_1.curr_season, qup_1.curr_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 204kB\r\n -> CTE Scan on qup qup_1 (cost=0.00..283.80 rows=5733 width=72) (actual time=0.009..1.081 rows=1415 loops=1)\r\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 21))\r\n Rows Removed by Filter: 9320\r\n -> Sort (cost=651.57..666.11 rows=5816 width=72) (actual time=1.017..1.022 rows=180 loops=1)\r\n Sort Key: qli_1.curr_season, qli_1.curr_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 41kB\r\n -> CTE Scan on qli qli_1 (cost=0.00..287.90 rows=5816 width=72) (actual time=0.054..0.994 rows=180 loops=1)\r\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\r\n Rows Removed by Filter: 10289\r\n -> Sort (cost=665.41..680.24 rows=5932 width=72) (actual time=1.089..1.103 rows=481 loops=1)\r\n Sort Key: qin_1.curr_season, qin_1.curr_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 68kB\r\n -> CTE Scan on qin qin_1 (cost=0.00..293.65 rows=5932 width=72) (actual time=0.016..1.022 rows=481 loops=1)\r\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\r\n Rows Removed by Filter: 10197\r\n -> Sort (cost=666.83..681.69 rows=5944 width=72) (actual time=1.134..1.145 rows=417 loops=1)\r\n Sort Key: qou_1.curr_season, qou_1.curr_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 68kB\r\n -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944 width=72) (actual time=0.029..1.038 rows=424 loops=1)\r\n Filter: ((ibitmask < 0) OR 
(cardinality(mat_arr) > 11))
 Rows Removed by Filter: 10275
 Planning Time: 1.055 ms
 Execution Time: 212.800 ms
(118 rows)

As can be seen in this line of the qupd CTE

 -> Merge Join (cost=689.20..706.86 rows=33 width=152) (actual time=106.441..109.505 rows=9007 loops=1)

the estimated row count of the second join round drops to 33, and for the third round it drops to 1:

 -> Hash Join (cost=707.35..1015.83 rows=1 width=228) (actual time=122.483..149.030 rows=8845 loops=1)

BTW, I don't know why the second join group (part of qupda) gets a completely different plan.


--------------------------------------------


Here is the second case, which differs from the first only by replacing the left joins with inner joins in the join group of qupd:

\set only 'ONLY'

cpsdb_matcol=# explain analyze -- explain analyze verbose -- explain -- select * from ( -- select count(*) from ( -- select length(sel) from (
cpsdb_matcol-# with
cpsdb_matcol-# qup as (
cpsdb_matcol(# select
cpsdb_matcol(# curr_season -- all xxx_season are always smallint
cpsdb_matcol(# ,curr_code -- all xx_code are always varchar(10)
cpsdb_matcol(# ,array_agg(id_up order by id_up)||array_fill(0::smallint,array[10]) as mat_arr
cpsdb_matcol(# ,array_agg(curr_mat_code order by id_up) as matcode_arr
cpsdb_matcol(# ,bit_or(imask) as ibitmask
cpsdb_matcol(# from(
cpsdb_matcol(# select
cpsdb_matcol(# sup_season as curr_season
cpsdb_matcol(# ,sup_sa_code as curr_code
cpsdb_matcol(# ,sup_mat_code as curr_mat_code
cpsdb_matcol(# ,sup_clr_code as curr_clr_code
cpsdb_matcol(# ,id_up
cpsdb_matcol(# ,coalesce(id_up,-1) as imask
cpsdb_matcol(# from :only sa_upper
cpsdb_matcol(# left join upper_target on up_mat_code=sup_mat_code and id_up <= (512-1-16)
cpsdb_matcol(# )qr
cpsdb_matcol(# group by 1,2
cpsdb_matcol(# )
cpsdb_matcol-# ,qli as (
cpsdb_matcol(# select
cpsdb_matcol(# 
curr_season\r\ncpsdb_matcol(# ,curr_code\r\ncpsdb_matcol(# ,array_agg(id_li order by id_li)||array_fill(0::smallint,array[4]) as mat_arr\r\ncpsdb_matcol(# ,array_agg(curr_mat_code order by id_li) as matcode_arr\r\ncpsdb_matcol(# ,bit_or(imask) as ibitmask\r\ncpsdb_matcol(# from(\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# sli_season as curr_season\r\ncpsdb_matcol(# ,sli_sa_code as curr_code\r\ncpsdb_matcol(# ,sli_mat_code as curr_mat_code\r\ncpsdb_matcol(# ,sli_clr_code as curr_clr_code\r\ncpsdb_matcol(# ,id_li\r\ncpsdb_matcol(# ,coalesce(id_li,-1) as imask\r\ncpsdb_matcol(# from :only sa_lining\r\ncpsdb_matcol(# left join lining_target on li_mat_code=sli_mat_code and id_li <= (128-1-8)\r\ncpsdb_matcol(# )qr\r\ncpsdb_matcol(# group by 1,2\r\ncpsdb_matcol(# )\r\ncpsdb_matcol-# ,qin as (\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# curr_season\r\ncpsdb_matcol(# ,curr_code\r\ncpsdb_matcol(# ,array_agg(id_in order by id_in)||array_fill(0::smallint,array[4]) as mat_arr\r\ncpsdb_matcol(# ,array_agg(curr_mat_code order by id_in) as matcode_arr\r\ncpsdb_matcol(# ,bit_or(imask) as ibitmask\r\ncpsdb_matcol(# from(\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# sin_season as curr_season\r\ncpsdb_matcol(# ,sin_sa_code as curr_code\r\ncpsdb_matcol(# ,sin_mat_code as curr_mat_code\r\ncpsdb_matcol(# ,sin_clr_code as curr_clr_code\r\ncpsdb_matcol(# ,id_in\r\ncpsdb_matcol(# ,coalesce(id_in,-1) as imask\r\ncpsdb_matcol(# from :only sa_insole\r\ncpsdb_matcol(# left join insole_target on in_mat_code=sin_mat_code and id_in <= (128-1-8)\r\ncpsdb_matcol(# )qr\r\ncpsdb_matcol(# group by 1,2\r\ncpsdb_matcol(# )\r\ncpsdb_matcol-# ,qou as (\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# curr_season\r\ncpsdb_matcol(# ,curr_code\r\ncpsdb_matcol(# ,array_agg(id_ou order by id_ou)||array_fill(0::smallint,array[6]) as mat_arr\r\ncpsdb_matcol(# ,array_agg(curr_mat_code order by id_ou) as matcode_arr\r\ncpsdb_matcol(# ,bit_or(imask) as ibitmask\r\ncpsdb_matcol(# from(\r\ncpsdb_matcol(# 
select\r\ncpsdb_matcol(# sou_season as curr_season\r\ncpsdb_matcol(# ,sou_sa_code as curr_code\r\ncpsdb_matcol(# ,sou_mat_code as curr_mat_code\r\ncpsdb_matcol(# ,sou_clr_code as curr_clr_code\r\ncpsdb_matcol(# ,id_ou\r\ncpsdb_matcol(# ,coalesce(id_ou,-1) as imask\r\ncpsdb_matcol(# from :only sa_outsole\r\ncpsdb_matcol(# left join outsole_target on ou_mat_code=sou_mat_code and id_ou <= (32-1-2)\r\ncpsdb_matcol(# )qr\r\ncpsdb_matcol(# group by 1,2\r\ncpsdb_matcol(# )\r\ncpsdb_matcol-# ,qupd as (\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# qup.curr_season\r\ncpsdb_matcol(# ,qup.curr_code\r\ncpsdb_matcol(# ,qup.ibitmask|qin.ibitmask|qli.ibitmask|qou.ibitmask as ibitmask\r\ncpsdb_matcol(# -- the calculations of new_mat_x are simplified here\r\ncpsdb_matcol(# -- in the production version they are a more complex combination of bit masks, bit shifts and bit or of different elements of the arrays\r\ncpsdb_matcol(# ,(qup.mat_arr[1]|qli.mat_arr[1]|qin.mat_arr[1]|qou.mat_arr[1])::bigint as new_mat_1\r\ncpsdb_matcol(#\r\ncpsdb_matcol(# ,(qup.mat_arr[2]|qli.mat_arr[2]|qin.mat_arr[2]|qou.mat_arr[2])::bigint as new_mat_2\r\ncpsdb_matcol(#\r\ncpsdb_matcol(# ,(qup.mat_arr[3]|qli.mat_arr[3]|qin.mat_arr[3]|qou.mat_arr[3])::bigint as new_mat_3\r\ncpsdb_matcol(#\r\ncpsdb_matcol(# from qup\r\ncpsdb_matcol(# join qli on (qli.curr_season=qup.curr_season and qli.curr_code=qup.curr_code and qli.ibitmask>0 and cardinality(qli.mat_arr) <=8)\r\ncpsdb_matcol(# join qin on (qin.curr_season=qup.curr_season and qin.curr_code=qup.curr_code and qin.ibitmask>0 and cardinality(qin.mat_arr) <=8)\r\ncpsdb_matcol(# join qou on (qou.curr_season=qup.curr_season and qou.curr_code=qup.curr_code and qou.ibitmask>0 and cardinality(qou.mat_arr) <=11)\r\ncpsdb_matcol(# where qup.ibitmask>0 and cardinality(qup.mat_arr) <=21\r\ncpsdb_matcol(# )\r\ncpsdb_matcol-# ,qupda as (\r\ncpsdb_matcol(# select\r\ncpsdb_matcol(# qup.curr_season\r\ncpsdb_matcol(# ,qup.curr_code\r\ncpsdb_matcol(# 
,repeat('0',64)||\r\ncpsdb_matcol(# repeat('11',coalesce(cardinality(qou.matcode_arr),0))||repeat('10',coalesce(cardinality(qin.matcode_arr),0))||\r\ncpsdb_matcol(# repeat('01',coalesce(cardinality(qou.matcode_arr),0))||repeat('00',coalesce(cardinality(qup.matcode_arr),0))||\r\ncpsdb_matcol(# '00' as curr_mattype_bitmask\r\ncpsdb_matcol(# ,qup.matcode_arr||qli.matcode_arr||qin.matcode_arr||qou.matcode_arr as curr_matcode_arr\r\ncpsdb_matcol(# from qup\r\ncpsdb_matcol(# left join qli on qli.curr_season=qup.curr_season and qli.curr_code=qup.curr_code and (qli.ibitmask<0 or cardinality(qli.mat_arr) >8)\r\ncpsdb_matcol(# left join qin on qin.curr_season=qup.curr_season and qin.curr_code=qup.curr_code and (qin.ibitmask<0 or cardinality(qin.mat_arr) >8)\r\ncpsdb_matcol(# left join qou on qou.curr_season=qup.curr_season and qou.curr_code=qup.curr_code and (qou.ibitmask<0 or cardinality(qou.mat_arr) >11)\r\ncpsdb_matcol(# where qup.ibitmask<0 or cardinality(qup.mat_arr) >21\r\ncpsdb_matcol(# )\r\ncpsdb_matcol-# select\r\ncpsdb_matcol-# curr_season\r\ncpsdb_matcol-# ,curr_code\r\ncpsdb_matcol-# ,new_mat_1\r\ncpsdb_matcol-# ,new_mat_2\r\ncpsdb_matcol-# ,new_mat_3\r\ncpsdb_matcol-# ,NULL::bigint as new_mattype_bitmask\r\ncpsdb_matcol-# ,NULL as new_mat_codes\r\ncpsdb_matcol-# from qupd\r\ncpsdb_matcol-# union all\r\ncpsdb_matcol-# select\r\ncpsdb_matcol-# curr_season\r\ncpsdb_matcol-# ,curr_code\r\ncpsdb_matcol-# ,NULL::bigint as new_mat_1\r\ncpsdb_matcol-# ,NULL::bigint as new_mat_2\r\ncpsdb_matcol-# ,NULL::bigint as new_mat_3\r\ncpsdb_matcol-# ,substr(curr_mattype_bitmask,length(curr_mattype_bitmask)-63)::bit(64)::bigint as new_mattype_bitmask\r\ncpsdb_matcol-# ,curr_matcode_arr as new_mat_codes\r\ncpsdb_matcol-# from qupda\r\ncpsdb_matcol-# ;\r\n QUERY PLAN\r\n--------------------------------------------------------------------------------------------------------------------------------------------\r\n Append (cost=13365.31..17471.72 rows=5734 width=104) (actual 
time=139.730..13430.641 rows=9963 loops=1)\r\n CTE qup\r\n -> GroupAggregate (cost=5231.22..6303.78 rows=10320 width=80) (actual time=35.337..67.779 rows=10735 loops=1)\r\n Group Key: sa_upper.sup_season, sa_upper.sup_sa_code\r\n -> Sort (cost=5231.22..5358.64 rows=50969 width=18) (actual time=35.326..36.704 rows=50969 loops=1)\r\n Sort Key: sa_upper.sup_season, sa_upper.sup_sa_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 4722kB\r\n -> Hash Left Join (cost=41.71..1246.13 rows=50969 width=18) (actual time=0.179..10.787 rows=50969 loops=1)\r\n Hash Cond: ((sa_upper.sup_mat_code)::text = upper_target.up_mat_code)\r\n -> Seq Scan on sa_upper (cost=0.00..884.69 rows=50969 width=16) (actual time=0.009..1.990 rows=50969 loops=1)\r\n -> Hash (cost=35.53..35.53 rows=495 width=6) (actual time=0.164..0.164 rows=495 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 27kB\r\n -> Seq Scan on upper_target (cost=0.00..35.53 rows=495 width=6) (actual time=0.006..0.128 rows=495 loops=1)\r\n Filter: (id_up <= 495)\r\n Rows Removed by Filter: 1467\r\n CTE qli\r\n -> GroupAggregate (cost=1097.31..1486.56 rows=10469 width=80) (actual time=9.434..27.620 rows=10469 loops=1)\r\n Group Key: sa_lining.sli_season, sa_lining.sli_sa_code\r\n -> Sort (cost=1097.31..1126.74 rows=11774 width=18) (actual time=9.424..9.796 rows=11774 loops=1)\r\n Sort Key: sa_lining.sli_season, sa_lining.sli_sa_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 1120kB\r\n -> Hash Left Join (cost=7.34..301.19 rows=11774 width=18) (actual time=0.049..2.444 rows=11774 loops=1)\r\n Hash Cond: ((sa_lining.sli_mat_code)::text = lining_target.li_mat_code)\r\n -> Seq Scan on sa_lining (cost=0.00..204.74 rows=11774 width=16) (actual time=0.009..0.476 rows=11774 loops=1)\r\n -> Hash (cost=5.86..5.86 rows=118 width=6) (actual time=0.036..0.036 rows=119 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\r\n -> Seq Scan on lining_target (cost=0.00..5.86 rows=118 width=6) (actual time=0.008..0.026 rows=119 
loops=1)\r\n Filter: (id_li <= 119)\r\n Rows Removed by Filter: 190\r\n CTE qin\r\n -> GroupAggregate (cost=1427.34..1880.73 rows=10678 width=80) (actual time=11.578..31.510 rows=10678 loops=1)\r\n Group Key: sa_insole.sin_season, sa_insole.sin_sa_code\r\n -> Sort (cost=1427.34..1465.41 rows=15230 width=18) (actual time=11.572..12.044 rows=15230 loops=1)\r\n Sort Key: sa_insole.sin_season, sa_insole.sin_sa_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 1336kB\r\n -> Hash Left Join (cost=10.49..369.26 rows=15230 width=18) (actual time=0.056..3.120 rows=15230 loops=1)\r\n Hash Cond: ((sa_insole.sin_mat_code)::text = insole_target.in_mat_code)\r\n -> Seq Scan on sa_insole (cost=0.00..264.30 rows=15230 width=16) (actual time=0.008..0.609 rows=15230 loops=1)\r\n -> Hash (cost=9.01..9.01 rows=118 width=6) (actual time=0.044..0.045 rows=119 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\r\n -> Seq Scan on insole_target (cost=0.00..9.01 rows=118 width=6) (actual time=0.008..0.033 rows=119 loops=1)\r\n Filter: (id_in <= 119)\r\n Rows Removed by Filter: 362\r\n CTE qou\r\n -> GroupAggregate (cost=2366.22..2986.89 rows=10699 width=80) (actual time=18.295..51.236 rows=10699 loops=1)\r\n Group Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\r\n -> Sort (cost=2366.22..2428.14 rows=24768 width=18) (actual time=18.281..20.157 rows=24768 loops=1)\r\n Sort Key: sa_outsole.sou_season, sa_outsole.sou_sa_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 2317kB\r\n -> Hash Left Join (cost=5.39..558.63 rows=24768 width=18) (actual time=0.036..5.080 rows=24768 loops=1)\r\n Hash Cond: ((sa_outsole.sou_mat_code)::text = outsole_target.ou_mat_code)\r\n -> Seq Scan on sa_outsole (cost=0.00..430.68 rows=24768 width=16) (actual time=0.009..1.017 rows=24768 loops=1)\r\n -> Hash (cost=5.03..5.03 rows=29 width=6) (actual time=0.024..0.025 rows=29 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 10kB\r\n -> Seq Scan on outsole_target (cost=0.00..5.03 rows=29 width=6) 
(actual time=0.007..0.018 rows=29 loops=1)\r\n Filter: (id_ou <= 29)\r\n Rows Removed by Filter: 213\r\n -> Nested Loop (cost=707.35..1327.91 rows=1 width=104) (actual time=139.729..13423.084 rows=8548 loops=1)\r\n Join Filter: ((qli.curr_season = qin.curr_season) AND ((qli.curr_code)::text = (qin.curr_code)::text))\r\n Rows Removed by Join Filter: 88552397\r\n -> Hash Join (cost=707.35..1016.45 rows=1 width=216) (actual time=128.145..169.287 rows=8685 loops=1)\r\n Hash Cond: ((qou.curr_season = qli.curr_season) AND ((qou.curr_code)::text = (qli.curr_code)::text))\r\n -> CTE Scan on qou (cost=0.00..294.22 rows=1189 width=72) (actual time=18.297..55.085 rows=10275 loops=1)\r\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 11))\r\n Rows Removed by Filter: 424\r\n -> Hash (cost=706.86..706.86 rows=33 width=144) (actual time=109.843..109.845 rows=9007 loops=1)\r\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1369kB\r\n -> Merge Join (cost=689.20..706.86 rows=33 width=144) (actual time=105.294..108.377 rows=9007 loops=1)\r\n Merge Cond: ((qup.curr_season = qli.curr_season) AND ((qup.curr_code)::text = (qli.curr_code)::text))\r\n -> Sort (cost=342.09..344.96 rows=1147 width=72) (actual time=72.693..72.923 rows=9320 loops=1)\r\n Sort Key: qup.curr_season, qup.curr_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 1357kB\r\n -> CTE Scan on qup (cost=0.00..283.80 rows=1147 width=72) (actual time=35.339..71.419 rows=9320 loops=1)\r\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 21))\r\n Rows Removed by Filter: 1415\r\n -> Sort (cost=347.12..350.02 rows=1163 width=72) (actual time=32.598..32.861 rows=10289 loops=1)\r\n Sort Key: qli.curr_season, qli.curr_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 1269kB\r\n -> CTE Scan on qli (cost=0.00..287.90 rows=1163 width=72) (actual time=9.436..30.852 rows=10289 loops=1)\r\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\r\n Rows Removed by Filter: 180\r\n -> CTE Scan 
on qin (cost=0.00..293.65 rows=1186 width=72) (actual time=0.001..1.163 rows=10197 loops=8685)\r\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\r\n Rows Removed by Filter: 481\r\n -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104) (actual time=4.622..6.715 rows=1415 loops=1)\r\n Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\r\n -> Merge Left Join (cost=1958.66..2135.28 rows=5733 width=136) (actual time=3.489..3.937 rows=1415 loops=1)\r\n Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\r\n -> Merge Left Join (cost=1293.25..1388.21 rows=5733 width=104) (actual time=2.376..2.614 rows=1415 loops=1)\r\n Merge Cond: ((qup_1.curr_season = qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\r\n -> Sort (cost=641.68..656.02 rows=5733 width=72) (actual time=1.300..1.337 rows=1415 loops=1)\r\n Sort Key: qup_1.curr_season, qup_1.curr_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 204kB\r\n -> CTE Scan on qup qup_1 (cost=0.00..283.80 rows=5733 width=72) (actual time=0.010..1.119 rows=1415 loops=1)\r\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 21))\r\n Rows Removed by Filter: 9320\r\n -> Sort (cost=651.57..666.11 rows=5816 width=72) (actual time=1.073..1.078 rows=180 loops=1)\r\n Sort Key: qli_1.curr_season, qli_1.curr_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 41kB\r\n -> CTE Scan on qli qli_1 (cost=0.00..287.90 rows=5816 width=72) (actual time=0.057..1.029 rows=180 loops=1)\r\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\r\n Rows Removed by Filter: 10289\r\n -> Sort (cost=665.41..680.24 rows=5932 width=72) (actual time=1.111..1.124 rows=481 loops=1)\r\n Sort Key: qin_1.curr_season, qin_1.curr_code COLLATE \"C\"\r\n Sort Method: quicksort Memory: 68kB\r\n -> CTE Scan on qin qin_1 (cost=0.00..293.65 rows=5932 width=72) (actual time=0.016..1.045 rows=481 loops=1)\r\n 
Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))
 Rows Removed by Filter: 10197
 -> Sort (cost=666.83..681.69 rows=5944 width=72) (actual time=1.125..1.135 rows=417 loops=1)
 Sort Key: qou_1.curr_season, qou_1.curr_code COLLATE "C"
 Sort Method: quicksort Memory: 68kB
 -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944 width=72) (actual time=0.029..1.063 rows=424 loops=1)
 Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 11))
 Rows Removed by Filter: 10275
 Planning Time: 0.969 ms
 Execution Time: 13432.726 ms
(116 rows)

(All plans are unchanged, cut/pasted from the psql window.)

In qupd we find the same row estimations as above, as shown in these lines:

 -> Hash (cost=706.86..706.86 rows=33 width=144) (actual time=109.843..109.845 rows=9007 loops=1)

 -> Nested Loop (cost=707.35..1327.91 rows=1 width=104) (actual time=139.729..13423.084 rows=8548 loops=1)

---------

In both queries I haven't used materialized CTEs explicitly, but the first 4 CTEs are used in 2 different subsequent CTEs.

This query is not fully optimized for frequent use; it is only used for refactoring old data, but it will eventually run on a 10-fold bigger dataset.
(Optimizing could eliminate the cardinality function in the join conditions, eliminate the materialized CTEs, etc.)

I only encountered the long execution time in the second query (with inner joins), which led me to analyze it and dig down to the root cause.
The nested loop in the third inner join round took very long and eliminated about 88 million rows (on a quadruple join of 4 datasets of about 10000 tuples each).

I wanted to draw attention to my accidental findings, but I am not able to fully understand or investigate the source code :-(.

I conclude that the row estimation in this example is either wrong ((left) outer join case) or too strict (inner join case: only 1/33 of the estimate from the previous step!).

I hope this updated information 
may help you.

Hans Buschmann


________________________________
From: Tomas Vondra <tomas.vondra@enterprisedb.com>
Sent: Wednesday, February 8, 2023 22:27
To: Hans Buschmann; pgsql-hackers@lists.postgresql.org
Subject: Re: Wrong rows estimations with joins of CTEs slows queries by more than factor 500

On 2/8/23 14:55, Hans Buschmann wrote:
> During data refactoring of our application I encountered $subject when
> joining 4 CTEs with left join or inner join.
>
>
> 1. Background
>
> PG 15.1 on Windows x64 (the OS seems to have no meaning here)
>
>
> I try to collect data from 4 (analyzed) tables (up,li,in,ou) by grouping
> certain data (4 CTEs qup,qli,qin,qou).
>
> The grouping of the data in the CTEs gives estimated row counts of about
> 1000 (1 tenth of the real value). This is OK for estimation.
>
>
> These 4 CTEs are then used to combine the data by joining them.
>
>
> 2. Problem
>
> The 4 CTEs are joined by left joins as shown below:
>
...
>
> This case is what really led me to detect the problem!
>
> The original query and data are not shown here, but the principle should
> be clear from the execution plans.
>
> I think the planner shouldn't change the row estimations on further
> steps after left joins at all, and should be a bit more conservative on
> inner joins.

But the code should already do exactly that, see:

https://github.com/postgres/postgres/blob/dbe8a1726cfd5a09cf1ef99e76f5f89e2efada71/src/backend/optimizer/path/costsize.c#L5212

And in fact, the second part of the plans shows it's doing the trick:

 -> Merge Left Join (cost=1293.25..1388.21 rows=5733 width=104)
(actual time=2.321..2.556 rows=1415 loops=1)
 Merge Cond: ((qup_1.curr_season = qli_1.curr_season) AND
 ((qup_1.curr_code)::text = (qli_1.curr_code)::text))
 -> Sort (cost=641.68..656.02 rows=5733 width=72)
 -> Sort 
(cost=651.57..666.11 rows=5816 width=72)

But notice the first join (with rows=33) doesn't say "Left". And I see
there's Append on top, so presumably the query is much more complex, and
there's a regular join of these CTEs in some other part.

We'll need to see the whole query, not just one chunk of it.

FWIW it seems you're using materialized CTEs - that's likely pretty bad
for the estimates, because we don't propagate statistics from the CTE.
So a join on CTEs can't see statistics from the underlying tables, and
that can easily produce really bad estimates.

I'm assuming you're not using AS MATERIALIZED explicitly, so I'd bet
this happens because the "cardinality" function is marked as volatile.
Perhaps it can be redefined as stable/immutable.

> This may be related to the fact that this case has 2 join-conditions
> (xx_season and xx_code).

That shouldn't affect outer join estimates this way (but as I explained
above, the join does not seem to be "left" per the explain).
Multi-column joins can cause issues, no doubt about it - but CTEs make
it worse because we can't e.g. see foreign keys.

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Hello Tomas,

Thank you for looking into this.

First, I miscalculated the factor; it should be about 50, not 500. Sorry.

Then I want to show you the table definitions (simple, very similar; child tables and additional indexes are omitted here by always using "ONLY"):

cpsdb_matcol=# \d sa_upper;
                                 Table "public.sa_upper"
    Column    |         Type          | Collation | Nullable |             Default
--------------+-----------------------+-----------+----------+----------------------------------
 id_sup       | integer               |           | not null | generated by default as identity
 sup_season   | smallint              |           |          |
 sup_sa_code  | character varying(10) | C         |          |
 sup_mat_code | character varying(4)  | C         |          |
 sup_clr_code | character varying(3)  | C         |          |
Indexes:
    "sa_upper_active_pkey" PRIMARY KEY, btree (id_sup)


cpsdb_matcol=# \d sa_lining;
                                 Table "public.sa_lining"
    Column    |         Type          | Collation | Nullable |             Default
--------------+-----------------------+-----------+----------+----------------------------------
 id_sli       | integer               |           | not null | generated by default as identity
 sli_season   | smallint              |           |          |
 sli_sa_code  | character varying(10) | C         |          |
 sli_mat_code | character varying(4)  | C         |          |
 sli_clr_code | character varying(3)  | C         |          |
Indexes:
    "sa_lining_active_pkey" PRIMARY KEY, btree (id_sli)


cpsdb_matcol=# \d sa_insole;
                                 Table "public.sa_insole"
    Column    |         Type          | Collation | Nullable |             Default
--------------+-----------------------+-----------+----------+----------------------------------
 id_sin       | integer               |           | not null | generated by default as identity
 sin_season   | smallint              |           |          |
 sin_sa_code  | character varying(10) | C         |          |
 sin_mat_code | character varying(4)  | C         |          |
 sin_clr_code | character varying(3)  | C         |          |
Indexes:
    "sa_insole_active_pkey" PRIMARY KEY, btree (id_sin)


cpsdb_matcol=# \d sa_outsole;
                                 Table "public.sa_outsole"
    Column    |         Type          | Collation | Nullable |             Default
--------------+-----------------------+-----------+----------+----------------------------------
 id_sou       | integer               |           | not null | generated by default as identity
 sou_season   | smallint              |           |          |
 sou_sa_code  | character varying(10) | C         |          |
 sou_mat_code | character varying(4)  | C         |          |
 sou_clr_code | character varying(3)  | C         |          |
Indexes:
    "sa_outsole_active_pkey" PRIMARY KEY, btree (id_sou)

The xxx_target tables are very similar; here is the upper one as an example.
They are count aggregates of the whole dataset, where up_mat_code=sup_mat_code etc.

cpsdb_matcol=# \d upper_target
                Table "admin.upper_target"
   Column    |   Type   | Collation | Nullable | Default
-------------+----------+-----------+----------+---------
 id_up       | smallint |           |          |
 nup         | integer  |           |          |
 up_mat_code | text     | C         |          |


I have reworked the two queries to show their complete explain plans:

1. Query with left joins in the qupd CTE:

\set only 'ONLY'

cpsdb_matcol=# explain analyze -- explain analyze verbose -- explain -- select * from ( -- select count(*) from ( -- select length(sel) from (
cpsdb_matcol-# with
cpsdb_matcol-# qup as (
cpsdb_matcol(# select
cpsdb_matcol(# curr_season -- all xxx_season are always smallint
cpsdb_matcol(# ,curr_code -- all xx_code are always varchar(10)
cpsdb_matcol(# ,array_agg(id_up order by id_up)||array_fill(0::smallint,array[10]) as mat_arr
cpsdb_matcol(# ,array_agg(curr_mat_code order by id_up) as matcode_arr
cpsdb_matcol(# ,bit_or(imask) as ibitmask
cpsdb_matcol(# from(
cpsdb_matcol(# select
cpsdb_matcol(# sup_season as curr_season
cpsdb_matcol(# ,sup_sa_code as curr_code
cpsdb_matcol(# ,sup_mat_code as curr_mat_code
cpsdb_matcol(# ,sup_clr_code as curr_clr_code
cpsdb_matcol(# ,id_up
cpsdb_matcol(# ,coalesce(id_up,-1) as imask
cpsdb_matcol(# from :only sa_upper
cpsdb_matcol(# left join upper_target on up_mat_code=sup_mat_code and id_up <= 
(512-1-16)\ncpsdb_matcol(# )qr\ncpsdb_matcol(# group by 1,2\ncpsdb_matcol(# )\ncpsdb_matcol-# ,qli as (\ncpsdb_matcol(# select\ncpsdb_matcol(# curr_season\ncpsdb_matcol(# ,curr_code\ncpsdb_matcol(# ,array_agg(id_li order by id_li)||array_fill(0::smallint,array[4]) as mat_arr\ncpsdb_matcol(# ,array_agg(curr_mat_code order by id_li) as matcode_arr\ncpsdb_matcol(# ,bit_or(imask) as ibitmask\ncpsdb_matcol(# from(\ncpsdb_matcol(# select\ncpsdb_matcol(# sli_season as curr_season\ncpsdb_matcol(# ,sli_sa_code as curr_code\ncpsdb_matcol(# ,sli_mat_code as curr_mat_code\ncpsdb_matcol(# ,sli_clr_code as curr_clr_code\ncpsdb_matcol(# ,id_li\ncpsdb_matcol(# ,coalesce(id_li,-1) as imask\ncpsdb_matcol(# from :only sa_lining\ncpsdb_matcol(# left join lining_target on li_mat_code=sli_mat_code and id_li <= (128-1-8)\ncpsdb_matcol(# )qr\ncpsdb_matcol(# group by 1,2\ncpsdb_matcol(# )\ncpsdb_matcol-# ,qin as (\ncpsdb_matcol(# select\ncpsdb_matcol(# curr_season\ncpsdb_matcol(# ,curr_code\ncpsdb_matcol(# ,array_agg(id_in order by id_in)||array_fill(0::smallint,array[4]) as mat_arr\ncpsdb_matcol(# ,array_agg(curr_mat_code order by id_in) as matcode_arr\ncpsdb_matcol(# ,bit_or(imask) as ibitmask\ncpsdb_matcol(# from(\ncpsdb_matcol(# select\ncpsdb_matcol(# sin_season as curr_season\ncpsdb_matcol(# ,sin_sa_code as curr_code\ncpsdb_matcol(# ,sin_mat_code as curr_mat_code\ncpsdb_matcol(# ,sin_clr_code as curr_clr_code\ncpsdb_matcol(# ,id_in\ncpsdb_matcol(# ,coalesce(id_in,-1) as imask\ncpsdb_matcol(# from :only sa_insole\ncpsdb_matcol(# left join insole_target on in_mat_code=sin_mat_code and id_in <= (128-1-8)\ncpsdb_matcol(# )qr\ncpsdb_matcol(# group by 1,2\ncpsdb_matcol(# )\ncpsdb_matcol-# ,qou as (\ncpsdb_matcol(# select\ncpsdb_matcol(# curr_season\ncpsdb_matcol(# ,curr_code\ncpsdb_matcol(# ,array_agg(id_ou order by id_ou)||array_fill(0::smallint,array[6]) as mat_arr\ncpsdb_matcol(# ,array_agg(curr_mat_code order by id_ou) as matcode_arr\ncpsdb_matcol(# ,bit_or(imask) as 
ibitmask\ncpsdb_matcol(# from(\ncpsdb_matcol(# select\ncpsdb_matcol(# sou_season as curr_season\ncpsdb_matcol(# ,sou_sa_code as curr_code\ncpsdb_matcol(# ,sou_mat_code as curr_mat_code\ncpsdb_matcol(# ,sou_clr_code as curr_clr_code\ncpsdb_matcol(# ,id_ou\ncpsdb_matcol(# ,coalesce(id_ou,-1) as imask\ncpsdb_matcol(# from :only sa_outsole\ncpsdb_matcol(# left join outsole_target on ou_mat_code=sou_mat_code and id_ou <= (32-1-2)\ncpsdb_matcol(# )qr\ncpsdb_matcol(# group by 1,2\ncpsdb_matcol(# )\ncpsdb_matcol-# ,qupd as (\ncpsdb_matcol(# select * from (\ncpsdb_matcol(# select\ncpsdb_matcol(# qup.curr_season\ncpsdb_matcol(# ,qup.curr_code\ncpsdb_matcol(# ,qup.ibitmask|qin.ibitmask|qli.ibitmask|qou.ibitmask as ibitmask\ncpsdb_matcol(# -- the calculations of new_mat_x are simplified here\ncpsdb_matcol(# -- in the production version they are a more complex combination of bit masks, bit shifts and bit or of different elements of the arrays\ncpsdb_matcol(# ,(qup.mat_arr[1]|qli.mat_arr[1]|qin.mat_arr[1]|qou.mat_arr[1])::bigint as new_mat_1\ncpsdb_matcol(#\ncpsdb_matcol(# ,(qup.mat_arr[2]|qli.mat_arr[2]|qin.mat_arr[2]|qou.mat_arr[2])::bigint as new_mat_2\ncpsdb_matcol(#\ncpsdb_matcol(# ,(qup.mat_arr[3]|qli.mat_arr[3]|qin.mat_arr[3]|qou.mat_arr[3])::bigint as new_mat_3\ncpsdb_matcol(#\ncpsdb_matcol(# from qup\ncpsdb_matcol(# left join qli on (qli.curr_season=qup.curr_season and qli.curr_code=qup.curr_code and qli.ibitmask>0 and cardinality(qli.mat_arr) <=8)\ncpsdb_matcol(# left join qin on (qin.curr_season=qup.curr_season and qin.curr_code=qup.curr_code and qin.ibitmask>0 and cardinality(qin.mat_arr) <=8)\ncpsdb_matcol(# left join qou on (qou.curr_season=qup.curr_season and qou.curr_code=qup.curr_code and qou.ibitmask>0 and cardinality(qou.mat_arr) <=11)\ncpsdb_matcol(# where qup.ibitmask>0 and cardinality(qup.mat_arr) <=21\ncpsdb_matcol(# )qj\ncpsdb_matcol(# where ibitmask is not null\ncpsdb_matcol(# )\ncpsdb_matcol-# ,qupda as (\ncpsdb_matcol(# select\ncpsdb_matcol(# 
qup.curr_season\ncpsdb_matcol(# ,qup.curr_code\ncpsdb_matcol(# ,repeat('0',64)||\ncpsdb_matcol(# repeat('11',coalesce(cardinality(qou.matcode_arr),0))||repeat('10',coalesce(cardinality(qin.matcode_arr),0))||\ncpsdb_matcol(# repeat('01',coalesce(cardinality(qou.matcode_arr),0))||repeat('00',coalesce(cardinality(qup.matcode_arr),0))||\ncpsdb_matcol(# '00' as curr_mattype_bitmask\ncpsdb_matcol(# ,qup.matcode_arr||qli.matcode_arr||qin.matcode_arr||qou.matcode_arr as curr_matcode_arr\ncpsdb_matcol(# from qup\ncpsdb_matcol(# left join qli on qli.curr_season=qup.curr_season and qli.curr_code=qup.curr_code and (qli.ibitmask<0 or cardinality(qli.mat_arr) >8)\ncpsdb_matcol(# left join qin on qin.curr_season=qup.curr_season and qin.curr_code=qup.curr_code and (qin.ibitmask<0 or cardinality(qin.mat_arr) >8)\ncpsdb_matcol(# left join qou on qou.curr_season=qup.curr_season and qou.curr_code=qup.curr_code and (qou.ibitmask<0 or cardinality(qou.mat_arr) >11)\ncpsdb_matcol(# where qup.ibitmask<0 or cardinality(qup.mat_arr) >21\ncpsdb_matcol(# )\ncpsdb_matcol-# select\ncpsdb_matcol-# curr_season\ncpsdb_matcol-# ,curr_code\ncpsdb_matcol-# ,new_mat_1\ncpsdb_matcol-# ,new_mat_2\ncpsdb_matcol-# ,new_mat_3\ncpsdb_matcol-# ,NULL::bigint as new_mattype_bitmask\ncpsdb_matcol-# ,NULL as new_mat_codes\ncpsdb_matcol-# from qupd\ncpsdb_matcol-# union all\ncpsdb_matcol-# select\ncpsdb_matcol-# curr_season\ncpsdb_matcol-# ,curr_code\ncpsdb_matcol-# ,NULL::bigint as new_mat_1\ncpsdb_matcol-# ,NULL::bigint as new_mat_2\ncpsdb_matcol-# ,NULL::bigint as new_mat_3\ncpsdb_matcol-# ,substr(curr_mattype_bitmask,length(curr_mattype_bitmask)-63)::bit(64)::bigint as new_mattype_bitmask\ncpsdb_matcol-# ,curr_matcode_arr as new_mat_codes\ncpsdb_matcol-# from qupda\ncpsdb_matcol-# ;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Append (cost=13673.81..17462.84 rows=5734 width=104) (actual 
time=169.382..210.799 rows=9963 loops=1)\n CTE qup\n -> GroupAggregate (cost=5231.22..6303.78 rows=10320 width=80) (actual time=35.064..68.308 rows=10735 loops=1)\n Group Key: sa_upper.sup_season, sa_upper.sup_sa_code\n -> Sort (cost=5231.22..5358.64 rows=50969 width=18) (actual time=35.053..36.412 rows=50969 loops=1)\n Sort Key: sa_upper.sup_season, sa_upper.sup_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 4722kB\n -> Hash Left Join (cost=41.71..1246.13 rows=50969 width=18) (actual time=0.165..10.562 rows=50969 loops=1)\n Hash Cond: ((sa_upper.sup_mat_code)::text = upper_target.up_mat_code)\n -> Seq Scan on sa_upper (cost=0.00..884.69 rows=50969 width=16) (actual time=0.006..1.990 rows=50969 loops=1)\n -> Hash (cost=35.53..35.53 rows=495 width=6) (actual time=0.157..0.157 rows=495 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 27kB\n -> Seq Scan on upper_target (cost=0.00..35.53 rows=495 width=6) (actual time=0.006..0.115 rows=495 loops=1)\n Filter: (id_up <= 495)\n Rows Removed by Filter: 1467\n CTE qli\n -> GroupAggregate (cost=1097.31..1486.56 rows=10469 width=80) (actual time=9.354..28.199 rows=10469 loops=1)\n Group Key: sa_lining.sli_season, sa_lining.sli_sa_code\n -> Sort (cost=1097.31..1126.74 rows=11774 width=18) (actual time=9.347..9.711 rows=11774 loops=1)\n Sort Key: sa_lining.sli_season, sa_lining.sli_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1120kB\n -> Hash Left Join (cost=7.34..301.19 rows=11774 width=18) (actual time=0.049..2.397 rows=11774 loops=1)\n Hash Cond: ((sa_lining.sli_mat_code)::text = lining_target.li_mat_code)\n -> Seq Scan on sa_lining (cost=0.00..204.74 rows=11774 width=16) (actual time=0.009..0.469 rows=11774 loops=1)\n -> Hash (cost=5.86..5.86 rows=118 width=6) (actual time=0.037..0.037 rows=119 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Seq Scan on lining_target (cost=0.00..5.86 rows=118 width=6) (actual time=0.008..0.025 rows=119 loops=1)\n Filter: (id_li <= 119)\n Rows Removed by 
Filter: 190\n CTE qin\n -> GroupAggregate (cost=1427.34..1880.73 rows=10678 width=80) (actual time=11.453..32.317 rows=10678 loops=1)\n Group Key: sa_insole.sin_season, sa_insole.sin_sa_code\n -> Sort (cost=1427.34..1465.41 rows=15230 width=18) (actual time=11.444..11.943 rows=15230 loops=1)\n Sort Key: sa_insole.sin_season, sa_insole.sin_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1336kB\n -> Hash Left Join (cost=10.49..369.26 rows=15230 width=18) (actual time=0.051..3.098 rows=15230 loops=1)\n Hash Cond: ((sa_insole.sin_mat_code)::text = insole_target.in_mat_code)\n -> Seq Scan on sa_insole (cost=0.00..264.30 rows=15230 width=16) (actual time=0.007..0.608 rows=15230 loops=1)\n -> Hash (cost=9.01..9.01 rows=118 width=6) (actual time=0.041..0.041 rows=119 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Seq Scan on insole_target (cost=0.00..9.01 rows=118 width=6) (actual time=0.007..0.031 rows=119 loops=1)\n Filter: (id_in <= 119)\n Rows Removed by Filter: 362\n CTE qou\n -> GroupAggregate (cost=2366.22..2986.89 rows=10699 width=80) (actual time=18.055..42.079 rows=10699 loops=1)\n Group Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n -> Sort (cost=2366.22..2428.14 rows=24768 width=18) (actual time=18.043..18.798 rows=24768 loops=1)\n Sort Key: sa_outsole.sou_season, sa_outsole.sou_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 2317kB\n -> Hash Left Join (cost=5.39..558.63 rows=24768 width=18) (actual time=0.037..5.017 rows=24768 loops=1)\n Hash Cond: ((sa_outsole.sou_mat_code)::text = outsole_target.ou_mat_code)\n -> Seq Scan on sa_outsole (cost=0.00..430.68 rows=24768 width=16) (actual time=0.008..0.998 rows=24768 loops=1)\n -> Hash (cost=5.03..5.03 rows=29 width=6) (actual time=0.025..0.025 rows=29 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 10kB\n -> Seq Scan on outsole_target (cost=0.00..5.03 rows=29 width=6) (actual time=0.009..0.020 rows=29 loops=1)\n Filter: (id_ou <= 29)\n Rows Removed by Filter: 213\n -> Hash Join 
(cost=1015.85..1319.04 rows=1 width=104) (actual time=169.382..203.707 rows=8548 loops=1)\n Hash Cond: ((qou.curr_season = qli.curr_season) AND ((qou.curr_code)::text = (qli.curr_code)::text))\n Join Filter: ((((qup.ibitmask | qin.ibitmask) | qli.ibitmask) | qou.ibitmask) IS NOT NULL)\n -> CTE Scan on qou (cost=0.00..294.22 rows=1189 width=76) (actual time=18.057..45.448 rows=10275 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 11))\n Rows Removed by Filter: 424\n -> Hash (cost=1015.83..1015.83 rows=1 width=228) (actual time=151.316..151.317 rows=8845 loops=1)\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1899kB\n -> Hash Join (cost=707.35..1015.83 rows=1 width=228) (actual time=122.483..149.030 rows=8845 loops=1)\n Hash Cond: ((qin.curr_season = qli.curr_season) AND ((qin.curr_code)::text = (qli.curr_code)::text))\n -> CTE Scan on qin (cost=0.00..293.65 rows=1186 width=76) (actual time=11.454..35.456 rows=10197 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 481\n -> Hash (cost=706.86..706.86 rows=33 width=152) (actual time=111.026..111.027 rows=9007 loops=1)\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1473kB\n -> Merge Join (cost=689.20..706.86 rows=33 width=152) (actual time=106.441..109.505 rows=9007 loops=1)\n Merge Cond: ((qup.curr_season = qli.curr_season) AND ((qup.curr_code)::text = (qli.curr_code)::text))\n -> Sort (cost=342.09..344.96 rows=1147 width=76) (actual time=73.200..73.429 rows=9320 loops=1)\n Sort Key: qup.curr_season, qup.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1391kB\n -> CTE Scan on qup (cost=0.00..283.80 rows=1147 width=76) (actual time=35.067..71.872 rows=9320 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 21))\n Rows Removed by Filter: 1415\n -> Sort (cost=347.12..350.02 rows=1163 width=76) (actual time=33.239..33.490 rows=10289 loops=1)\n Sort Key: qli.curr_season, qli.curr_code 
COLLATE \"C\"\n Sort Method: quicksort Memory: 1349kB\n -> CTE Scan on qli (cost=0.00..287.90 rows=1163 width=76) (actual time=9.355..31.457 rows=10289 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 180\n -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104) (actual time=4.529..6.645 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\n -> Merge Left Join (cost=1958.66..2135.28 rows=5733 width=136) (actual time=3.388..3.833 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\n -> Merge Left Join (cost=1293.25..1388.21 rows=5733 width=104) (actual time=2.297..2.534 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\n -> Sort (cost=641.68..656.02 rows=5733 width=72) (actual time=1.278..1.315 rows=1415 loops=1)\n Sort Key: qup_1.curr_season, qup_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 204kB\n -> CTE Scan on qup qup_1 (cost=0.00..283.80 rows=5733 width=72) (actual time=0.009..1.081 rows=1415 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 21))\n Rows Removed by Filter: 9320\n -> Sort (cost=651.57..666.11 rows=5816 width=72) (actual time=1.017..1.022 rows=180 loops=1)\n Sort Key: qli_1.curr_season, qli_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 41kB\n -> CTE Scan on qli qli_1 (cost=0.00..287.90 rows=5816 width=72) (actual time=0.054..0.994 rows=180 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10289\n -> Sort (cost=665.41..680.24 rows=5932 width=72) (actual time=1.089..1.103 rows=481 loops=1)\n Sort Key: qin_1.curr_season, qin_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qin qin_1 (cost=0.00..293.65 rows=5932 width=72) (actual time=0.016..1.022 rows=481 
loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10197\n -> Sort (cost=666.83..681.69 rows=5944 width=72) (actual time=1.134..1.145 rows=417 loops=1)\n Sort Key: qou_1.curr_season, qou_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944 width=72) (actual time=0.029..1.038 rows=424 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 11))\n Rows Removed by Filter: 10275\n Planning Time: 1.055 ms\n Execution Time: 212.800 ms\n(118 Zeilen)\n\n\r\nAs seen in this line of the qupd CTE\n\n\n\n\r\n -> Merge Join (cost=689.20..706.86 rows=33 width=152) (actual time=106.441..109.505 rows=9007 loops=1)\n\n\r\nthe estimated row count of the second join round drops to 33, and for the third round it drops to 1:\n\n\n\n\r\n -> Hash Join (cost=707.35..1015.83 rows=1 width=228) (actual time=122.483..149.030 rows=8845 loops=1)\n\n\nBTW, I don't know why the second join group (part of qupda) gets a completely different plan.\n\n\n\n\n--------------------------------------------\n\n\n\n\nHere is the second query, differing from the first only in replacing the left joins with inner joins in the join group of qupd:\n\n\n\n\\set only 'ONLY'\n\n\n\n\n\ncpsdb_matcol=# explain analyze -- explain analyze verbose -- explain -- select * from ( -- select count(*) from ( -- select length(sel) from (\ncpsdb_matcol-# with\ncpsdb_matcol-# qup as (\ncpsdb_matcol(# select\ncpsdb_matcol(# curr_season -- all xxx_season are always smallint\ncpsdb_matcol(# ,curr_code -- all xxx_code are always varchar(10)\ncpsdb_matcol(# ,array_agg(id_up order by id_up)||array_fill(0::smallint,array[10]) as mat_arr\ncpsdb_matcol(# ,array_agg(curr_mat_code order by id_up) as matcode_arr\ncpsdb_matcol(# ,bit_or(imask) as ibitmask\ncpsdb_matcol(# from(\ncpsdb_matcol(# select\ncpsdb_matcol(# sup_season as curr_season\ncpsdb_matcol(# ,sup_sa_code as curr_code\ncpsdb_matcol(# ,sup_mat_code as curr_mat_code\ncpsdb_matcol(# 
,sup_clr_code as curr_clr_code\ncpsdb_matcol(# ,id_up\ncpsdb_matcol(# ,coalesce(id_up,-1) as imask\ncpsdb_matcol(# from :only sa_upper\ncpsdb_matcol(# left join upper_target on up_mat_code=sup_mat_code and id_up <= (512-1-16)\ncpsdb_matcol(# )qr\ncpsdb_matcol(# group by 1,2\ncpsdb_matcol(# )\ncpsdb_matcol-# ,qli as (\ncpsdb_matcol(# select\ncpsdb_matcol(# curr_season\ncpsdb_matcol(# ,curr_code\ncpsdb_matcol(# ,array_agg(id_li order by id_li)||array_fill(0::smallint,array[4]) as mat_arr\ncpsdb_matcol(# ,array_agg(curr_mat_code order by id_li) as matcode_arr\ncpsdb_matcol(# ,bit_or(imask) as ibitmask\ncpsdb_matcol(# from(\ncpsdb_matcol(# select\ncpsdb_matcol(# sli_season as curr_season\ncpsdb_matcol(# ,sli_sa_code as curr_code\ncpsdb_matcol(# ,sli_mat_code as curr_mat_code\ncpsdb_matcol(# ,sli_clr_code as curr_clr_code\ncpsdb_matcol(# ,id_li\ncpsdb_matcol(# ,coalesce(id_li,-1) as imask\ncpsdb_matcol(# from :only sa_lining\ncpsdb_matcol(# left join lining_target on li_mat_code=sli_mat_code and id_li <= (128-1-8)\ncpsdb_matcol(# )qr\ncpsdb_matcol(# group by 1,2\ncpsdb_matcol(# )\ncpsdb_matcol-# ,qin as (\ncpsdb_matcol(# select\ncpsdb_matcol(# curr_season\ncpsdb_matcol(# ,curr_code\ncpsdb_matcol(# ,array_agg(id_in order by id_in)||array_fill(0::smallint,array[4]) as mat_arr\ncpsdb_matcol(# ,array_agg(curr_mat_code order by id_in) as matcode_arr\ncpsdb_matcol(# ,bit_or(imask) as ibitmask\ncpsdb_matcol(# from(\ncpsdb_matcol(# select\ncpsdb_matcol(# sin_season as curr_season\ncpsdb_matcol(# ,sin_sa_code as curr_code\ncpsdb_matcol(# ,sin_mat_code as curr_mat_code\ncpsdb_matcol(# ,sin_clr_code as curr_clr_code\ncpsdb_matcol(# ,id_in\ncpsdb_matcol(# ,coalesce(id_in,-1) as imask\ncpsdb_matcol(# from :only sa_insole\ncpsdb_matcol(# left join insole_target on in_mat_code=sin_mat_code and id_in <= (128-1-8)\ncpsdb_matcol(# )qr\ncpsdb_matcol(# group by 1,2\ncpsdb_matcol(# )\ncpsdb_matcol-# ,qou as (\ncpsdb_matcol(# select\ncpsdb_matcol(# curr_season\ncpsdb_matcol(# 
,curr_code\ncpsdb_matcol(# ,array_agg(id_ou order by id_ou)||array_fill(0::smallint,array[6]) as mat_arr\ncpsdb_matcol(# ,array_agg(curr_mat_code order by id_ou) as matcode_arr\ncpsdb_matcol(# ,bit_or(imask) as ibitmask\ncpsdb_matcol(# from(\ncpsdb_matcol(# select\ncpsdb_matcol(# sou_season as curr_season\ncpsdb_matcol(# ,sou_sa_code as curr_code\ncpsdb_matcol(# ,sou_mat_code as curr_mat_code\ncpsdb_matcol(# ,sou_clr_code as curr_clr_code\ncpsdb_matcol(# ,id_ou\ncpsdb_matcol(# ,coalesce(id_ou,-1) as imask\ncpsdb_matcol(# from :only sa_outsole\ncpsdb_matcol(# left join outsole_target on ou_mat_code=sou_mat_code and id_ou <= (32-1-2)\ncpsdb_matcol(# )qr\ncpsdb_matcol(# group by 1,2\ncpsdb_matcol(# )\ncpsdb_matcol-# ,qupd as (\ncpsdb_matcol(# select\ncpsdb_matcol(# qup.curr_season\ncpsdb_matcol(# ,qup.curr_code\ncpsdb_matcol(# ,qup.ibitmask|qin.ibitmask|qli.ibitmask|qou.ibitmask as ibitmask\ncpsdb_matcol(# -- the calculations of new_mat_x are simplified here\ncpsdb_matcol(# -- in the production version they are a more complex combination of bit masks, bit shifts and bit or of different elements of the arrays\ncpsdb_matcol(# ,(qup.mat_arr[1]|qli.mat_arr[1]|qin.mat_arr[1]|qou.mat_arr[1])::bigint as new_mat_1\ncpsdb_matcol(#\ncpsdb_matcol(# ,(qup.mat_arr[2]|qli.mat_arr[2]|qin.mat_arr[2]|qou.mat_arr[2])::bigint as new_mat_2\ncpsdb_matcol(#\ncpsdb_matcol(# ,(qup.mat_arr[3]|qli.mat_arr[3]|qin.mat_arr[3]|qou.mat_arr[3])::bigint as new_mat_3\ncpsdb_matcol(#\ncpsdb_matcol(# from qup\ncpsdb_matcol(# join qli on (qli.curr_season=qup.curr_season and qli.curr_code=qup.curr_code and qli.ibitmask>0 and cardinality(qli.mat_arr) <=8)\ncpsdb_matcol(# join qin on (qin.curr_season=qup.curr_season and qin.curr_code=qup.curr_code and qin.ibitmask>0 and cardinality(qin.mat_arr) <=8)\ncpsdb_matcol(# join qou on (qou.curr_season=qup.curr_season and qou.curr_code=qup.curr_code and qou.ibitmask>0 and cardinality(qou.mat_arr) <=11)\ncpsdb_matcol(# where qup.ibitmask>0 and 
cardinality(qup.mat_arr) <=21\ncpsdb_matcol(# )\ncpsdb_matcol-# ,qupda as (\ncpsdb_matcol(# select\ncpsdb_matcol(# qup.curr_season\ncpsdb_matcol(# ,qup.curr_code\ncpsdb_matcol(# ,repeat('0',64)||\ncpsdb_matcol(# repeat('11',coalesce(cardinality(qou.matcode_arr),0))||repeat('10',coalesce(cardinality(qin.matcode_arr),0))||\ncpsdb_matcol(# repeat('01',coalesce(cardinality(qou.matcode_arr),0))||repeat('00',coalesce(cardinality(qup.matcode_arr),0))||\ncpsdb_matcol(# '00' as curr_mattype_bitmask\ncpsdb_matcol(# ,qup.matcode_arr||qli.matcode_arr||qin.matcode_arr||qou.matcode_arr as curr_matcode_arr\ncpsdb_matcol(# from qup\ncpsdb_matcol(# left join qli on qli.curr_season=qup.curr_season and qli.curr_code=qup.curr_code and (qli.ibitmask<0 or cardinality(qli.mat_arr) >8)\ncpsdb_matcol(# left join qin on qin.curr_season=qup.curr_season and qin.curr_code=qup.curr_code and (qin.ibitmask<0 or cardinality(qin.mat_arr) >8)\ncpsdb_matcol(# left join qou on qou.curr_season=qup.curr_season and qou.curr_code=qup.curr_code and (qou.ibitmask<0 or cardinality(qou.mat_arr) >11)\ncpsdb_matcol(# where qup.ibitmask<0 or cardinality(qup.mat_arr) >21\ncpsdb_matcol(# )\ncpsdb_matcol-# select\ncpsdb_matcol-# curr_season\ncpsdb_matcol-# ,curr_code\ncpsdb_matcol-# ,new_mat_1\ncpsdb_matcol-# ,new_mat_2\ncpsdb_matcol-# ,new_mat_3\ncpsdb_matcol-# ,NULL::bigint as new_mattype_bitmask\ncpsdb_matcol-# ,NULL as new_mat_codes\ncpsdb_matcol-# from qupd\ncpsdb_matcol-# union all\ncpsdb_matcol-# select\ncpsdb_matcol-# curr_season\ncpsdb_matcol-# ,curr_code\ncpsdb_matcol-# ,NULL::bigint as new_mat_1\ncpsdb_matcol-# ,NULL::bigint as new_mat_2\ncpsdb_matcol-# ,NULL::bigint as new_mat_3\ncpsdb_matcol-# ,substr(curr_mattype_bitmask,length(curr_mattype_bitmask)-63)::bit(64)::bigint as new_mattype_bitmask\ncpsdb_matcol-# ,curr_matcode_arr as new_mat_codes\ncpsdb_matcol-# from qupda\ncpsdb_matcol-# ;\n QUERY 
PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Append (cost=13365.31..17471.72 rows=5734 width=104) (actual time=139.730..13430.641 rows=9963 loops=1)\n CTE qup\n -> GroupAggregate (cost=5231.22..6303.78 rows=10320 width=80) (actual time=35.337..67.779 rows=10735 loops=1)\n Group Key: sa_upper.sup_season, sa_upper.sup_sa_code\n -> Sort (cost=5231.22..5358.64 rows=50969 width=18) (actual time=35.326..36.704 rows=50969 loops=1)\n Sort Key: sa_upper.sup_season, sa_upper.sup_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 4722kB\n -> Hash Left Join (cost=41.71..1246.13 rows=50969 width=18) (actual time=0.179..10.787 rows=50969 loops=1)\n Hash Cond: ((sa_upper.sup_mat_code)::text = upper_target.up_mat_code)\n -> Seq Scan on sa_upper (cost=0.00..884.69 rows=50969 width=16) (actual time=0.009..1.990 rows=50969 loops=1)\n -> Hash (cost=35.53..35.53 rows=495 width=6) (actual time=0.164..0.164 rows=495 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 27kB\n -> Seq Scan on upper_target (cost=0.00..35.53 rows=495 width=6) (actual time=0.006..0.128 rows=495 loops=1)\n Filter: (id_up <= 495)\n Rows Removed by Filter: 1467\n CTE qli\n -> GroupAggregate (cost=1097.31..1486.56 rows=10469 width=80) (actual time=9.434..27.620 rows=10469 loops=1)\n Group Key: sa_lining.sli_season, sa_lining.sli_sa_code\n -> Sort (cost=1097.31..1126.74 rows=11774 width=18) (actual time=9.424..9.796 rows=11774 loops=1)\n Sort Key: sa_lining.sli_season, sa_lining.sli_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1120kB\n -> Hash Left Join (cost=7.34..301.19 rows=11774 width=18) (actual time=0.049..2.444 rows=11774 loops=1)\n Hash Cond: ((sa_lining.sli_mat_code)::text = lining_target.li_mat_code)\n -> Seq Scan on sa_lining (cost=0.00..204.74 rows=11774 width=16) (actual time=0.009..0.476 rows=11774 loops=1)\n -> Hash (cost=5.86..5.86 rows=118 width=6) (actual time=0.036..0.036 rows=119 
loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Seq Scan on lining_target (cost=0.00..5.86 rows=118 width=6) (actual time=0.008..0.026 rows=119 loops=1)\n Filter: (id_li <= 119)\n Rows Removed by Filter: 190\n CTE qin\n -> GroupAggregate (cost=1427.34..1880.73 rows=10678 width=80) (actual time=11.578..31.510 rows=10678 loops=1)\n Group Key: sa_insole.sin_season, sa_insole.sin_sa_code\n -> Sort (cost=1427.34..1465.41 rows=15230 width=18) (actual time=11.572..12.044 rows=15230 loops=1)\n Sort Key: sa_insole.sin_season, sa_insole.sin_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1336kB\n -> Hash Left Join (cost=10.49..369.26 rows=15230 width=18) (actual time=0.056..3.120 rows=15230 loops=1)\n Hash Cond: ((sa_insole.sin_mat_code)::text = insole_target.in_mat_code)\n -> Seq Scan on sa_insole (cost=0.00..264.30 rows=15230 width=16) (actual time=0.008..0.609 rows=15230 loops=1)\n -> Hash (cost=9.01..9.01 rows=118 width=6) (actual time=0.044..0.045 rows=119 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Seq Scan on insole_target (cost=0.00..9.01 rows=118 width=6) (actual time=0.008..0.033 rows=119 loops=1)\n Filter: (id_in <= 119)\n Rows Removed by Filter: 362\n CTE qou\n -> GroupAggregate (cost=2366.22..2986.89 rows=10699 width=80) (actual time=18.295..51.236 rows=10699 loops=1)\n Group Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n -> Sort (cost=2366.22..2428.14 rows=24768 width=18) (actual time=18.281..20.157 rows=24768 loops=1)\n Sort Key: sa_outsole.sou_season, sa_outsole.sou_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 2317kB\n -> Hash Left Join (cost=5.39..558.63 rows=24768 width=18) (actual time=0.036..5.080 rows=24768 loops=1)\n Hash Cond: ((sa_outsole.sou_mat_code)::text = outsole_target.ou_mat_code)\n -> Seq Scan on sa_outsole (cost=0.00..430.68 rows=24768 width=16) (actual time=0.009..1.017 rows=24768 loops=1)\n -> Hash (cost=5.03..5.03 rows=29 width=6) (actual time=0.024..0.025 rows=29 loops=1)\n Buckets: 1024 
Batches: 1 Memory Usage: 10kB\n -> Seq Scan on outsole_target (cost=0.00..5.03 rows=29 width=6) (actual time=0.007..0.018 rows=29 loops=1)\n Filter: (id_ou <= 29)\n Rows Removed by Filter: 213\n -> Nested Loop (cost=707.35..1327.91 rows=1 width=104) (actual time=139.729..13423.084 rows=8548 loops=1)\n Join Filter: ((qli.curr_season = qin.curr_season) AND ((qli.curr_code)::text = (qin.curr_code)::text))\n Rows Removed by Join Filter: 88552397\n -> Hash Join (cost=707.35..1016.45 rows=1 width=216) (actual time=128.145..169.287 rows=8685 loops=1)\n Hash Cond: ((qou.curr_season = qli.curr_season) AND ((qou.curr_code)::text = (qli.curr_code)::text))\n -> CTE Scan on qou (cost=0.00..294.22 rows=1189 width=72) (actual time=18.297..55.085 rows=10275 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 11))\n Rows Removed by Filter: 424\n -> Hash (cost=706.86..706.86 rows=33 width=144) (actual time=109.843..109.845 rows=9007 loops=1)\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1369kB\n -> Merge Join (cost=689.20..706.86 rows=33 width=144) (actual time=105.294..108.377 rows=9007 loops=1)\n Merge Cond: ((qup.curr_season = qli.curr_season) AND ((qup.curr_code)::text = (qli.curr_code)::text))\n -> Sort (cost=342.09..344.96 rows=1147 width=72) (actual time=72.693..72.923 rows=9320 loops=1)\n Sort Key: qup.curr_season, qup.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1357kB\n -> CTE Scan on qup (cost=0.00..283.80 rows=1147 width=72) (actual time=35.339..71.419 rows=9320 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 21))\n Rows Removed by Filter: 1415\n -> Sort (cost=347.12..350.02 rows=1163 width=72) (actual time=32.598..32.861 rows=10289 loops=1)\n Sort Key: qli.curr_season, qli.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1269kB\n -> CTE Scan on qli (cost=0.00..287.90 rows=1163 width=72) (actual time=9.436..30.852 rows=10289 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n 
Rows Removed by Filter: 180\n -> CTE Scan on qin (cost=0.00..293.65 rows=1186 width=72) (actual time=0.001..1.163 rows=10197 loops=8685)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 481\n -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104) (actual time=4.622..6.715 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\n -> Merge Left Join (cost=1958.66..2135.28 rows=5733 width=136) (actual time=3.489..3.937 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\n -> Merge Left Join (cost=1293.25..1388.21 rows=5733 width=104) (actual time=2.376..2.614 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\n -> Sort (cost=641.68..656.02 rows=5733 width=72) (actual time=1.300..1.337 rows=1415 loops=1)\n Sort Key: qup_1.curr_season, qup_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 204kB\n -> CTE Scan on qup qup_1 (cost=0.00..283.80 rows=5733 width=72) (actual time=0.010..1.119 rows=1415 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 21))\n Rows Removed by Filter: 9320\n -> Sort (cost=651.57..666.11 rows=5816 width=72) (actual time=1.073..1.078 rows=180 loops=1)\n Sort Key: qli_1.curr_season, qli_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 41kB\n -> CTE Scan on qli qli_1 (cost=0.00..287.90 rows=5816 width=72) (actual time=0.057..1.029 rows=180 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10289\n -> Sort (cost=665.41..680.24 rows=5932 width=72) (actual time=1.111..1.124 rows=481 loops=1)\n Sort Key: qin_1.curr_season, qin_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qin qin_1 (cost=0.00..293.65 rows=5932 width=72) (actual time=0.016..1.045 rows=481 loops=1)\n Filter: 
((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10197\n -> Sort (cost=666.83..681.69 rows=5944 width=72) (actual time=1.125..1.135 rows=417 loops=1)\n Sort Key: qou_1.curr_season, qou_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944 width=72) (actual time=0.029..1.063 rows=424 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 11))\n Rows Removed by Filter: 10275\n Planning Time: 0.969 ms\n Execution Time: 13432.726 ms\n(116 Zeilen)\n\n\r\n(All plans are unchanged, cut/pasted from the psql window.)\n\n\nIn qupd we find the same row estimations as above, as shown in these lines:\n\n\n\n -> Hash (cost=706.86..706.86 rows=33 width=144) (actual time=109.843..109.845 rows=9007 loops=1)\n\n\n -> Nested Loop (cost=707.35..1327.91 rows=1 width=104) (actual time=139.729..13423.084 rows=8548 loops=1)\n\n\n---------\n\n\nIn both queries I haven't used materialized CTEs explicitly, but the first 4 CTEs are each used in 2 different subsequent CTEs.\n\n\nThis query is not fully optimized for frequent use; it is only used for refactoring old data, but eventually it will run on a 10-fold bigger dataset.\n(Optimizing could eliminate the cardinality function in the join conditions, eliminate materialized CTEs etc.)\n\n\nI only encountered the long execution time in the second query (with inner joins), which led me to analyze it and dig down to the root cause.\nThe nested loop in the third inner join round took very long and eliminated about 88.5 million rows (Rows Removed by Join Filter: 88552397, on a quad join of 4 datasets of about 10000 tuples each).\n\n\nI wanted to draw attention to my accidental findings, but I am not able to fully understand or investigate the source code :-(.\n\n\nI conclude that the row estimation in this example seems wrong (the (left) outer join case) or too strict (the inner join case: only 1/33 estimated from the previous step!).\n\n\nI hope this updated information may help you.\n\n\nHans 
Buschmann\n\n\n\nFrom: Tomas Vondra <tomas.vondra@enterprisedb.com>\nSent: Wednesday, 8 February 2023 22:27\nTo: Hans Buschmann; pgsql-hackers@lists.postgresql.org\nSubject: Re: Wrong rows estimations with joins of CTEs slows queries by more than factor 500\n \n\n\n\nOn 2/8/23 14:55, Hans Buschmann wrote:\r\n> During data refactoring of our Application I encountered $subject when\r\n> joining 4 CTEs with left join or inner join.\r\n> \r\n> \r\n> 1. Background\r\n> \r\n> PG 15.1 on Windows x64 (the OS seems to have no meaning here)\r\n> \r\n> \r\n> I try to collect data from 4 (analyzed) tables (up,li,in,ou) by grouping\r\n> certain data (4 CTEs qup,qli,qin,qou)\r\n> \r\n> The grouping of the data in the CTEs gives estimated row counts of about\r\n> 1000 (1 tenth of the real value). This is OK for estimation.\r\n> \r\n> \r\n> These 4 CTEs are then used to combine the data by joining them.\r\n> \r\n> \r\n> 2. Problem\r\n> \r\n> The 4 CTEs are joined by left joins as shown below:\r\n>\r\n...\r\n> \r\n> This case really brought me to detect the problem!\r\n> \r\n> The original query and data are not shown here, but the principle should\r\n> be clear from the execution plans.\r\n> \r\n> I think the planner shouldn't change the row estimations on further\r\n> steps after left joins at all, and be a bit more conservative on inner\r\n> joins.\n\r\nBut the code should already do exactly that, see:\n\nhttps://github.com/postgres/postgres/blob/dbe8a1726cfd5a09cf1ef99e76f5f89e2efada71/src/backend/optimizer/path/costsize.c#L5212\n\r\nAnd in fact, the second part of the plans shows it's doing the trick:\n\r\n -> Merge Left Join (cost=1293.25..1388.21 rows=5733 width=104)\r\n(actual time=2.321..2.556 rows=1415 loops=1)\r\n Merge Cond: ((qup_1.curr_season = qli_1.curr_season) AND\r\n ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\r\n -> Sort (cost=641.68..656.02 rows=5733 width=72)\r\n -> Sort (cost=651.57..666.11 rows=5816 width=72)\n\r\nBut notice the
first join (with rows=33) doesn't say \"Left\". And I see\r\nthere's Append on top, so presumably the query is much more complex, and\r\nthere's a regular join of these CTEs in some other part.\n\r\nWe'll need to see the whole query, not just one chunk of it.\n\r\nFWIW it seems you're using materialized CTEs - that's likely pretty bad\r\nfor the estimates, because we don't propagate statistics from the CTE.\r\nSo a join on CTEs can't see statistics from the underlying tables, and\r\nthat can easily produce really bad estimates.\n\r\nI'm assuming you're not using AS MATERIALIZED explicitly, so I'd bet\r\nthis happens because the \"cardinality\" function is marked as volatile.\r\nPerhaps it can be redefined as stable/immutable.\n\r\n> This may be related to the fact that this case has 2 join-conditions\r\n> (xx_season and xx_code).\n\r\nThat shouldn't affect outer join estimates this way (but as I explained\r\nabove, the join does not seem to be \"left\" per the explain).\r\nMulti-column joins can cause issues, no doubt about it - but CTEs make\r\nit worse because we can't e.g. see foreign keys.\n\r\nregards\n\r\n-- \r\nTomas Vondra\r\nEnterpriseDB: \r\nhttp://www.enterprisedb.com\r\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 9 Feb 2023 09:03:32 +0000",
"msg_from": "Hans Buschmann <buschmann@nidsa.net>",
"msg_from_op": true,
"msg_subject": "AW: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "\n\nOn 2/9/23 10:03, Hans Buschmann wrote:\n> Hello Tomas,\n> \n> \n> Thank you for looking at.\n> \n> \n> First, I miscalculated the factor which should be about 50, not 500. Sorry.\n> \n> Then I want to show you the table definitions (simple, very similar,\n> ommited child_tables and additional indexes, here using always \"ONLY\"):\n> \n> cpsdb_matcol=# \\d sa_upper;\n> Tabelle ╗public.sa_upper½\n> Spalte | Typ | Sortierfolge | NULL erlaubt? | \n> Vorgabewert\n> --------------+-----------------------+--------------+---------------+----------------------------------\n> id_sup | integer | | not null |\n> generated by default as identity\n> sup_season | smallint | | |\n> sup_sa_code | character varying(10) | C | |\n> sup_mat_code | character varying(4) | C | |\n> sup_clr_code | character varying(3) | C | |\n> Indexe:\n> \"sa_upper_active_pkey\" PRIMARY KEY, btree (id_sup)\n> \n> \n> cpsdb_matcol=# \\d sa_lining+;\n> Tabelle ╗public.sa_lining½\n> Spalte | Typ | Sortierfolge | NULL erlaubt? | \n> Vorgabewert\n> --------------+-----------------------+--------------+---------------+----------------------------------\n> id_sli | integer | | not null |\n> generated by default as identity\n> sli_season | smallint | | |\n> sli_sa_code | character varying(10) | C | |\n> sli_mat_code | character varying(4) | C | |\n> sli_clr_code | character varying(3) | C | |\n> Indexe:\n> \"sa_lining_active_pkey\" PRIMARY KEY, btree (id_sli)\n> \n> \n> cpsdb_matcol=# \\d sa_insole;\n> Tabelle ╗public.sa_insole½\n> Spalte | Typ | Sortierfolge | NULL erlaubt? 
| \n> Vorgabewert\n> --------------+-----------------------+--------------+---------------+----------------------------------\n> id_sin | integer | | not null |\n> generated by default as identity\n> sin_season | smallint | | |\n> sin_sa_code | character varying(10) | C | |\n> sin_mat_code | character varying(4) | C | |\n> sin_clr_code | character varying(3) | C | |\n> Indexe:\n> \"sa_insole_active_pkey\" PRIMARY KEY, btree (id_sin)\n> \n> \n> cpsdb_matcol=# \\d sa_outsole;\n> Tabelle »public.sa_outsole«\n> Spalte | Typ | Sortierfolge | NULL erlaubt? | \n> Vorgabewert\n> --------------+-----------------------+--------------+---------------+----------------------------------\n> id_sou | integer | | not null |\n> generated by default as identity\n> sou_season | smallint | | |\n> sou_sa_code | character varying(10) | C | |\n> sou_mat_code | character varying(4) | C | |\n> sou_clr_code | character varying(3) | C | |\n> Indexe:\n> \"sa_outsole_active_pkey\" PRIMARY KEY, btree (id_sou)\n> \n> The xxx_target tables are very similar, here the upper one as an example:\n> They are count_aggregates of the whole dataset, where\n> up_mat_code=sup_mat_code etc.\n> \n> cpsdb_matcol=# \\d upper_target\n> Tabelle »admin.upper_target«\n> Spalte | Typ | Sortierfolge | NULL erlaubt? | Vorgabewert\n> -------------+----------+--------------+---------------+-------------\n> id_up | smallint | | |\n> nup | integer | | |\n> up_mat_code | text | C | |\n> \n> \n> \n> I have reworked the two queries to show their complete explain plans:\n> \n> 1. 
query with left join in the qupd CTE:\n> \n> \\set only 'ONLY'\n> \n> cpsdb_matcol=# explain analyze -- explain analyze verbose -- explain --\n> select * from ( -- select count(*) from ( -- select length(sel) from (\n> cpsdb_matcol-# with\n> cpsdb_matcol-# qup as (\n> cpsdb_matcol(# select\n> cpsdb_matcol(# curr_season -- all xxx_seasosn are always smallint\n> cpsdb_matcol(# ,curr_code-- all xx_code are always varchar(10)\n> cpsdb_matcol(# ,array_agg(id_up order by\n> id_up)||array_fill(0::smallint,array[10]) as mat_arr\n> cpsdb_matcol(# ,array_agg(curr_mat_code order by id_up) as matcode_arr\n> cpsdb_matcol(# ,bit_or(imask) as ibitmask\n> cpsdb_matcol(# from(\n> cpsdb_matcol(# select\n> cpsdb_matcol(# sup_season as curr_season\n> cpsdb_matcol(# ,sup_sa_code as curr_code\n> cpsdb_matcol(# ,sup_mat_code as curr_mat_code\n> cpsdb_matcol(# ,sup_clr_code as curr_clr_code\n> cpsdb_matcol(# ,id_up\n> cpsdb_matcol(# ,coalesce(id_up,-1) as imask\n> cpsdb_matcol(# from :only sa_upper\n> cpsdb_matcol(# left join upper_target on up_mat_code=sup_mat_code and\n> id_up <= (512-1-16)\n> cpsdb_matcol(# )qr\n> cpsdb_matcol(# group by 1,2\n> cpsdb_matcol(# )\n> cpsdb_matcol-# ,qli as (\n> cpsdb_matcol(# select\n> cpsdb_matcol(# curr_season\n> cpsdb_matcol(# ,curr_code\n> cpsdb_matcol(# ,array_agg(id_li order by\n> id_li)||array_fill(0::smallint,array[4]) as mat_arr\n> cpsdb_matcol(# ,array_agg(curr_mat_code order by id_li) as matcode_arr\n> cpsdb_matcol(# ,bit_or(imask) as ibitmask\n> cpsdb_matcol(# from(\n> cpsdb_matcol(# select\n> cpsdb_matcol(# sli_season as curr_season\n> cpsdb_matcol(# ,sli_sa_code as curr_code\n> cpsdb_matcol(# ,sli_mat_code as curr_mat_code\n> cpsdb_matcol(# ,sli_clr_code as curr_clr_code\n> cpsdb_matcol(# ,id_li\n> cpsdb_matcol(# ,coalesce(id_li,-1) as imask\n> cpsdb_matcol(# from :only sa_lining\n> cpsdb_matcol(# left join lining_target on li_mat_code=sli_mat_code and\n> id_li <= (128-1-8)\n> cpsdb_matcol(# )qr\n> cpsdb_matcol(# group by 1,2\n> 
cpsdb_matcol(# )\n> cpsdb_matcol-# ,qin as (\n> cpsdb_matcol(# select\n> cpsdb_matcol(# curr_season\n> cpsdb_matcol(# ,curr_code\n> cpsdb_matcol(# ,array_agg(id_in order by\n> id_in)||array_fill(0::smallint,array[4]) as mat_arr\n> cpsdb_matcol(# ,array_agg(curr_mat_code order by id_in) as matcode_arr\n> cpsdb_matcol(# ,bit_or(imask) as ibitmask\n> cpsdb_matcol(# from(\n> cpsdb_matcol(# select\n> cpsdb_matcol(# sin_season as curr_season\n> cpsdb_matcol(# ,sin_sa_code as curr_code\n> cpsdb_matcol(# ,sin_mat_code as curr_mat_code\n> cpsdb_matcol(# ,sin_clr_code as curr_clr_code\n> cpsdb_matcol(# ,id_in\n> cpsdb_matcol(# ,coalesce(id_in,-1) as imask\n> cpsdb_matcol(# from :only sa_insole\n> cpsdb_matcol(# left join insole_target on in_mat_code=sin_mat_code and\n> id_in <= (128-1-8)\n> cpsdb_matcol(# )qr\n> cpsdb_matcol(# group by 1,2\n> cpsdb_matcol(# )\n> cpsdb_matcol-# ,qou as (\n> cpsdb_matcol(# select\n> cpsdb_matcol(# curr_season\n> cpsdb_matcol(# ,curr_code\n> cpsdb_matcol(# ,array_agg(id_ou order by\n> id_ou)||array_fill(0::smallint,array[6]) as mat_arr\n> cpsdb_matcol(# ,array_agg(curr_mat_code order by id_ou) as matcode_arr\n> cpsdb_matcol(# ,bit_or(imask) as ibitmask\n> cpsdb_matcol(# from(\n> cpsdb_matcol(# select\n> cpsdb_matcol(# sou_season as curr_season\n> cpsdb_matcol(# ,sou_sa_code as curr_code\n> cpsdb_matcol(# ,sou_mat_code as curr_mat_code\n> cpsdb_matcol(# ,sou_clr_code as curr_clr_code\n> cpsdb_matcol(# ,id_ou\n> cpsdb_matcol(# ,coalesce(id_ou,-1) as imask\n> cpsdb_matcol(# from :only sa_outsole\n> cpsdb_matcol(# left join outsole_target on ou_mat_code=sou_mat_code and\n> id_ou <= (32-1-2)\n> cpsdb_matcol(# )qr\n> cpsdb_matcol(# group by 1,2\n> cpsdb_matcol(# )\n> cpsdb_matcol-# ,qupd as (\n> cpsdb_matcol(# select * from (\n> cpsdb_matcol(# select\n> cpsdb_matcol(# qup.curr_season\n> cpsdb_matcol(# ,qup.curr_code\n> cpsdb_matcol(# ,qup.ibitmask|qin.ibitmask|qli.ibitmask|qou.ibitmask as\n> ibitmask\n> cpsdb_matcol(# -- the calculations of new_mat_x 
are simplified here\n> cpsdb_matcol(# -- in the production version they are a more complex\n> combination of bit masks, bit shifts and bit or of different elements of\n> the arrays\n> cpsdb_matcol(#\n> ,(qup.mat_arr[1]|qli.mat_arr[1]|qin.mat_arr[1]|qou.mat_arr[1])::bigint\n> as new_mat_1\n> cpsdb_matcol(#\n> cpsdb_matcol(#\n> ,(qup.mat_arr[2]|qli.mat_arr[2]|qin.mat_arr[2]|qou.mat_arr[2])::bigint\n> as new_mat_2\n> cpsdb_matcol(#\n> cpsdb_matcol(#\n> ,(qup.mat_arr[3]|qli.mat_arr[3]|qin.mat_arr[3]|qou.mat_arr[3])::bigint\n> as new_mat_3\n> cpsdb_matcol(#\n> cpsdb_matcol(# from qup\n> cpsdb_matcol(# left join qli on (qli.curr_season=qup.curr_season and\n> qli.curr_code=qup.curr_code and qli.ibitmask>0 and\n> cardinality(qli.mat_arr) <=8)\n> cpsdb_matcol(# left join qin on (qin.curr_season=qup.curr_season and\n> qin.curr_code=qup.curr_code and qin.ibitmask>0 and\n> cardinality(qin.mat_arr) <=8)\n> cpsdb_matcol(# left join qou on (qou.curr_season=qup.curr_season and\n> qou.curr_code=qup.curr_code and qou.ibitmask>0 and\n> cardinality(qou.mat_arr) <=11)\n> cpsdb_matcol(# where qup.ibitmask>0 and cardinality(qup.mat_arr) <=21\n> cpsdb_matcol(# )qj\n> cpsdb_matcol(# where ibitmask is not null\n> cpsdb_matcol(# )\n> cpsdb_matcol-# ,qupda as (\n> cpsdb_matcol(# select\n> cpsdb_matcol(# qup.curr_season\n> cpsdb_matcol(# ,qup.curr_code\n> cpsdb_matcol(# ,repeat('0',64)||\n> cpsdb_matcol(#\n> repeat('11',coalesce(cardinality(qou.matcode_arr),0))||repeat('10',coalesce(cardinality(qin.matcode_arr),0))||\n> cpsdb_matcol(#\n> repeat('01',coalesce(cardinality(qou.matcode_arr),0))||repeat('00',coalesce(cardinality(qup.matcode_arr),0))||\n> cpsdb_matcol(# '00' as curr_mattype_bitmask\n> cpsdb_matcol(#\n> ,qup.matcode_arr||qli.matcode_arr||qin.matcode_arr||qou.matcode_arr as\n> curr_matcode_arr\n> cpsdb_matcol(# from qup\n> cpsdb_matcol(# left join qli on qli.curr_season=qup.curr_season and\n> qli.curr_code=qup.curr_code and (qli.ibitmask<0 or\n> cardinality(qli.mat_arr) >8)\n> 
cpsdb_matcol(# left join qin on qin.curr_season=qup.curr_season and\n> qin.curr_code=qup.curr_code and (qin.ibitmask<0 or\n> cardinality(qin.mat_arr) >8)\n> cpsdb_matcol(# left join qou on qou.curr_season=qup.curr_season and\n> qou.curr_code=qup.curr_code and (qou.ibitmask<0 or\n> cardinality(qou.mat_arr) >11)\n> cpsdb_matcol(# where qup.ibitmask<0 or cardinality(qup.mat_arr) >21\n> cpsdb_matcol(# )\n> cpsdb_matcol-# select\n> cpsdb_matcol-# curr_season\n> cpsdb_matcol-# ,curr_code\n> cpsdb_matcol-# ,new_mat_1\n> cpsdb_matcol-# ,new_mat_2\n> cpsdb_matcol-# ,new_mat_3\n> cpsdb_matcol-# ,NULL::bigint as new_mattype_bitmask\n> cpsdb_matcol-# ,NULL as new_mat_codes\n> cpsdb_matcol-# from qupd\n> cpsdb_matcol-# union all\n> cpsdb_matcol-# select\n> cpsdb_matcol-# curr_season\n> cpsdb_matcol-# ,curr_code\n> cpsdb_matcol-# ,NULL::bigint as new_mat_1\n> cpsdb_matcol-# ,NULL::bigint as new_mat_2\n> cpsdb_matcol-# ,NULL::bigint as new_mat_3\n> cpsdb_matcol-#\n> ,substr(curr_mattype_bitmask,length(curr_mattype_bitmask)-63)::bit(64)::bigint as new_mattype_bitmask\n> cpsdb_matcol-# ,curr_matcode_arr as new_mat_codes\n> cpsdb_matcol-# from qupda\n> cpsdb_matcol-# ;\n> \n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------\n> Append (cost=13673.81..17462.84 rows=5734 width=104) (actual\n> time=169.382..210.799 rows=9963 loops=1)\n> CTE qup\n> -> GroupAggregate (cost=5231.22..6303.78 rows=10320 width=80)\n> (actual time=35.064..68.308 rows=10735 loops=1)\n> Group Key: sa_upper.sup_season, sa_upper.sup_sa_code\n> -> Sort (cost=5231.22..5358.64 rows=50969 width=18) (actual\n> time=35.053..36.412 rows=50969 loops=1)\n> Sort Key: sa_upper.sup_season, sa_upper.sup_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 4722kB\n> -> Hash Left Join (cost=41.71..1246.13 rows=50969\n> width=18) (actual 
time=0.165..10.562 rows=50969 loops=1)\n> Hash Cond: ((sa_upper.sup_mat_code)::text =\n> upper_target.up_mat_code)\n> -> Seq Scan on sa_upper (cost=0.00..884.69\n> rows=50969 width=16) (actual time=0.006..1.990 rows=50969 loops=1)\n> -> Hash (cost=35.53..35.53 rows=495 width=6)\n> (actual time=0.157..0.157 rows=495 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 27kB\n> -> Seq Scan on upper_target \n> (cost=0.00..35.53 rows=495 width=6) (actual time=0.006..0.115 rows=495\n> loops=1)\n> Filter: (id_up <= 495)\n> Rows Removed by Filter: 1467\n> CTE qli\n> -> GroupAggregate (cost=1097.31..1486.56 rows=10469 width=80)\n> (actual time=9.354..28.199 rows=10469 loops=1)\n> Group Key: sa_lining.sli_season, sa_lining.sli_sa_code\n> -> Sort (cost=1097.31..1126.74 rows=11774 width=18) (actual\n> time=9.347..9.711 rows=11774 loops=1)\n> Sort Key: sa_lining.sli_season, sa_lining.sli_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 1120kB\n> -> Hash Left Join (cost=7.34..301.19 rows=11774\n> width=18) (actual time=0.049..2.397 rows=11774 loops=1)\n> Hash Cond: ((sa_lining.sli_mat_code)::text =\n> lining_target.li_mat_code)\n> -> Seq Scan on sa_lining (cost=0.00..204.74\n> rows=11774 width=16) (actual time=0.009..0.469 rows=11774 loops=1)\n> -> Hash (cost=5.86..5.86 rows=118 width=6)\n> (actual time=0.037..0.037 rows=119 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 13kB\n> -> Seq Scan on lining_target \n> (cost=0.00..5.86 rows=118 width=6) (actual time=0.008..0.025 rows=119\n> loops=1)\n> Filter: (id_li <= 119)\n> Rows Removed by Filter: 190\n> CTE qin\n> -> GroupAggregate (cost=1427.34..1880.73 rows=10678 width=80)\n> (actual time=11.453..32.317 rows=10678 loops=1)\n> Group Key: sa_insole.sin_season, sa_insole.sin_sa_code\n> -> Sort (cost=1427.34..1465.41 rows=15230 width=18) (actual\n> time=11.444..11.943 rows=15230 loops=1)\n> Sort Key: sa_insole.sin_season, sa_insole.sin_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 1336kB\n> -> Hash Left Join 
(cost=10.49..369.26 rows=15230\n> width=18) (actual time=0.051..3.098 rows=15230 loops=1)\n> Hash Cond: ((sa_insole.sin_mat_code)::text =\n> insole_target.in_mat_code)\n> -> Seq Scan on sa_insole (cost=0.00..264.30\n> rows=15230 width=16) (actual time=0.007..0.608 rows=15230 loops=1)\n> -> Hash (cost=9.01..9.01 rows=118 width=6)\n> (actual time=0.041..0.041 rows=119 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 13kB\n> -> Seq Scan on insole_target \n> (cost=0.00..9.01 rows=118 width=6) (actual time=0.007..0.031 rows=119\n> loops=1)\n> Filter: (id_in <= 119)\n> Rows Removed by Filter: 362\n> CTE qou\n> -> GroupAggregate (cost=2366.22..2986.89 rows=10699 width=80)\n> (actual time=18.055..42.079 rows=10699 loops=1)\n> Group Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n> -> Sort (cost=2366.22..2428.14 rows=24768 width=18) (actual\n> time=18.043..18.798 rows=24768 loops=1)\n> Sort Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 2317kB\n> -> Hash Left Join (cost=5.39..558.63 rows=24768\n> width=18) (actual time=0.037..5.017 rows=24768 loops=1)\n> Hash Cond: ((sa_outsole.sou_mat_code)::text =\n> outsole_target.ou_mat_code)\n> -> Seq Scan on sa_outsole (cost=0.00..430.68\n> rows=24768 width=16) (actual time=0.008..0.998 rows=24768 loops=1)\n> -> Hash (cost=5.03..5.03 rows=29 width=6)\n> (actual time=0.025..0.025 rows=29 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 10kB\n> -> Seq Scan on outsole_target \n> (cost=0.00..5.03 rows=29 width=6) (actual time=0.009..0.020 rows=29 loops=1)\n> Filter: (id_ou <= 29)\n> Rows Removed by Filter: 213\n> -> Hash Join (cost=1015.85..1319.04 rows=1 width=104) (actual\n> time=169.382..203.707 rows=8548 loops=1)\n> Hash Cond: ((qou.curr_season = qli.curr_season) AND\n> ((qou.curr_code)::text = (qli.curr_code)::text))\n> Join Filter: ((((qup.ibitmask | qin.ibitmask) | qli.ibitmask) |\n> qou.ibitmask) IS NOT NULL)\n> -> CTE Scan on qou (cost=0.00..294.22 rows=1189 
width=76)\n> (actual time=18.057..45.448 rows=10275 loops=1)\n> Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 11))\n> Rows Removed by Filter: 424\n> -> Hash (cost=1015.83..1015.83 rows=1 width=228) (actual\n> time=151.316..151.317 rows=8845 loops=1)\n> Buckets: 16384 (originally 1024) Batches: 1 (originally\n> 1) Memory Usage: 1899kB\n> -> Hash Join (cost=707.35..1015.83 rows=1 width=228)\n> (actual time=122.483..149.030 rows=8845 loops=1)\n> Hash Cond: ((qin.curr_season = qli.curr_season) AND\n> ((qin.curr_code)::text = (qli.curr_code)::text))\n> -> CTE Scan on qin (cost=0.00..293.65 rows=1186\n> width=76) (actual time=11.454..35.456 rows=10197 loops=1)\n> Filter: ((ibitmask > 0) AND\n> (cardinality(mat_arr) <= 8))\n> Rows Removed by Filter: 481\n> -> Hash (cost=706.86..706.86 rows=33 width=152)\n> (actual time=111.026..111.027 rows=9007 loops=1)\n> Buckets: 16384 (originally 1024) Batches: 1\n> (originally 1) Memory Usage: 1473kB\n> -> Merge Join (cost=689.20..706.86 rows=33\n> width=152) (actual time=106.441..109.505 rows=9007 loops=1)\n> Merge Cond: ((qup.curr_season =\n> qli.curr_season) AND ((qup.curr_code)::text = (qli.curr_code)::text))\n> -> Sort (cost=342.09..344.96\n> rows=1147 width=76) (actual time=73.200..73.429 rows=9320 loops=1)\n> Sort Key: qup.curr_season,\n> qup.curr_code COLLATE \"C\"\n> Sort Method: quicksort Memory:\n> 1391kB\n> -> CTE Scan on qup \n> (cost=0.00..283.80 rows=1147 width=76) (actual time=35.067..71.872\n> rows=9320 loops=1)\n> Filter: ((ibitmask > 0) AND\n> (cardinality(mat_arr) <= 21))\n> Rows Removed by Filter: 1415\n> -> Sort (cost=347.12..350.02\n> rows=1163 width=76) (actual time=33.239..33.490 rows=10289 loops=1)\n> Sort Key: qli.curr_season,\n> qli.curr_code COLLATE \"C\"\n> Sort Method: quicksort Memory:\n> 1349kB\n> -> CTE Scan on qli \n> (cost=0.00..287.90 rows=1163 width=76) (actual time=9.355..31.457\n> rows=10289 loops=1)\n> Filter: ((ibitmask > 0) AND\n> (cardinality(mat_arr) <= 8))\n> Rows Removed by 
Filter: 180\n> -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104)\n> (actual time=4.529..6.645 rows=1415 loops=1)\n> Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND\n> ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\n> -> Merge Left Join (cost=1958.66..2135.28 rows=5733\n> width=136) (actual time=3.388..3.833 rows=1415 loops=1)\n> Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND\n> ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\n> -> Merge Left Join (cost=1293.25..1388.21 rows=5733\n> width=104) (actual time=2.297..2.534 rows=1415 loops=1)\n> Merge Cond: ((qup_1.curr_season =\n> qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\n> -> Sort (cost=641.68..656.02 rows=5733 width=72)\n> (actual time=1.278..1.315 rows=1415 loops=1)\n> Sort Key: qup_1.curr_season, qup_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 204kB\n> -> CTE Scan on qup qup_1 (cost=0.00..283.80\n> rows=5733 width=72) (actual time=0.009..1.081 rows=1415 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 21))\n> Rows Removed by Filter: 9320\n> -> Sort (cost=651.57..666.11 rows=5816 width=72)\n> (actual time=1.017..1.022 rows=180 loops=1)\n> Sort Key: qli_1.curr_season, qli_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 41kB\n> -> CTE Scan on qli qli_1 (cost=0.00..287.90\n> rows=5816 width=72) (actual time=0.054..0.994 rows=180 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 8))\n> Rows Removed by Filter: 10289\n> -> Sort (cost=665.41..680.24 rows=5932 width=72)\n> (actual time=1.089..1.103 rows=481 loops=1)\n> Sort Key: qin_1.curr_season, qin_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 68kB\n> -> CTE Scan on qin qin_1 (cost=0.00..293.65\n> rows=5932 width=72) (actual time=0.016..1.022 rows=481 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 8))\n> Rows Removed by Filter: 10197\n> -> Sort (cost=666.83..681.69 rows=5944 width=72) 
(actual\n> time=1.134..1.145 rows=417 loops=1)\n> Sort Key: qou_1.curr_season, qou_1.curr_code COLLATE \"C\"\n> Sort Method: quicksort Memory: 68kB\n> -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944\n> width=72) (actual time=0.029..1.038 rows=424 loops=1)\n> Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 11))\n> Rows Removed by Filter: 10275\n> Planning Time: 1.055 ms\n> Execution Time: 212.800 ms\n> (118 Zeilen)\n> \n> As seen in the line of the qupd CTE\n> \n> -> Merge Join (cost=689.20..706.86 rows=33\n> width=152) (actual time=106.441..109.505 rows=9007 loops=1)\n> \n> the row count of the second join round drops to 33 and for the third\n> round it drops to 1\n> \n> -> Hash Join (cost=707.35..1015.83 rows=1 width=228)\n> (actual time=122.483..149.030 rows=8845 loops=1)\n> \n> BTW, I don't know, why the second join group (part of qupda) gets a\n> complete different plan.\n> \n\nIt gets a different plan because the \"qupd\" CTE does this:\n\n SELECT\n ...\n ,qup.ibitmask|qin.ibitmask|qli.ibitmask|qou.ibitmask as ibitmask\n ...\n FROM ... left join of the CTEs\n WHERE qup.ibitmask>0 AND ..\n\nWhich means all the inputs must be non-NULL, hence the optimizer changes\nthe plan to inner join (and that seems to be perfectly correct).\n\nI think this suggests this join cardinality estimation is not the real\nissue. The estimates are off, but there's an order of magnitude\ndifference for the scans, like here:\n\n -> CTE Scan on qup (cost=0.00..283.80 rows=1147 width=72)\n (actual time=35.339..71.419 rows=9320 loops=1)\n\nand this tends to \"snowball\" in the join estimation (it amplifies the\nissue - it can't really improve them, except by chance).\n\nFWIW the UNION ALL also explains why we materialize the CTEs, because by\ndefault we fold CTEs into the query only when there's a single\nreference. 
And here both \"qupd\" and \"qupda\" reference them.\n\nI'd suggest adding AS NOT MATERIALIZED to the CTEs, to fold them into\nthe main query despite multiple references. That might improve the\nestimate, with a bit of luck.\n\nIf not, you'll need to look into improving the scan estimates first,\nit's pointless to try to make join estimates better when the input\nestimates are this off. This however depends on the conditions, and as\nthe CTEs do aggregations that may not be possible.\n\nFWIW I suggest you provide the data in a form that's easier to use (like\na working SQL script). More people are likely to look and help than when\nthey have to extract stuff from an e-mail, fill in missing pieces etc.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 9 Feb 2023 15:29:52 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: AW: Wrong rows estimations with joins of CTEs slows queries by\n more than factor 500"
},
{
"msg_contents": "> \n> FWIW I suggest you provide the data in a form that's easier to use (like\n> a working SQL script). More people are likely to look and help than when\n> they have to extract stuff from an e-mail, fill in missing pieces etc.\n> \n\nBTW if anyone wants to play with this, here are the SQL scripts I used\nto create the tables and the queries. There's no data, but it's enough\nto see how the plans change.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 9 Feb 2023 16:53:08 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: AW: Wrong rows estimations with joins of CTEs slows queries by\n more than factor 500"
},
{
"msg_contents": "Hi hackers,\n\nI have written a patch to add stats info for Vars in CTEs. With this patch, the join size estimation on the upper of CTE scans became more accurate.\n\nIn the function selfuncs.c:eqjoinsel it uses the number of the distinct values of the two join variables to estimate join size, and in the function selfuncs.c:get_variable_numdistinct return a default value DEFAULT_NUM_DISTINCT (200 in Postgres and 1000 in Greenplum), with the default value, you can never expect a good plan.\n\nThanks if anyone could give a review.\n\nRegards,\nJian\n\n________________________________\nFrom: Hans Buschmann <buschmann@nidsa.net>\nSent: Wednesday, February 8, 2023 21:55\nTo: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Wrong rows estimations with joins of CTEs slows queries by more than factor 500\n\n!! External Email\n\nDuring data refactoring of our Application I encountered $subject when joining 4 CTEs with left join or inner join.\n\n\n1. Background\n\nPG 15.1 on Windows x64 (OS seems no to have no meening here)\n\n\nI try to collect data from 4 (analyzed) tables (up,li,in,ou) by grouping certain data (4 CTEs qup,qli,qin,qou)\n\nThe grouping of the data in the CTEs gives estimated row counts of about 1000 (1 tenth of the real value) This is OK for estimation.\n\n\nThese 4 CTEs are then used to combine the data by joining them.\n\n\n2. 
Problem\n\nThe 4 CTEs are joined by left joins as shown below:\n\n\nfrom qup\nleft join qli on (qli.curr_season=qup.curr_season and qli.curr_code=qup.curr_code and qli.ibitmask>0 and cardinality(qli.mat_arr) <=8)\nleft join qin on (qin.curr_season=qup.curr_season and qin.curr_code=qup.curr_code and qin.ibitmask>0 and cardinality(qin.mat_arr) <=8)\nleft join qou on (qou.curr_season=qup.curr_season and qou.curr_code=qup.curr_code and qou.ibitmask>0 and cardinality(qou.mat_arr) <=11)\nwhere qup.ibitmask>0 and cardinality(qup.mat_arr) <=21\n\nThe plan first retrieves qup and qli, taking the estimated row counts of 1163 and 1147 respectively.\n\n\nBUT the result is then hashed and the row count is estimated as 33!\n\n\nIn a left join the row count always stays the same as that of the left table (here qup with 1163 rows).\n\n\nThe same algorithm that reduces the row estimation from 1163 to 33 is used in the next step to give an estimation of 1 row.\n\nThis is totally wrong.\n\n\nHere is the execution plan of the query:\n\n(search the plan for rows=33)\n\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Append (cost=13673.81..17463.30 rows=5734 width=104) (actual time=168.307..222.670 rows=9963 loops=1)\n CTE qup\n -> GroupAggregate (cost=5231.22..6303.78 rows=10320 width=80) (actual time=35.466..68.131 rows=10735 loops=1)\n Group Key: sa_upper.sup_season, sa_upper.sup_sa_code\n -> Sort (cost=5231.22..5358.64 rows=50969 width=18) (actual time=35.454..36.819 rows=50969 loops=1)\n Sort Key: sa_upper.sup_season, sa_upper.sup_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 4722kB\n -> Hash Left Join (cost=41.71..1246.13 rows=50969 width=18) (actual time=0.148..10.687 rows=50969 loops=1)\n Hash Cond: ((sa_upper.sup_mat_code)::text = upper_target.up_mat_code)\n -> Seq Scan on sa_upper (cost=0.00..884.69 rows=50969 width=16) (actual time=0.005..1.972 rows=50969 
loops=1)\n -> Hash (cost=35.53..35.53 rows=495 width=6) (actual time=0.140..0.140 rows=495 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 27kB\n -> Seq Scan on upper_target (cost=0.00..35.53 rows=495 width=6) (actual time=0.007..0.103 rows=495 loops=1)\n Filter: (id_up <= 495)\n Rows Removed by Filter: 1467\n CTE qli\n -> GroupAggregate (cost=1097.31..1486.56 rows=10469 width=80) (actual time=9.446..27.388 rows=10469 loops=1)\n Group Key: sa_lining.sli_season, sa_lining.sli_sa_code\n -> Sort (cost=1097.31..1126.74 rows=11774 width=18) (actual time=9.440..9.811 rows=11774 loops=1)\n Sort Key: sa_lining.sli_season, sa_lining.sli_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1120kB\n -> Hash Left Join (cost=7.34..301.19 rows=11774 width=18) (actual time=0.045..2.438 rows=11774 loops=1)\n Hash Cond: ((sa_lining.sli_mat_code)::text = lining_target.li_mat_code)\n -> Seq Scan on sa_lining (cost=0.00..204.74 rows=11774 width=16) (actual time=0.008..0.470 rows=11774 loops=1)\n -> Hash (cost=5.86..5.86 rows=118 width=6) (actual time=0.034..0.034 rows=119 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Seq Scan on lining_target (cost=0.00..5.86 rows=118 width=6) (actual time=0.008..0.024 rows=119 loops=1)\n Filter: (id_li <= 119)\n Rows Removed by Filter: 190\n CTE qin\n -> GroupAggregate (cost=1427.34..1880.73 rows=10678 width=80) (actual time=11.424..31.508 rows=10678 loops=1)\n Group Key: sa_insole.sin_season, sa_insole.sin_sa_code\n -> Sort (cost=1427.34..1465.41 rows=15230 width=18) (actual time=11.416..11.908 rows=15230 loops=1)\n Sort Key: sa_insole.sin_season, sa_insole.sin_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1336kB\n -> Hash Left Join (cost=10.49..369.26 rows=15230 width=18) (actual time=0.051..3.108 rows=15230 loops=1)\n Hash Cond: ((sa_insole.sin_mat_code)::text = insole_target.in_mat_code)\n -> Seq Scan on sa_insole (cost=0.00..264.30 rows=15230 width=16) (actual time=0.006..0.606 rows=15230 loops=1)\n -> Hash 
(cost=9.01..9.01 rows=118 width=6) (actual time=0.042..0.043 rows=119 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Seq Scan on insole_target (cost=0.00..9.01 rows=118 width=6) (actual time=0.008..0.032 rows=119 loops=1)\n Filter: (id_in <= 119)\n Rows Removed by Filter: 362\n CTE qou\n -> GroupAggregate (cost=2366.22..2986.89 rows=10699 width=80) (actual time=18.198..41.812 rows=10699 loops=1)\n Group Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n -> Sort (cost=2366.22..2428.14 rows=24768 width=18) (actual time=18.187..18.967 rows=24768 loops=1)\n Sort Key: sa_outsole.sou_season, sa_outsole.sou_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 2317kB\n -> Hash Left Join (cost=5.39..558.63 rows=24768 width=18) (actual time=0.046..5.132 rows=24768 loops=1)\n Hash Cond: ((sa_outsole.sou_mat_code)::text = outsole_target.ou_mat_code)\n -> Seq Scan on sa_outsole (cost=0.00..430.68 rows=24768 width=16) (actual time=0.010..1.015 rows=24768 loops=1)\n -> Hash (cost=5.03..5.03 rows=29 width=6) (actual time=0.032..0.032 rows=29 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 10kB\n -> Seq Scan on outsole_target (cost=0.00..5.03 rows=29 width=6) (actual time=0.010..0.025 rows=29 loops=1)\n Filter: (id_ou <= 29)\n Rows Removed by Filter: 213\n -> Hash Join (cost=1015.85..1319.50 rows=1 width=104) (actual time=168.307..215.513 rows=8548 loops=1)\n Hash Cond: ((qou.curr_season = qli.curr_season) AND ((qou.curr_code)::text = (qli.curr_code)::text))\n Join Filter: ((((qup.ibitmask | qin.ibitmask) | qli.ibitmask) | qou.ibitmask) IS NOT NULL)\n -> CTE Scan on qou (cost=0.00..294.22 rows=1189 width=76) (actual time=18.200..45.188 rows=10275 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 11))\n Rows Removed by Filter: 424\n -> Hash (cost=1015.83..1015.83 rows=1 width=228) (actual time=150.094..150.095 rows=8845 loops=1)\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1899kB\n -> Hash Join (cost=707.35..1015.83 
rows=1 width=228) (actual time=121.898..147.726 rows=8845 loops=1)\n Hash Cond: ((qin.curr_season = qli.curr_season) AND ((qin.curr_code)::text = (qli.curr_code)::text))\n -> CTE Scan on qin (cost=0.00..293.65 rows=1186 width=76) (actual time=11.425..34.674 rows=10197 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 481\n -> Hash (cost=706.86..706.86 rows=33 width=152) (actual time=110.470..110.470 rows=9007 loops=1)\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1473kB\n -> Merge Join (cost=689.20..706.86 rows=33 width=152) (actual time=105.862..108.925 rows=9007 loops=1)\n Merge Cond: ((qup.curr_season = qli.curr_season) AND ((qup.curr_code)::text = (qli.curr_code)::text))\n -> Sort (cost=342.09..344.96 rows=1147 width=76) (actual time=73.419..73.653 rows=9320 loops=1)\n Sort Key: qup.curr_season, qup.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1391kB\n -> CTE Scan on qup (cost=0.00..283.80 rows=1147 width=76) (actual time=35.467..71.904 rows=9320 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 21))\n Rows Removed by Filter: 1415\n -> Sort (cost=347.12..350.02 rows=1163 width=76) (actual time=32.440..32.697 rows=10289 loops=1)\n Sort Key: qli.curr_season, qli.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1349kB\n -> CTE Scan on qli (cost=0.00..287.90 rows=1163 width=76) (actual time=9.447..30.666 rows=10289 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 180\n -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104) (actual time=4.597..6.700 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\n -> Merge Left Join (cost=1958.66..2135.28 rows=5733 width=136) (actual time=3.427..3.863 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\n -> Merge 
Left Join (cost=1293.25..1388.21 rows=5733 width=104) (actual time=2.321..2.556 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\n -> Sort (cost=641.68..656.02 rows=5733 width=72) (actual time=1.286..1.324 rows=1415 loops=1)\n Sort Key: qup_1.curr_season, qup_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 204kB\n -> CTE Scan on qup qup_1 (cost=0.00..283.80 rows=5733 width=72) (actual time=0.009..1.093 rows=1415 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 21))\n Rows Removed by Filter: 9320\n -> Sort (cost=651.57..666.11 rows=5816 width=72) (actual time=1.033..1.038 rows=180 loops=1)\n Sort Key: qli_1.curr_season, qli_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 41kB\n -> CTE Scan on qli qli_1 (cost=0.00..287.90 rows=5816 width=72) (actual time=0.055..1.007 rows=180 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10289\n -> Sort (cost=665.41..680.24 rows=5932 width=72) (actual time=1.104..1.117 rows=481 loops=1)\n Sort Key: qin_1.curr_season, qin_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qin qin_1 (cost=0.00..293.65 rows=5932 width=72) (actual time=0.016..1.038 rows=481 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10197\n -> Sort (cost=666.83..681.69 rows=5944 width=72) (actual time=1.163..1.174 rows=417 loops=1)\n Sort Key: qou_1.curr_season, qou_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944 width=72) (actual time=0.029..1.068 rows=424 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 11))\n Rows Removed by Filter: 10275\n Planning Time: 2.297 ms\n Execution Time: 224.759 ms\n(118 Zeilen)\n\n3. 
Slow query resulting from a wrong plan in a similar case with inner joins\n\nWhen the 3 left joins above are changed to inner joins like:\n\nfrom qup\njoin qli on (qli.curr_season=qup.curr_season and qli.curr_code=qup.curr_code and qli.ibitmask>0 and cardinality(qli.mat_arr) <=8)\njoin qin on (qin.curr_season=qup.curr_season and qin.curr_code=qup.curr_code and qin.ibitmask>0 and cardinality(qin.mat_arr) <=8)\njoin qou on (qou.curr_season=qup.curr_season and qou.curr_code=qup.curr_code and qou.ibitmask>0 and cardinality(qou.mat_arr) <=11)\nwhere qup.ibitmask>0 and cardinality(qup.mat_arr) <=21\n\nThe same row estimation takes place as with the left joins, but the planner now decides to use a nested loop for the last join, which results in a 500-fold execution time:\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Append (cost=13365.31..17472.18 rows=5734 width=104) (actual time=139.037..13403.310 rows=9963 loops=1)\n CTE qup\n -> GroupAggregate (cost=5231.22..6303.78 rows=10320 width=80) (actual time=35.399..67.102 rows=10735 loops=1)\n Group Key: sa_upper.sup_season, sa_upper.sup_sa_code\n -> Sort (cost=5231.22..5358.64 rows=50969 width=18) (actual time=35.382..36.743 rows=50969 loops=1)\n Sort Key: sa_upper.sup_season, sa_upper.sup_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 4722kB\n -> Hash Left Join (cost=41.71..1246.13 rows=50969 width=18) (actual time=0.157..10.715 rows=50969 loops=1)\n Hash Cond: ((sa_upper.sup_mat_code)::text = upper_target.up_mat_code)\n -> Seq Scan on sa_upper (cost=0.00..884.69 rows=50969 width=16) (actual time=0.008..2.001 rows=50969 loops=1)\n -> Hash (cost=35.53..35.53 rows=495 width=6) (actual time=0.146..0.146 rows=495 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 27kB\n -> Seq Scan on upper_target (cost=0.00..35.53 rows=495 width=6) (actual time=0.006..0.105 rows=495 loops=1)\n Filter: (id_up <= 495)\n Rows Removed 
by Filter: 1467\n CTE qli\n -> GroupAggregate (cost=1097.31..1486.56 rows=10469 width=80) (actual time=9.541..27.419 rows=10469 loops=1)\n Group Key: sa_lining.sli_season, sa_lining.sli_sa_code\n -> Sort (cost=1097.31..1126.74 rows=11774 width=18) (actual time=9.534..9.908 rows=11774 loops=1)\n Sort Key: sa_lining.sli_season, sa_lining.sli_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1120kB\n -> Hash Left Join (cost=7.34..301.19 rows=11774 width=18) (actual time=0.049..2.451 rows=11774 loops=1)\n Hash Cond: ((sa_lining.sli_mat_code)::text = lining_target.li_mat_code)\n -> Seq Scan on sa_lining (cost=0.00..204.74 rows=11774 width=16) (actual time=0.010..0.462 rows=11774 loops=1)\n -> Hash (cost=5.86..5.86 rows=118 width=6) (actual time=0.035..0.035 rows=119 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Seq Scan on lining_target (cost=0.00..5.86 rows=118 width=6) (actual time=0.008..0.025 rows=119 loops=1)\n Filter: (id_li <= 119)\n Rows Removed by Filter: 190\n CTE qin\n -> GroupAggregate (cost=1427.34..1880.73 rows=10678 width=80) (actual time=11.649..30.910 rows=10678 loops=1)\n Group Key: sa_insole.sin_season, sa_insole.sin_sa_code\n -> Sort (cost=1427.34..1465.41 rows=15230 width=18) (actual time=11.642..12.115 rows=15230 loops=1)\n Sort Key: sa_insole.sin_season, sa_insole.sin_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1336kB\n -> Hash Left Join (cost=10.49..369.26 rows=15230 width=18) (actual time=0.056..3.144 rows=15230 loops=1)\n Hash Cond: ((sa_insole.sin_mat_code)::text = insole_target.in_mat_code)\n -> Seq Scan on sa_insole (cost=0.00..264.30 rows=15230 width=16) (actual time=0.008..0.594 rows=15230 loops=1)\n -> Hash (cost=9.01..9.01 rows=118 width=6) (actual time=0.045..0.046 rows=119 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Seq Scan on insole_target (cost=0.00..9.01 rows=118 width=6) (actual time=0.008..0.034 rows=119 loops=1)\n Filter: (id_in <= 119)\n Rows Removed by Filter: 362\n CTE qou\n -> 
GroupAggregate (cost=2366.22..2986.89 rows=10699 width=80) (actual time=18.163..51.151 rows=10699 loops=1)\n Group Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n -> Sort (cost=2366.22..2428.14 rows=24768 width=18) (actual time=18.150..20.000 rows=24768 loops=1)\n Sort Key: sa_outsole.sou_season, sa_outsole.sou_sa_code COLLATE \"C\"\n Sort Method: quicksort Memory: 2317kB\n -> Hash Left Join (cost=5.39..558.63 rows=24768 width=18) (actual time=0.036..5.106 rows=24768 loops=1)\n Hash Cond: ((sa_outsole.sou_mat_code)::text = outsole_target.ou_mat_code)\n -> Seq Scan on sa_outsole (cost=0.00..430.68 rows=24768 width=16) (actual time=0.008..1.005 rows=24768 loops=1)\n -> Hash (cost=5.03..5.03 rows=29 width=6) (actual time=0.024..0.024 rows=29 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 10kB\n -> Seq Scan on outsole_target (cost=0.00..5.03 rows=29 width=6) (actual time=0.007..0.018 rows=29 loops=1)\n Filter: (id_ou <= 29)\n Rows Removed by Filter: 213\n -> Nested Loop (cost=707.35..1328.37 rows=1 width=104) (actual time=139.036..13395.820 rows=8548 loops=1)\n Join Filter: ((qli.curr_season = qin.curr_season) AND ((qli.curr_code)::text = (qin.curr_code)::text))\n Rows Removed by Join Filter: 88552397\n -> Hash Join (cost=707.35..1016.45 rows=1 width=216) (actual time=127.374..168.249 rows=8685 loops=1)\n Hash Cond: ((qou.curr_season = qli.curr_season) AND ((qou.curr_code)::text = (qli.curr_code)::text))\n -> CTE Scan on qou (cost=0.00..294.22 rows=1189 width=72) (actual time=18.165..54.968 rows=10275 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 11))\n Rows Removed by Filter: 424\n -> Hash (cost=706.86..706.86 rows=33 width=144) (actual time=109.205..109.207 rows=9007 loops=1)\n Buckets: 16384 (originally 1024) Batches: 1 (originally 1) Memory Usage: 1369kB\n -> Merge Join (cost=689.20..706.86 rows=33 width=144) (actual time=104.785..107.748 rows=9007 loops=1)\n Merge Cond: ((qup.curr_season = qli.curr_season) AND ((qup.curr_code)::text = 
(qli.curr_code)::text))\n -> Sort (cost=342.09..344.96 rows=1147 width=72) (actual time=72.320..72.559 rows=9320 loops=1)\n Sort Key: qup.curr_season, qup.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1357kB\n -> CTE Scan on qup (cost=0.00..283.80 rows=1147 width=72) (actual time=35.401..70.834 rows=9320 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 21))\n Rows Removed by Filter: 1415\n -> Sort (cost=347.12..350.02 rows=1163 width=72) (actual time=32.461..32.719 rows=10289 loops=1)\n Sort Key: qli.curr_season, qli.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 1269kB\n -> CTE Scan on qli (cost=0.00..287.90 rows=1163 width=72) (actual time=9.543..30.696 rows=10289 loops=1)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 180\n -> CTE Scan on qin (cost=0.00..293.65 rows=1186 width=72) (actual time=0.001..1.159 rows=10197 loops=8685)\n Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n Rows Removed by Filter: 481\n -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104) (actual time=4.606..6.733 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\n -> Merge Left Join (cost=1958.66..2135.28 rows=5733 width=136) (actual time=3.479..3.930 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\n -> Merge Left Join (cost=1293.25..1388.21 rows=5733 width=104) (actual time=2.368..2.610 rows=1415 loops=1)\n Merge Cond: ((qup_1.curr_season = qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\n -> Sort (cost=641.68..656.02 rows=5733 width=72) (actual time=1.296..1.335 rows=1415 loops=1)\n Sort Key: qup_1.curr_season, qup_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 204kB\n -> CTE Scan on qup qup_1 (cost=0.00..283.80 rows=5733 width=72) (actual time=0.010..1.119 rows=1415 loops=1)\n Filter: 
((ibitmask < 0) OR (cardinality(mat_arr) > 21))\n Rows Removed by Filter: 9320\n -> Sort (cost=651.57..666.11 rows=5816 width=72) (actual time=1.069..1.075 rows=180 loops=1)\n Sort Key: qli_1.curr_season, qli_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 41kB\n -> CTE Scan on qli qli_1 (cost=0.00..287.90 rows=5816 width=72) (actual time=0.057..1.026 rows=180 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10289\n -> Sort (cost=665.41..680.24 rows=5932 width=72) (actual time=1.110..1.124 rows=481 loops=1)\n Sort Key: qin_1.curr_season, qin_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qin qin_1 (cost=0.00..293.65 rows=5932 width=72) (actual time=0.016..1.046 rows=481 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 8))\n Rows Removed by Filter: 10197\n -> Sort (cost=666.83..681.69 rows=5944 width=72) (actual time=1.119..1.128 rows=417 loops=1)\n Sort Key: qou_1.curr_season, qou_1.curr_code COLLATE \"C\"\n Sort Method: quicksort Memory: 68kB\n -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944 width=72) (actual time=0.029..1.056 rows=424 loops=1)\n Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 11))\n Rows Removed by Filter: 10275\n Planning Time: 1.746 ms\n Execution Time: 13405.503 ms\n(116 Zeilen)\n\nThis case really brought me to detect the problem!\n\nThe original query and data are not shown here, but the principle should be clear from the execution plans.\n\nI think the planner shouldn't change the row estimations on further steps after left joins at all, and be a bit more conservative on inner joins.\nThis may be related to the fact that this case has 2 join-conditions (xx_season and xx_code).\n\nThanks for looking\n\nHans Buschmann\n\n\n\n\n\n!! External Email: This email originated from outside of the organization. Do not click links or open attachments unless you recognize the sender.",
"msg_date": "Mon, 14 Aug 2023 11:12:07 +0000",
"msg_from": "Jian Guo <gjian@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more than factor 500"
},
{
"msg_contents": "Hi,\n\nI haven't looked at the patch, but please add the patch to the next\ncommit fest (2023-09), so that we don't lose track of it.\n\nSee https://commitfest.postgresql.org\n\n\nregards\n\nTomas\n\nOn 8/14/23 13:12, Jian Guo wrote:\n> Hi hackers,\n> \n> I have written a patch to add stats info for Vars in CTEs. With this\n> patch, the join size estimation on the upper of CTE scans became more\n> accurate.\n> \n> In the function |selfuncs.c:eqjoinsel| it uses the number of the\n> distinct values of the two join variables to estimate join size, and in\n> the function |selfuncs.c:get_variable_numdistinct| return a default\n> value |DEFAULT_NUM_DISTINCT| (200 in Postgres and 1000 in Greenplum),\n> with the default value, you can never expect a good plan.\n> \n> Thanks if anyone could give a review.\n> \n> Regards,\n> Jian\n> \n> ------------------------------------------------------------------------\n> *From:* Hans Buschmann <buschmann@nidsa.net>\n> *Sent:* Wednesday, February 8, 2023 21:55\n> *To:* pgsql-hackers@lists.postgresql.org\n> <pgsql-hackers@lists.postgresql.org>\n> *Subject:* Wrong rows estimations with joins of CTEs slows queries by\n> more than factor 500\n> \n> \t\n> !! External Email\n> \n> During data refactoring of our Application I encountered $subject when\n> joining 4 CTEs with left join or inner join.\n> \n> \n> 1. Background\n> \n> PG 15.1 on Windows x64 (OS seems no to have no meening here)\n> \n> \n> I try to collect data from 4 (analyzed) tables (up,li,in,ou) by grouping\n> certain data (4 CTEs qup,qli,qin,qou)\n> \n> The grouping of the data in the CTEs gives estimated row counts of about\n> 1000 (1 tenth of the real value) This is OK for estimation.\n> \n> \n> These 4 CTEs are then used to combine the data by joining them.\n> \n> \n> 2. 
Problem\n> \n> The 4 CTEs are joined by left joins as shown below:\n> \n> \n> from qup\n> left join qli on (qli.curr_season=qup.curr_season and\n> qli.curr_code=qup.curr_code and qli.ibitmask>0 and\n> cardinality(qli.mat_arr) <=8)\n> left join qin on (qin.curr_season=qup.curr_season and\n> qin.curr_code=qup.curr_code and qin.ibitmask>0 and\n> cardinality(qin.mat_arr) <=8)\n> left join qou on (qou.curr_season=qup.curr_season and\n> qou.curr_code=qup.curr_code and qou.ibitmask>0 and\n> cardinality(qou.mat_arr) <=11)\n> where qup.ibitmask>0 and cardinality(qup.mat_arr) <=21\n> \n> The plan first retrieves qup and qli, taking the estimated row counts of\n> 1163 and 1147 respectively\n> \n> \n> BUT the result is then hashed and the row count is estimated as 33!\n> \n> \n> In a Left join the row count stays always the same as the one of left\n> table (here qup with 1163 rows)\n> \n> \n> The same algorithm which reduces the row estimation from 1163 to 33 is\n> used in the next step to give an estimation of 1 row.\n> \n> This is totally wrong.\n> \n> \n> Here is the execution plan of the query:\n> \n> (search the plan for rows=33)\n> \n> \n> \n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------\n> Append (cost=13673.81..17463.30 rows=5734 width=104) (actual\n> time=168.307..222.670 rows=9963 loops=1)\n> CTE qup\n> -> GroupAggregate (cost=5231.22..6303.78 rows=10320 width=80)\n> (actual time=35.466..68.131 rows=10735 loops=1)\n> Group Key: sa_upper.sup_season, sa_upper.sup_sa_code\n> -> Sort (cost=5231.22..5358.64 rows=50969 width=18) (actual\n> time=35.454..36.819 rows=50969 loops=1)\n> Sort Key: sa_upper.sup_season, sa_upper.sup_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 4722kB\n> -> Hash Left Join (cost=41.71..1246.13 rows=50969\n> width=18) (actual time=0.148..10.687 rows=50969 loops=1)\n> Hash Cond: ((sa_upper.sup_mat_code)::text =\n> 
upper_target.up_mat_code)\n> -> Seq Scan on sa_upper (cost=0.00..884.69\n> rows=50969 width=16) (actual time=0.005..1.972 rows=50969 loops=1)\n> -> Hash (cost=35.53..35.53 rows=495 width=6)\n> (actual time=0.140..0.140 rows=495 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 27kB\n> -> Seq Scan on upper_target \n> (cost=0.00..35.53 rows=495 width=6) (actual time=0.007..0.103 rows=495\n> loops=1)\n> Filter: (id_up <= 495)\n> Rows Removed by Filter: 1467\n> CTE qli\n> -> GroupAggregate (cost=1097.31..1486.56 rows=10469 width=80)\n> (actual time=9.446..27.388 rows=10469 loops=1)\n> Group Key: sa_lining.sli_season, sa_lining.sli_sa_code\n> -> Sort (cost=1097.31..1126.74 rows=11774 width=18) (actual\n> time=9.440..9.811 rows=11774 loops=1)\n> Sort Key: sa_lining.sli_season, sa_lining.sli_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 1120kB\n> -> Hash Left Join (cost=7.34..301.19 rows=11774\n> width=18) (actual time=0.045..2.438 rows=11774 loops=1)\n> Hash Cond: ((sa_lining.sli_mat_code)::text =\n> lining_target.li_mat_code)\n> -> Seq Scan on sa_lining (cost=0.00..204.74\n> rows=11774 width=16) (actual time=0.008..0.470 rows=11774 loops=1)\n> -> Hash (cost=5.86..5.86 rows=118 width=6)\n> (actual time=0.034..0.034 rows=119 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 13kB\n> -> Seq Scan on lining_target \n> (cost=0.00..5.86 rows=118 width=6) (actual time=0.008..0.024 rows=119\n> loops=1)\n> Filter: (id_li <= 119)\n> Rows Removed by Filter: 190\n> CTE qin\n> -> GroupAggregate (cost=1427.34..1880.73 rows=10678 width=80)\n> (actual time=11.424..31.508 rows=10678 loops=1)\n> Group Key: sa_insole.sin_season, sa_insole.sin_sa_code\n> -> Sort (cost=1427.34..1465.41 rows=15230 width=18) (actual\n> time=11.416..11.908 rows=15230 loops=1)\n> Sort Key: sa_insole.sin_season, sa_insole.sin_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 1336kB\n> -> Hash Left Join (cost=10.49..369.26 rows=15230\n> width=18) (actual time=0.051..3.108 rows=15230 
loops=1)\n> Hash Cond: ((sa_insole.sin_mat_code)::text =\n> insole_target.in_mat_code)\n> -> Seq Scan on sa_insole (cost=0.00..264.30\n> rows=15230 width=16) (actual time=0.006..0.606 rows=15230 loops=1)\n> -> Hash (cost=9.01..9.01 rows=118 width=6)\n> (actual time=0.042..0.043 rows=119 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 13kB\n> -> Seq Scan on insole_target \n> (cost=0.00..9.01 rows=118 width=6) (actual time=0.008..0.032 rows=119\n> loops=1)\n> Filter: (id_in <= 119)\n> Rows Removed by Filter: 362\n> CTE qou\n> -> GroupAggregate (cost=2366.22..2986.89 rows=10699 width=80)\n> (actual time=18.198..41.812 rows=10699 loops=1)\n> Group Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n> -> Sort (cost=2366.22..2428.14 rows=24768 width=18) (actual\n> time=18.187..18.967 rows=24768 loops=1)\n> Sort Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 2317kB\n> -> Hash Left Join (cost=5.39..558.63 rows=24768\n> width=18) (actual time=0.046..5.132 rows=24768 loops=1)\n> Hash Cond: ((sa_outsole.sou_mat_code)::text =\n> outsole_target.ou_mat_code)\n> -> Seq Scan on sa_outsole (cost=0.00..430.68\n> rows=24768 width=16) (actual time=0.010..1.015 rows=24768 loops=1)\n> -> Hash (cost=5.03..5.03 rows=29 width=6)\n> (actual time=0.032..0.032 rows=29 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 10kB\n> -> Seq Scan on outsole_target \n> (cost=0.00..5.03 rows=29 width=6) (actual time=0.010..0.025 rows=29 loops=1)\n> Filter: (id_ou <= 29)\n> Rows Removed by Filter: 213\n> -> Hash Join (cost=1015.85..1319.50 rows=1 width=104) (actual\n> time=168.307..215.513 rows=8548 loops=1)\n> Hash Cond: ((qou.curr_season = qli.curr_season) AND\n> ((qou.curr_code)::text = (qli.curr_code)::text))\n> Join Filter: ((((qup.ibitmask | qin.ibitmask) | qli.ibitmask) |\n> qou.ibitmask) IS NOT NULL)\n> -> CTE Scan on qou (cost=0.00..294.22 rows=1189 width=76)\n> (actual time=18.200..45.188 rows=10275 loops=1)\n> Filter: ((ibitmask > 0) 
AND (cardinality(mat_arr) <= 11))\n> Rows Removed by Filter: 424\n> -> Hash (cost=1015.83..1015.83 rows=1 width=228) (actual\n> time=150.094..150.095 rows=8845 loops=1)\n> Buckets: 16384 (originally 1024) Batches: 1 (originally\n> 1) Memory Usage: 1899kB\n> -> Hash Join (cost=707.35..1015.83 rows=1 width=228)\n> (actual time=121.898..147.726 rows=8845 loops=1)\n> Hash Cond: ((qin.curr_season = qli.curr_season) AND\n> ((qin.curr_code)::text = (qli.curr_code)::text))\n> -> CTE Scan on qin (cost=0.00..293.65 rows=1186\n> width=76) (actual time=11.425..34.674 rows=10197 loops=1)\n> Filter: ((ibitmask > 0) AND\n> (cardinality(mat_arr) <= 8))\n> Rows Removed by Filter: 481\n> -> Hash (cost=706.86..706.86 rows=33 width=152)\n> (actual time=110.470..110.470 rows=9007 loops=1)\n> Buckets: 16384 (originally 1024) Batches: 1\n> (originally 1) Memory Usage: 1473kB\n> -> Merge Join (cost=689.20..706.86 rows=33\n> width=152) (actual time=105.862..108.925 rows=9007 loops=1)\n> Merge Cond: ((qup.curr_season =\n> qli.curr_season) AND ((qup.curr_code)::text = (qli.curr_code)::text))\n> -> Sort (cost=342.09..344.96\n> rows=1147 width=76) (actual time=73.419..73.653 rows=9320 loops=1)\n> Sort Key: qup.curr_season,\n> qup.curr_code COLLATE \"C\"\n> Sort Method: quicksort Memory:\n> 1391kB\n> -> CTE Scan on qup \n> (cost=0.00..283.80 rows=1147 width=76) (actual time=35.467..71.904\n> rows=9320 loops=1)\n> Filter: ((ibitmask > 0) AND\n> (cardinality(mat_arr) <= 21))\n> Rows Removed by Filter: 1415\n> -> Sort (cost=347.12..350.02\n> rows=1163 width=76) (actual time=32.440..32.697 rows=10289 loops=1)\n> Sort Key: qli.curr_season,\n> qli.curr_code COLLATE \"C\"\n> Sort Method: quicksort Memory:\n> 1349kB\n> -> CTE Scan on qli \n> (cost=0.00..287.90 rows=1163 width=76) (actual time=9.447..30.666\n> rows=10289 loops=1)\n> Filter: ((ibitmask > 0) AND\n> (cardinality(mat_arr) <= 8))\n> Rows Removed by Filter: 180\n> -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104)\n> (actual 
time=4.597..6.700 rows=1415 loops=1)\n> Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND\n> ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\n> -> Merge Left Join (cost=1958.66..2135.28 rows=5733\n> width=136) (actual time=3.427..3.863 rows=1415 loops=1)\n> Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND\n> ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\n> -> Merge Left Join (cost=1293.25..1388.21 rows=5733\n> width=104) (actual time=2.321..2.556 rows=1415 loops=1)\n> Merge Cond: ((qup_1.curr_season =\n> qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\n> -> Sort (cost=641.68..656.02 rows=5733 width=72)\n> (actual time=1.286..1.324 rows=1415 loops=1)\n> Sort Key: qup_1.curr_season, qup_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 204kB\n> -> CTE Scan on qup qup_1 (cost=0.00..283.80\n> rows=5733 width=72) (actual time=0.009..1.093 rows=1415 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 21))\n> Rows Removed by Filter: 9320\n> -> Sort (cost=651.57..666.11 rows=5816 width=72)\n> (actual time=1.033..1.038 rows=180 loops=1)\n> Sort Key: qli_1.curr_season, qli_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 41kB\n> -> CTE Scan on qli qli_1 (cost=0.00..287.90\n> rows=5816 width=72) (actual time=0.055..1.007 rows=180 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 8))\n> Rows Removed by Filter: 10289\n> -> Sort (cost=665.41..680.24 rows=5932 width=72)\n> (actual time=1.104..1.117 rows=481 loops=1)\n> Sort Key: qin_1.curr_season, qin_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 68kB\n> -> CTE Scan on qin qin_1 (cost=0.00..293.65\n> rows=5932 width=72) (actual time=0.016..1.038 rows=481 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 8))\n> Rows Removed by Filter: 10197\n> -> Sort (cost=666.83..681.69 rows=5944 width=72) (actual\n> time=1.163..1.174 rows=417 loops=1)\n> Sort Key: qou_1.curr_season, qou_1.curr_code 
COLLATE \"C\"\n> Sort Method: quicksort Memory: 68kB\n> -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944\n> width=72) (actual time=0.029..1.068 rows=424 loops=1)\n> Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 11))\n> Rows Removed by Filter: 10275\n> Planning Time: 2.297 ms\n> Execution Time: 224.759 ms\n> (118 Zeilen)\n> \n> 3. Slow query from wrong plan as result on similar case with inner join\n> \n> When the 3 left joins above are changed to inner joins like:\n> \n> from qup\n> join qli on (qli.curr_season=qup.curr_season and\n> qli.curr_code=qup.curr_code and qli.ibitmask>0 and\n> cardinality(qli.mat_arr) <=8)\n> join qin on (qin.curr_season=qup.curr_season and\n> qin.curr_code=qup.curr_code and qin.ibitmask>0 and\n> cardinality(qin.mat_arr) <=8)\n> join qou on (qou.curr_season=qup.curr_season and\n> qou.curr_code=qup.curr_code and qou.ibitmask>0 and\n> cardinality(qou.mat_arr) <=11)\n> where qup.ibitmask>0 and cardinality(qup.mat_arr) <=21\n> \n> The same rows estimation takes place as with the left joins, but the\n> planner now decides to use a nested loop for the last join, which\n> results in a 500fold execution time:\n> \n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Append (cost=13365.31..17472.18 rows=5734 width=104) (actual\n> time=139.037..13403.310 rows=9963 loops=1)\n> CTE qup\n> -> GroupAggregate (cost=5231.22..6303.78 rows=10320 width=80)\n> (actual time=35.399..67.102 rows=10735 loops=1)\n> Group Key: sa_upper.sup_season, sa_upper.sup_sa_code\n> -> Sort (cost=5231.22..5358.64 rows=50969 width=18) (actual\n> time=35.382..36.743 rows=50969 loops=1)\n> Sort Key: sa_upper.sup_season, sa_upper.sup_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 4722kB\n> -> Hash Left Join (cost=41.71..1246.13 rows=50969\n> width=18) (actual time=0.157..10.715 rows=50969 loops=1)\n> Hash Cond: ((sa_upper.sup_mat_code)::text =\n> 
upper_target.up_mat_code)\n> -> Seq Scan on sa_upper (cost=0.00..884.69\n> rows=50969 width=16) (actual time=0.008..2.001 rows=50969 loops=1)\n> -> Hash (cost=35.53..35.53 rows=495 width=6)\n> (actual time=0.146..0.146 rows=495 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 27kB\n> -> Seq Scan on upper_target \n> (cost=0.00..35.53 rows=495 width=6) (actual time=0.006..0.105 rows=495\n> loops=1)\n> Filter: (id_up <= 495)\n> Rows Removed by Filter: 1467\n> CTE qli\n> -> GroupAggregate (cost=1097.31..1486.56 rows=10469 width=80)\n> (actual time=9.541..27.419 rows=10469 loops=1)\n> Group Key: sa_lining.sli_season, sa_lining.sli_sa_code\n> -> Sort (cost=1097.31..1126.74 rows=11774 width=18) (actual\n> time=9.534..9.908 rows=11774 loops=1)\n> Sort Key: sa_lining.sli_season, sa_lining.sli_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 1120kB\n> -> Hash Left Join (cost=7.34..301.19 rows=11774\n> width=18) (actual time=0.049..2.451 rows=11774 loops=1)\n> Hash Cond: ((sa_lining.sli_mat_code)::text =\n> lining_target.li_mat_code)\n> -> Seq Scan on sa_lining (cost=0.00..204.74\n> rows=11774 width=16) (actual time=0.010..0.462 rows=11774 loops=1)\n> -> Hash (cost=5.86..5.86 rows=118 width=6)\n> (actual time=0.035..0.035 rows=119 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 13kB\n> -> Seq Scan on lining_target \n> (cost=0.00..5.86 rows=118 width=6) (actual time=0.008..0.025 rows=119\n> loops=1)\n> Filter: (id_li <= 119)\n> Rows Removed by Filter: 190\n> CTE qin\n> -> GroupAggregate (cost=1427.34..1880.73 rows=10678 width=80)\n> (actual time=11.649..30.910 rows=10678 loops=1)\n> Group Key: sa_insole.sin_season, sa_insole.sin_sa_code\n> -> Sort (cost=1427.34..1465.41 rows=15230 width=18) (actual\n> time=11.642..12.115 rows=15230 loops=1)\n> Sort Key: sa_insole.sin_season, sa_insole.sin_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 1336kB\n> -> Hash Left Join (cost=10.49..369.26 rows=15230\n> width=18) (actual time=0.056..3.144 rows=15230 
loops=1)\n> Hash Cond: ((sa_insole.sin_mat_code)::text =\n> insole_target.in_mat_code)\n> -> Seq Scan on sa_insole (cost=0.00..264.30\n> rows=15230 width=16) (actual time=0.008..0.594 rows=15230 loops=1)\n> -> Hash (cost=9.01..9.01 rows=118 width=6)\n> (actual time=0.045..0.046 rows=119 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 13kB\n> -> Seq Scan on insole_target \n> (cost=0.00..9.01 rows=118 width=6) (actual time=0.008..0.034 rows=119\n> loops=1)\n> Filter: (id_in <= 119)\n> Rows Removed by Filter: 362\n> CTE qou\n> -> GroupAggregate (cost=2366.22..2986.89 rows=10699 width=80)\n> (actual time=18.163..51.151 rows=10699 loops=1)\n> Group Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n> -> Sort (cost=2366.22..2428.14 rows=24768 width=18) (actual\n> time=18.150..20.000 rows=24768 loops=1)\n> Sort Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 2317kB\n> -> Hash Left Join (cost=5.39..558.63 rows=24768\n> width=18) (actual time=0.036..5.106 rows=24768 loops=1)\n> Hash Cond: ((sa_outsole.sou_mat_code)::text =\n> outsole_target.ou_mat_code)\n> -> Seq Scan on sa_outsole (cost=0.00..430.68\n> rows=24768 width=16) (actual time=0.008..1.005 rows=24768 loops=1)\n> -> Hash (cost=5.03..5.03 rows=29 width=6)\n> (actual time=0.024..0.024 rows=29 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 10kB\n> -> Seq Scan on outsole_target \n> (cost=0.00..5.03 rows=29 width=6) (actual time=0.007..0.018 rows=29 loops=1)\n> Filter: (id_ou <= 29)\n> Rows Removed by Filter: 213\n> -> Nested Loop (cost=707.35..1328.37 rows=1 width=104) (actual\n> time=139.036..13395.820 rows=8548 loops=1)\n> Join Filter: ((qli.curr_season = qin.curr_season) AND\n> ((qli.curr_code)::text = (qin.curr_code)::text))\n> Rows Removed by Join Filter: 88552397\n> -> Hash Join (cost=707.35..1016.45 rows=1 width=216) (actual\n> time=127.374..168.249 rows=8685 loops=1)\n> Hash Cond: ((qou.curr_season = qli.curr_season) AND\n> ((qou.curr_code)::text = 
(qli.curr_code)::text))\n> -> CTE Scan on qou (cost=0.00..294.22 rows=1189\n> width=72) (actual time=18.165..54.968 rows=10275 loops=1)\n> Filter: ((ibitmask > 0) AND (cardinality(mat_arr)\n> <= 11))\n> Rows Removed by Filter: 424\n> -> Hash (cost=706.86..706.86 rows=33 width=144) (actual\n> time=109.205..109.207 rows=9007 loops=1)\n> Buckets: 16384 (originally 1024) Batches: 1\n> (originally 1) Memory Usage: 1369kB\n> -> Merge Join (cost=689.20..706.86 rows=33\n> width=144) (actual time=104.785..107.748 rows=9007 loops=1)\n> Merge Cond: ((qup.curr_season =\n> qli.curr_season) AND ((qup.curr_code)::text = (qli.curr_code)::text))\n> -> Sort (cost=342.09..344.96 rows=1147\n> width=72) (actual time=72.320..72.559 rows=9320 loops=1)\n> Sort Key: qup.curr_season,\n> qup.curr_code COLLATE \"C\"\n> Sort Method: quicksort Memory: 1357kB\n> -> CTE Scan on qup (cost=0.00..283.80\n> rows=1147 width=72) (actual time=35.401..70.834 rows=9320 loops=1)\n> Filter: ((ibitmask > 0) AND\n> (cardinality(mat_arr) <= 21))\n> Rows Removed by Filter: 1415\n> -> Sort (cost=347.12..350.02 rows=1163\n> width=72) (actual time=32.461..32.719 rows=10289 loops=1)\n> Sort Key: qli.curr_season,\n> qli.curr_code COLLATE \"C\"\n> Sort Method: quicksort Memory: 1269kB\n> -> CTE Scan on qli (cost=0.00..287.90\n> rows=1163 width=72) (actual time=9.543..30.696 rows=10289 loops=1)\n> Filter: ((ibitmask > 0) AND\n> (cardinality(mat_arr) <= 8))\n> Rows Removed by Filter: 180\n> -> CTE Scan on qin (cost=0.00..293.65 rows=1186 width=72)\n> (actual time=0.001..1.159 rows=10197 loops=8685)\n> Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n> Rows Removed by Filter: 481\n> -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104)\n> (actual time=4.606..6.733 rows=1415 loops=1)\n> Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND\n> ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\n> -> Merge Left Join (cost=1958.66..2135.28 rows=5733\n> width=136) (actual time=3.479..3.930 rows=1415 
loops=1)\n> Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND\n> ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\n> -> Merge Left Join (cost=1293.25..1388.21 rows=5733\n> width=104) (actual time=2.368..2.610 rows=1415 loops=1)\n> Merge Cond: ((qup_1.curr_season =\n> qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\n> -> Sort (cost=641.68..656.02 rows=5733 width=72)\n> (actual time=1.296..1.335 rows=1415 loops=1)\n> Sort Key: qup_1.curr_season, qup_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 204kB\n> -> CTE Scan on qup qup_1 (cost=0.00..283.80\n> rows=5733 width=72) (actual time=0.010..1.119 rows=1415 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 21))\n> Rows Removed by Filter: 9320\n> -> Sort (cost=651.57..666.11 rows=5816 width=72)\n> (actual time=1.069..1.075 rows=180 loops=1)\n> Sort Key: qli_1.curr_season, qli_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 41kB\n> -> CTE Scan on qli qli_1 (cost=0.00..287.90\n> rows=5816 width=72) (actual time=0.057..1.026 rows=180 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 8))\n> Rows Removed by Filter: 10289\n> -> Sort (cost=665.41..680.24 rows=5932 width=72)\n> (actual time=1.110..1.124 rows=481 loops=1)\n> Sort Key: qin_1.curr_season, qin_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 68kB\n> -> CTE Scan on qin qin_1 (cost=0.00..293.65\n> rows=5932 width=72) (actual time=0.016..1.046 rows=481 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 8))\n> Rows Removed by Filter: 10197\n> -> Sort (cost=666.83..681.69 rows=5944 width=72) (actual\n> time=1.119..1.128 rows=417 loops=1)\n> Sort Key: qou_1.curr_season, qou_1.curr_code COLLATE \"C\"\n> Sort Method: quicksort Memory: 68kB\n> -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944\n> width=72) (actual time=0.029..1.056 rows=424 loops=1)\n> Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 11))\n> Rows Removed by Filter: 10275\n> 
Planning Time: 1.746 ms\n> Execution Time: 13405.503 ms\n> (116 Zeilen)\n> \n> This case really brought me to detect the problem!\n> \n> The original query and data are not shown here, but the principle should\n> be clear from the execution plans.\n> \n> I think the planner shouldn't change the row estimations on further\n> steps after left joins at all, and be a bit more conservative on inner\n> joins.\n> This may be related to the fact that this case has 2 join-conditions\n> (xx_season an xx_code).\n> \n> Thanks for looking\n> \n> Hans Buschmann\n> \n> \n> \n> \n> \n> \t\n> !! External Email: This email originated from outside of the\n> organization. Do not click links or open attachments unless you\n> recognize the sender.\n> \n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 14 Aug 2023 14:58:07 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "Hi hackers,\n\nI found a new approach to fix this issue, which seems better, so I would like to post another version of the patch here. The origin patch made the assumption of the values of Vars from CTE must be unique, which could be very wrong. This patch examines variables for Vars inside CTE, which avoided the bad assumption, so the results could be much more accurate.\n\nRegards,\nJian\n\n________________________________\nFrom: Tomas Vondra <tomas.vondra@enterprisedb.com>\nSent: Monday, August 14, 2023 20:58\nTo: Jian Guo <gjian@vmware.com>; Hans Buschmann <buschmann@nidsa.net>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Wrong rows estimations with joins of CTEs slows queries by more than factor 500\n\n!! External Email\n\nHi,\n\nI haven't looked at the patch, but please add the patch to the next\ncommit fest (2023-09), so that we don't lose track of it.\n\nSee https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcommitfest.postgresql.org%2F&data=05%7C01%7Cgjian%40vmware.com%7C9d40e84af2c946f3517a08db9cc61ee2%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C638276146959658928%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=EUlMgo%2BU4Oi%2BWf0cS%2FKnTmhHzZrYzu26PzfxYnZIDFs%3D&reserved=0<https://commitfest.postgresql.org/>\n\n\nregards\n\nTomas\n\nOn 8/14/23 13:12, Jian Guo wrote:\n> Hi hackers,\n>\n> I have written a patch to add stats info for Vars in CTEs. 
With this\n> patch, the join size estimation on the upper of CTE scans became more\n> accurate.\n>\n> In the function |selfuncs.c:eqjoinsel| it uses the number of the\n> distinct values of the two join variables to estimate join size, and in\n> the function |selfuncs.c:get_variable_numdistinct| return a default\n> value |DEFAULT_NUM_DISTINCT| (200 in Postgres and 1000 in Greenplum),\n> with the default value, you can never expect a good plan.\n>\n> Thanks if anyone could give a review.\n>\n> Regards,\n> Jian\n>\n> ------------------------------------------------------------------------\n> *From:* Hans Buschmann <buschmann@nidsa.net>\n> *Sent:* Wednesday, February 8, 2023 21:55\n> *To:* pgsql-hackers@lists.postgresql.org\n> <pgsql-hackers@lists.postgresql.org>\n> *Subject:* Wrong rows estimations with joins of CTEs slows queries by\n> more than factor 500\n>\n>\n> !! External Email\n>\n> During data refactoring of our Application I encountered $subject when\n> joining 4 CTEs with left join or inner join.\n>\n>\n> 1. Background\n>\n> PG 15.1 on Windows x64 (OS seems no to have no meening here)\n>\n>\n> I try to collect data from 4 (analyzed) tables (up,li,in,ou) by grouping\n> certain data (4 CTEs qup,qli,qin,qou)\n>\n> The grouping of the data in the CTEs gives estimated row counts of about\n> 1000 (1 tenth of the real value) This is OK for estimation.\n>\n>\n> These 4 CTEs are then used to combine the data by joining them.\n>\n>\n> 2. 
Problem\n>\n> The 4 CTEs are joined by left joins as shown below:\n>\n>\n> from qup\n> left join qli on (qli.curr_season=qup.curr_season and\n> qli.curr_code=qup.curr_code and qli.ibitmask>0 and\n> cardinality(qli.mat_arr) <=8)\n> left join qin on (qin.curr_season=qup.curr_season and\n> qin.curr_code=qup.curr_code and qin.ibitmask>0 and\n> cardinality(qin.mat_arr) <=8)\n> left join qou on (qou.curr_season=qup.curr_season and\n> qou.curr_code=qup.curr_code and qou.ibitmask>0 and\n> cardinality(qou.mat_arr) <=11)\n> where qup.ibitmask>0 and cardinality(qup.mat_arr) <=21\n>\n> The plan first retrieves qup and qli, taking the estimated row counts of\n> 1163 and 1147 respectively\n>\n>\n> BUT the result is then hashed and the row count is estimated as 33!\n>\n>\n> In a Left join the row count stays always the same as the one of left\n> table (here qup with 1163 rows)\n>\n>\n> The same algorithm which reduces the row estimation from 1163 to 33 is\n> used in the next step to give an estimation of 1 row.\n>\n> This is totally wrong.\n>\n>\n> Here is the execution plan of the query:\n>\n> (search the plan for rows=33)\n>\n>\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------\n> Append (cost=13673.81..17463.30 rows=5734 width=104) (actual\n> time=168.307..222.670 rows=9963 loops=1)\n> CTE qup\n> -> GroupAggregate (cost=5231.22..6303.78 rows=10320 width=80)\n> (actual time=35.466..68.131 rows=10735 loops=1)\n> Group Key: sa_upper.sup_season, sa_upper.sup_sa_code\n> -> Sort (cost=5231.22..5358.64 rows=50969 width=18) (actual\n> time=35.454..36.819 rows=50969 loops=1)\n> Sort Key: sa_upper.sup_season, sa_upper.sup_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 4722kB\n> -> Hash Left Join (cost=41.71..1246.13 rows=50969\n> width=18) (actual time=0.148..10.687 rows=50969 loops=1)\n> Hash Cond: ((sa_upper.sup_mat_code)::text =\n> 
upper_target.up_mat_code)\n> -> Seq Scan on sa_upper (cost=0.00..884.69\n> rows=50969 width=16) (actual time=0.005..1.972 rows=50969 loops=1)\n> -> Hash (cost=35.53..35.53 rows=495 width=6)\n> (actual time=0.140..0.140 rows=495 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 27kB\n> -> Seq Scan on upper_target\n> (cost=0.00..35.53 rows=495 width=6) (actual time=0.007..0.103 rows=495\n> loops=1)\n> Filter: (id_up <= 495)\n> Rows Removed by Filter: 1467\n> CTE qli\n> -> GroupAggregate (cost=1097.31..1486.56 rows=10469 width=80)\n> (actual time=9.446..27.388 rows=10469 loops=1)\n> Group Key: sa_lining.sli_season, sa_lining.sli_sa_code\n> -> Sort (cost=1097.31..1126.74 rows=11774 width=18) (actual\n> time=9.440..9.811 rows=11774 loops=1)\n> Sort Key: sa_lining.sli_season, sa_lining.sli_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 1120kB\n> -> Hash Left Join (cost=7.34..301.19 rows=11774\n> width=18) (actual time=0.045..2.438 rows=11774 loops=1)\n> Hash Cond: ((sa_lining.sli_mat_code)::text =\n> lining_target.li_mat_code)\n> -> Seq Scan on sa_lining (cost=0.00..204.74\n> rows=11774 width=16) (actual time=0.008..0.470 rows=11774 loops=1)\n> -> Hash (cost=5.86..5.86 rows=118 width=6)\n> (actual time=0.034..0.034 rows=119 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 13kB\n> -> Seq Scan on lining_target\n> (cost=0.00..5.86 rows=118 width=6) (actual time=0.008..0.024 rows=119\n> loops=1)\n> Filter: (id_li <= 119)\n> Rows Removed by Filter: 190\n> CTE qin\n> -> GroupAggregate (cost=1427.34..1880.73 rows=10678 width=80)\n> (actual time=11.424..31.508 rows=10678 loops=1)\n> Group Key: sa_insole.sin_season, sa_insole.sin_sa_code\n> -> Sort (cost=1427.34..1465.41 rows=15230 width=18) (actual\n> time=11.416..11.908 rows=15230 loops=1)\n> Sort Key: sa_insole.sin_season, sa_insole.sin_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 1336kB\n> -> Hash Left Join (cost=10.49..369.26 rows=15230\n> width=18) (actual time=0.051..3.108 rows=15230 
loops=1)\n> Hash Cond: ((sa_insole.sin_mat_code)::text =\n> insole_target.in_mat_code)\n> -> Seq Scan on sa_insole (cost=0.00..264.30\n> rows=15230 width=16) (actual time=0.006..0.606 rows=15230 loops=1)\n> -> Hash (cost=9.01..9.01 rows=118 width=6)\n> (actual time=0.042..0.043 rows=119 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 13kB\n> -> Seq Scan on insole_target\n> (cost=0.00..9.01 rows=118 width=6) (actual time=0.008..0.032 rows=119\n> loops=1)\n> Filter: (id_in <= 119)\n> Rows Removed by Filter: 362\n> CTE qou\n> -> GroupAggregate (cost=2366.22..2986.89 rows=10699 width=80)\n> (actual time=18.198..41.812 rows=10699 loops=1)\n> Group Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n> -> Sort (cost=2366.22..2428.14 rows=24768 width=18) (actual\n> time=18.187..18.967 rows=24768 loops=1)\n> Sort Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 2317kB\n> -> Hash Left Join (cost=5.39..558.63 rows=24768\n> width=18) (actual time=0.046..5.132 rows=24768 loops=1)\n> Hash Cond: ((sa_outsole.sou_mat_code)::text =\n> outsole_target.ou_mat_code)\n> -> Seq Scan on sa_outsole (cost=0.00..430.68\n> rows=24768 width=16) (actual time=0.010..1.015 rows=24768 loops=1)\n> -> Hash (cost=5.03..5.03 rows=29 width=6)\n> (actual time=0.032..0.032 rows=29 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 10kB\n> -> Seq Scan on outsole_target\n> (cost=0.00..5.03 rows=29 width=6) (actual time=0.010..0.025 rows=29 loops=1)\n> Filter: (id_ou <= 29)\n> Rows Removed by Filter: 213\n> -> Hash Join (cost=1015.85..1319.50 rows=1 width=104) (actual\n> time=168.307..215.513 rows=8548 loops=1)\n> Hash Cond: ((qou.curr_season = qli.curr_season) AND\n> ((qou.curr_code)::text = (qli.curr_code)::text))\n> Join Filter: ((((qup.ibitmask | qin.ibitmask) | qli.ibitmask) |\n> qou.ibitmask) IS NOT NULL)\n> -> CTE Scan on qou (cost=0.00..294.22 rows=1189 width=76)\n> (actual time=18.200..45.188 rows=10275 loops=1)\n> Filter: ((ibitmask > 0) AND 
(cardinality(mat_arr) <= 11))\n> Rows Removed by Filter: 424\n> -> Hash (cost=1015.83..1015.83 rows=1 width=228) (actual\n> time=150.094..150.095 rows=8845 loops=1)\n> Buckets: 16384 (originally 1024) Batches: 1 (originally\n> 1) Memory Usage: 1899kB\n> -> Hash Join (cost=707.35..1015.83 rows=1 width=228)\n> (actual time=121.898..147.726 rows=8845 loops=1)\n> Hash Cond: ((qin.curr_season = qli.curr_season) AND\n> ((qin.curr_code)::text = (qli.curr_code)::text))\n> -> CTE Scan on qin (cost=0.00..293.65 rows=1186\n> width=76) (actual time=11.425..34.674 rows=10197 loops=1)\n> Filter: ((ibitmask > 0) AND\n> (cardinality(mat_arr) <= 8))\n> Rows Removed by Filter: 481\n> -> Hash (cost=706.86..706.86 rows=33 width=152)\n> (actual time=110.470..110.470 rows=9007 loops=1)\n> Buckets: 16384 (originally 1024) Batches: 1\n> (originally 1) Memory Usage: 1473kB\n> -> Merge Join (cost=689.20..706.86 rows=33\n> width=152) (actual time=105.862..108.925 rows=9007 loops=1)\n> Merge Cond: ((qup.curr_season =\n> qli.curr_season) AND ((qup.curr_code)::text = (qli.curr_code)::text))\n> -> Sort (cost=342.09..344.96\n> rows=1147 width=76) (actual time=73.419..73.653 rows=9320 loops=1)\n> Sort Key: qup.curr_season,\n> qup.curr_code COLLATE \"C\"\n> Sort Method: quicksort Memory:\n> 1391kB\n> -> CTE Scan on qup\n> (cost=0.00..283.80 rows=1147 width=76) (actual time=35.467..71.904\n> rows=9320 loops=1)\n> Filter: ((ibitmask > 0) AND\n> (cardinality(mat_arr) <= 21))\n> Rows Removed by Filter: 1415\n> -> Sort (cost=347.12..350.02\n> rows=1163 width=76) (actual time=32.440..32.697 rows=10289 loops=1)\n> Sort Key: qli.curr_season,\n> qli.curr_code COLLATE \"C\"\n> Sort Method: quicksort Memory:\n> 1349kB\n> -> CTE Scan on qli\n> (cost=0.00..287.90 rows=1163 width=76) (actual time=9.447..30.666\n> rows=10289 loops=1)\n> Filter: ((ibitmask > 0) AND\n> (cardinality(mat_arr) <= 8))\n> Rows Removed by Filter: 180\n> -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104)\n> (actual 
time=4.597..6.700 rows=1415 loops=1)\n> Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND\n> ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\n> -> Merge Left Join (cost=1958.66..2135.28 rows=5733\n> width=136) (actual time=3.427..3.863 rows=1415 loops=1)\n> Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND\n> ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\n> -> Merge Left Join (cost=1293.25..1388.21 rows=5733\n> width=104) (actual time=2.321..2.556 rows=1415 loops=1)\n> Merge Cond: ((qup_1.curr_season =\n> qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\n> -> Sort (cost=641.68..656.02 rows=5733 width=72)\n> (actual time=1.286..1.324 rows=1415 loops=1)\n> Sort Key: qup_1.curr_season, qup_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 204kB\n> -> CTE Scan on qup qup_1 (cost=0.00..283.80\n> rows=5733 width=72) (actual time=0.009..1.093 rows=1415 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 21))\n> Rows Removed by Filter: 9320\n> -> Sort (cost=651.57..666.11 rows=5816 width=72)\n> (actual time=1.033..1.038 rows=180 loops=1)\n> Sort Key: qli_1.curr_season, qli_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 41kB\n> -> CTE Scan on qli qli_1 (cost=0.00..287.90\n> rows=5816 width=72) (actual time=0.055..1.007 rows=180 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 8))\n> Rows Removed by Filter: 10289\n> -> Sort (cost=665.41..680.24 rows=5932 width=72)\n> (actual time=1.104..1.117 rows=481 loops=1)\n> Sort Key: qin_1.curr_season, qin_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 68kB\n> -> CTE Scan on qin qin_1 (cost=0.00..293.65\n> rows=5932 width=72) (actual time=0.016..1.038 rows=481 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 8))\n> Rows Removed by Filter: 10197\n> -> Sort (cost=666.83..681.69 rows=5944 width=72) (actual\n> time=1.163..1.174 rows=417 loops=1)\n> Sort Key: qou_1.curr_season, qou_1.curr_code 
COLLATE \"C\"\n> Sort Method: quicksort Memory: 68kB\n> -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944\n> width=72) (actual time=0.029..1.068 rows=424 loops=1)\n> Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 11))\n> Rows Removed by Filter: 10275\n> Planning Time: 2.297 ms\n> Execution Time: 224.759 ms\n> (118 Zeilen)\n>\n> 3. Slow query from wrong plan as result on similar case with inner join\n>\n> When the 3 left joins above are changed to inner joins like:\n>\n> from qup\n> join qli on (qli.curr_season=qup.curr_season and\n> qli.curr_code=qup.curr_code and qli.ibitmask>0 and\n> cardinality(qli.mat_arr) <=8)\n> join qin on (qin.curr_season=qup.curr_season and\n> qin.curr_code=qup.curr_code and qin.ibitmask>0 and\n> cardinality(qin.mat_arr) <=8)\n> join qou on (qou.curr_season=qup.curr_season and\n> qou.curr_code=qup.curr_code and qou.ibitmask>0 and\n> cardinality(qou.mat_arr) <=11)\n> where qup.ibitmask>0 and cardinality(qup.mat_arr) <=21\n>\n> The same rows estimation takes place as with the left joins, but the\n> planner now decides to use a nested loop for the last join, which\n> results in a 500fold execution time:\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Append (cost=13365.31..17472.18 rows=5734 width=104) (actual\n> time=139.037..13403.310 rows=9963 loops=1)\n> CTE qup\n> -> GroupAggregate (cost=5231.22..6303.78 rows=10320 width=80)\n> (actual time=35.399..67.102 rows=10735 loops=1)\n> Group Key: sa_upper.sup_season, sa_upper.sup_sa_code\n> -> Sort (cost=5231.22..5358.64 rows=50969 width=18) (actual\n> time=35.382..36.743 rows=50969 loops=1)\n> Sort Key: sa_upper.sup_season, sa_upper.sup_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 4722kB\n> -> Hash Left Join (cost=41.71..1246.13 rows=50969\n> width=18) (actual time=0.157..10.715 rows=50969 loops=1)\n> Hash Cond: ((sa_upper.sup_mat_code)::text =\n> 
upper_target.up_mat_code)\n> -> Seq Scan on sa_upper (cost=0.00..884.69\n> rows=50969 width=16) (actual time=0.008..2.001 rows=50969 loops=1)\n> -> Hash (cost=35.53..35.53 rows=495 width=6)\n> (actual time=0.146..0.146 rows=495 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 27kB\n> -> Seq Scan on upper_target\n> (cost=0.00..35.53 rows=495 width=6) (actual time=0.006..0.105 rows=495\n> loops=1)\n> Filter: (id_up <= 495)\n> Rows Removed by Filter: 1467\n> CTE qli\n> -> GroupAggregate (cost=1097.31..1486.56 rows=10469 width=80)\n> (actual time=9.541..27.419 rows=10469 loops=1)\n> Group Key: sa_lining.sli_season, sa_lining.sli_sa_code\n> -> Sort (cost=1097.31..1126.74 rows=11774 width=18) (actual\n> time=9.534..9.908 rows=11774 loops=1)\n> Sort Key: sa_lining.sli_season, sa_lining.sli_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 1120kB\n> -> Hash Left Join (cost=7.34..301.19 rows=11774\n> width=18) (actual time=0.049..2.451 rows=11774 loops=1)\n> Hash Cond: ((sa_lining.sli_mat_code)::text =\n> lining_target.li_mat_code)\n> -> Seq Scan on sa_lining (cost=0.00..204.74\n> rows=11774 width=16) (actual time=0.010..0.462 rows=11774 loops=1)\n> -> Hash (cost=5.86..5.86 rows=118 width=6)\n> (actual time=0.035..0.035 rows=119 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 13kB\n> -> Seq Scan on lining_target\n> (cost=0.00..5.86 rows=118 width=6) (actual time=0.008..0.025 rows=119\n> loops=1)\n> Filter: (id_li <= 119)\n> Rows Removed by Filter: 190\n> CTE qin\n> -> GroupAggregate (cost=1427.34..1880.73 rows=10678 width=80)\n> (actual time=11.649..30.910 rows=10678 loops=1)\n> Group Key: sa_insole.sin_season, sa_insole.sin_sa_code\n> -> Sort (cost=1427.34..1465.41 rows=15230 width=18) (actual\n> time=11.642..12.115 rows=15230 loops=1)\n> Sort Key: sa_insole.sin_season, sa_insole.sin_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 1336kB\n> -> Hash Left Join (cost=10.49..369.26 rows=15230\n> width=18) (actual time=0.056..3.144 rows=15230 
loops=1)\n> Hash Cond: ((sa_insole.sin_mat_code)::text =\n> insole_target.in_mat_code)\n> -> Seq Scan on sa_insole (cost=0.00..264.30\n> rows=15230 width=16) (actual time=0.008..0.594 rows=15230 loops=1)\n> -> Hash (cost=9.01..9.01 rows=118 width=6)\n> (actual time=0.045..0.046 rows=119 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 13kB\n> -> Seq Scan on insole_target\n> (cost=0.00..9.01 rows=118 width=6) (actual time=0.008..0.034 rows=119\n> loops=1)\n> Filter: (id_in <= 119)\n> Rows Removed by Filter: 362\n> CTE qou\n> -> GroupAggregate (cost=2366.22..2986.89 rows=10699 width=80)\n> (actual time=18.163..51.151 rows=10699 loops=1)\n> Group Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n> -> Sort (cost=2366.22..2428.14 rows=24768 width=18) (actual\n> time=18.150..20.000 rows=24768 loops=1)\n> Sort Key: sa_outsole.sou_season, sa_outsole.sou_sa_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 2317kB\n> -> Hash Left Join (cost=5.39..558.63 rows=24768\n> width=18) (actual time=0.036..5.106 rows=24768 loops=1)\n> Hash Cond: ((sa_outsole.sou_mat_code)::text =\n> outsole_target.ou_mat_code)\n> -> Seq Scan on sa_outsole (cost=0.00..430.68\n> rows=24768 width=16) (actual time=0.008..1.005 rows=24768 loops=1)\n> -> Hash (cost=5.03..5.03 rows=29 width=6)\n> (actual time=0.024..0.024 rows=29 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 10kB\n> -> Seq Scan on outsole_target\n> (cost=0.00..5.03 rows=29 width=6) (actual time=0.007..0.018 rows=29 loops=1)\n> Filter: (id_ou <= 29)\n> Rows Removed by Filter: 213\n> -> Nested Loop (cost=707.35..1328.37 rows=1 width=104) (actual\n> time=139.036..13395.820 rows=8548 loops=1)\n> Join Filter: ((qli.curr_season = qin.curr_season) AND\n> ((qli.curr_code)::text = (qin.curr_code)::text))\n> Rows Removed by Join Filter: 88552397\n> -> Hash Join (cost=707.35..1016.45 rows=1 width=216) (actual\n> time=127.374..168.249 rows=8685 loops=1)\n> Hash Cond: ((qou.curr_season = qli.curr_season) AND\n> ((qou.curr_code)::text = 
(qli.curr_code)::text))\n> -> CTE Scan on qou (cost=0.00..294.22 rows=1189\n> width=72) (actual time=18.165..54.968 rows=10275 loops=1)\n> Filter: ((ibitmask > 0) AND (cardinality(mat_arr)\n> <= 11))\n> Rows Removed by Filter: 424\n> -> Hash (cost=706.86..706.86 rows=33 width=144) (actual\n> time=109.205..109.207 rows=9007 loops=1)\n> Buckets: 16384 (originally 1024) Batches: 1\n> (originally 1) Memory Usage: 1369kB\n> -> Merge Join (cost=689.20..706.86 rows=33\n> width=144) (actual time=104.785..107.748 rows=9007 loops=1)\n> Merge Cond: ((qup.curr_season =\n> qli.curr_season) AND ((qup.curr_code)::text = (qli.curr_code)::text))\n> -> Sort (cost=342.09..344.96 rows=1147\n> width=72) (actual time=72.320..72.559 rows=9320 loops=1)\n> Sort Key: qup.curr_season,\n> qup.curr_code COLLATE \"C\"\n> Sort Method: quicksort Memory: 1357kB\n> -> CTE Scan on qup (cost=0.00..283.80\n> rows=1147 width=72) (actual time=35.401..70.834 rows=9320 loops=1)\n> Filter: ((ibitmask > 0) AND\n> (cardinality(mat_arr) <= 21))\n> Rows Removed by Filter: 1415\n> -> Sort (cost=347.12..350.02 rows=1163\n> width=72) (actual time=32.461..32.719 rows=10289 loops=1)\n> Sort Key: qli.curr_season,\n> qli.curr_code COLLATE \"C\"\n> Sort Method: quicksort Memory: 1269kB\n> -> CTE Scan on qli (cost=0.00..287.90\n> rows=1163 width=72) (actual time=9.543..30.696 rows=10289 loops=1)\n> Filter: ((ibitmask > 0) AND\n> (cardinality(mat_arr) <= 8))\n> Rows Removed by Filter: 180\n> -> CTE Scan on qin (cost=0.00..293.65 rows=1186 width=72)\n> (actual time=0.001..1.159 rows=10197 loops=8685)\n> Filter: ((ibitmask > 0) AND (cardinality(mat_arr) <= 8))\n> Rows Removed by Filter: 481\n> -> Merge Left Join (cost=2625.49..3399.84 rows=5733 width=104)\n> (actual time=4.606..6.733 rows=1415 loops=1)\n> Merge Cond: ((qup_1.curr_season = qou_1.curr_season) AND\n> ((qup_1.curr_code)::text = (qou_1.curr_code)::text))\n> -> Merge Left Join (cost=1958.66..2135.28 rows=5733\n> width=136) (actual time=3.479..3.930 rows=1415 
loops=1)\n> Merge Cond: ((qup_1.curr_season = qin_1.curr_season) AND\n> ((qup_1.curr_code)::text = (qin_1.curr_code)::text))\n> -> Merge Left Join (cost=1293.25..1388.21 rows=5733\n> width=104) (actual time=2.368..2.610 rows=1415 loops=1)\n> Merge Cond: ((qup_1.curr_season =\n> qli_1.curr_season) AND ((qup_1.curr_code)::text = (qli_1.curr_code)::text))\n> -> Sort (cost=641.68..656.02 rows=5733 width=72)\n> (actual time=1.296..1.335 rows=1415 loops=1)\n> Sort Key: qup_1.curr_season, qup_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 204kB\n> -> CTE Scan on qup qup_1 (cost=0.00..283.80\n> rows=5733 width=72) (actual time=0.010..1.119 rows=1415 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 21))\n> Rows Removed by Filter: 9320\n> -> Sort (cost=651.57..666.11 rows=5816 width=72)\n> (actual time=1.069..1.075 rows=180 loops=1)\n> Sort Key: qli_1.curr_season, qli_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 41kB\n> -> CTE Scan on qli qli_1 (cost=0.00..287.90\n> rows=5816 width=72) (actual time=0.057..1.026 rows=180 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 8))\n> Rows Removed by Filter: 10289\n> -> Sort (cost=665.41..680.24 rows=5932 width=72)\n> (actual time=1.110..1.124 rows=481 loops=1)\n> Sort Key: qin_1.curr_season, qin_1.curr_code\n> COLLATE \"C\"\n> Sort Method: quicksort Memory: 68kB\n> -> CTE Scan on qin qin_1 (cost=0.00..293.65\n> rows=5932 width=72) (actual time=0.016..1.046 rows=481 loops=1)\n> Filter: ((ibitmask < 0) OR\n> (cardinality(mat_arr) > 8))\n> Rows Removed by Filter: 10197\n> -> Sort (cost=666.83..681.69 rows=5944 width=72) (actual\n> time=1.119..1.128 rows=417 loops=1)\n> Sort Key: qou_1.curr_season, qou_1.curr_code COLLATE \"C\"\n> Sort Method: quicksort Memory: 68kB\n> -> CTE Scan on qou qou_1 (cost=0.00..294.22 rows=5944\n> width=72) (actual time=0.029..1.056 rows=424 loops=1)\n> Filter: ((ibitmask < 0) OR (cardinality(mat_arr) > 11))\n> Rows Removed by Filter: 10275\n> 
Planning Time: 1.746 ms\n> Execution Time: 13405.503 ms\n> (116 Zeilen)\n>\n> This case really brought me to detect the problem!\n>\n> The original query and data are not shown here, but the principle should\n> be clear from the execution plans.\n>\n> I think the planner shouldn't change the row estimations on further\n> steps after left joins at all, and be a bit more conservative on inner\n> joins.\n> This may be related to the fact that this case has 2 join-conditions\n> (xx_season an xx_code).\n>\n> Thanks for looking\n>\n> Hans Buschmann\n>\n>\n>\n>\n>\n>\n> !! External Email: This email originated from outside of the\n> organization. Do not click links or open attachments unless you\n> recognize the sender.\n>\n\n--\nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n!! External Email: This email originated from outside of the organization. Do not click links or open attachments unless you recognize the sender.",
"msg_date": "Mon, 21 Aug 2023 08:16:12 +0000",
"msg_from": "Jian Guo <gjian@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "On 8/21/23 10:16, Jian Guo wrote:\n> Hi hackers,\n> \n> I found a new approach to fix this issue, which seems better, so I would\n> like to post another version of the patch here. The origin patch made\n> the assumption of the values of Vars from CTE must be unique, which\n> could be very wrong. This patch examines variables for Vars inside CTE,\n> which avoided the bad assumption, so the results could be much more\n> accurate.\n> \n\nNo problem with posting a reworked patch to the same thread, but I'll\nrepeat my suggestion to register this in the CF app [1]. The benefit is\nthat people are more likely to notice the patch and also cfbot [2] will\nrun regression tests.\n\n[1] https://commitfest.postgresql.org\n[2] http://cfbot.cputube.org/\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 21 Aug 2023 12:56:18 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "Sure, Tomas.\n\nHere is the PG Commitfest link: https://commitfest.postgresql.org/44/4510/\n________________________________\nFrom: Tomas Vondra <tomas.vondra@enterprisedb.com>\nSent: Monday, August 21, 2023 18:56\nTo: Jian Guo <gjian@vmware.com>; Hans Buschmann <buschmann@nidsa.net>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nCc: Zhenghua Lyu <zlyu@vmware.com>\nSubject: Re: Wrong rows estimations with joins of CTEs slows queries by more than factor 500\n\n!! External Email\n\nOn 8/21/23 10:16, Jian Guo wrote:\n> Hi hackers,\n>\n> I found a new approach to fix this issue, which seems better, so I would\n> like to post another version of the patch here. The origin patch made\n> the assumption of the values of Vars from CTE must be unique, which\n> could be very wrong. This patch examines variables for Vars inside CTE,\n> which avoided the bad assumption, so the results could be much more\n> accurate.\n>\n\nNo problem with posting a reworked patch to the same thread, but I'll\nrepeat my suggestion to register this in the CF app [1]. 
The benefit is\nthat people are more likely to notice the patch and also cfbot [2] will\nrun regression tests.\n\n[1] https://commitfest.postgresql.org/\n[2] http://cfbot.cputube.org/\n\n\n--\nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n!! External Email: This email originated from outside of the organization. Do not click links or open attachments unless you recognize the sender.",
"msg_date": "Tue, 22 Aug 2023 02:35:30 +0000",
"msg_from": "Jian Guo <gjian@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 10:35 AM Jian Guo <gjian@vmware.com> wrote:\n>\n> Sure, Tomas.\n>\n> Here is the PG Commitfest link: https://commitfest.postgresql.org/44/4510/\n> ________________________________\n\nhi.\nwandering around http://cfbot.cputube.org/\nthere is a compiler warning: https://cirrus-ci.com/task/6052087599988736\n\nI slightly edited the code to make the compiler warning go away.\n\nI am not sure if the following duplicated comment from the (rte->rtekind ==\nRTE_SUBQUERY && !rte->inh) branch is correct.\n/*\n* OK, recurse into the subquery. Note that the original setting\n* of vardata->isunique (which will surely be false) is left\n* unchanged in this situation. That's what we want, since even\n* if the underlying column is unique, the subquery may have\n* joined to other tables in a way that creates duplicates.\n*/\n\nIndex varnoSaved = var->varno;\nhere varnoSaved should be int?\n\nimage attached is the coverage report.\nif I understand the coverage report correctly,\n`\nif (rel->subroot) examine_simple_variable(rel->subroot, var, vardata);\n`\nthe above never actually executed?",
"msg_date": "Wed, 6 Sep 2023 14:00:39 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "Hi Jian He,\r\n\r\nThanks for fixing the compiler warnings; it seems the CI used an older compiler and complained:\r\n\r\n ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]\r\n\r\nBut later C standards have relaxed this requirement: ISO C99 and later allow declarations and code to be freely mixed within compound statements: https://gcc.gnu.org/onlinedocs/gcc/Mixed-Labels-and-Declarations.html\r\n\r\n________________________________\r\nFrom: jian he <jian.universality@gmail.com>\r\nSent: Wednesday, September 6, 2023 14:00\r\nTo: Jian Guo <gjian@vmware.com>\r\nCc: Tomas Vondra <tomas.vondra@enterprisedb.com>; Hans Buschmann <buschmann@nidsa.net>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\r\nSubject: Re: Wrong rows estimations with joins of CTEs slows queries by more than factor 500\r\n\r\n!! 
External Email\r\n\r\nOn Tue, Aug 22, 2023 at 10:35 AM Jian Guo <gjian@vmware.com> wrote:\r\n>\r\n> Sure, Tomas.\r\n>\r\n> Here is the PG Commitfest link: https://commitfest.postgresql.org/44/4510/\r\n> ________________________________\r\n\r\nhi.\r\nwondering around http://cfbot.cputube.org/\r\nthere is a compiler warning: https://cirrus-ci.com/task/6052087599988736\r\n\r\nI slightly edited the code, making the compiler warning out.\r\n\r\nI am not sure if the following duplicate comment from (rte->rtekind ==\r\nRTE_SUBQUERY && !rte->inh) branch is correct.\r\n/*\r\n* OK, recurse into the subquery. Note that the original setting\r\n* of vardata->isunique (which will surely be false) is left\r\n* unchanged in this situation. 
That's what we want, since even\r\n* if the underlying column is unique, the subquery may have\r\n* joined to other tables in a way that creates duplicates.\r\n*/\r\n\r\nIndex varnoSaved = var->varno;\r\nhere varnoSaved should be int?\r\n\r\nimage attached is the coverage report\r\nif I understand coverage report correctly,\r\n`\r\nif (rel->subroot) examine_simple_variable(rel->subroot, var, vardata);\r\n`\r\nthe above never actually executed?\r\n\r\n!! External Email: This email originated from outside of the organization. Do not click links or open attachments unless you recognize the sender.",
"msg_date": "Thu, 7 Sep 2023 09:26:41 +0000",
"msg_from": "Jian Guo <gjian@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "Jian Guo <gjian@vmware.com> writes:\n> I found a new approach to fix this issue, which seems better, so I would like to post another version of the patch here. The origin patch made the assumption of the values of Vars from CTE must be unique, which could be very wrong. This patch examines variables for Vars inside CTE, which avoided the bad assumption, so the results could be much more accurate.\n\nYou have the right general idea, but there is nothing about this patch\nthat's right in detail. The outer Var doesn't refer to any particular\nRTE within the subquery; it refers to a targetlist entry. You have to\ndrill down to that, see if it's a Var, and if so you can recurse into\nthe subroot with that Var. As this stands, it might accidentally get\nthe right answer for \"SELECT * FROM foo\" subqueries, but it will get\nthe wrong answer or even crash for anything that's not that.\n\nThe existing RTE_SUBQUERY stanza has most of what we need for this,\nso I experimented with extending that to also handle RTE_CTE. It\nseems to work, though I soon found out that it needed tweaking for\nthe case where the CTE is INSERT/UPDATE/DELETE RETURNING.\n\nInterestingly, this does not change any existing regression test\nresults. I'd supposed there might be at least one place with a\nvisible plan change, but nope. Running a coverage test does show\nthat the new code paths are exercised, but I wonder if we ought\nto try to devise a regression test that proves it more directly.\n\n\t\t\tregards, tom lane\n\nPS: please, please, please do not quote the entire damn thread\nwhen replying. Trim it to just a minimum amount of relevant\ntext. You think people want to read all that again?",
"msg_date": "Wed, 08 Nov 2023 17:44:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "On Thu, Nov 9, 2023 at 6:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> The existing RTE_SUBQUERY stanza has most of what we need for this,\n> so I experimented with extending that to also handle RTE_CTE. It\n> seems to work, though I soon found out that it needed tweaking for\n> the case where the CTE is INSERT/UPDATE/DELETE RETURNING.\n\n\nThe change looks good to me. To nitpick, should we modify the comment\nof examine_simple_variable to also mention 'CTEs'?\n\n * This is split out as a subroutine so that we can recurse to deal with\n- * Vars referencing subqueries.\n+ * Vars referencing subqueries or CTEs.\n\n\n> Interestingly, this does not change any existing regression test\n> results. I'd supposed there might be at least one place with a\n> visible plan change, but nope. Running a coverage test does show\n> that the new code paths are exercised, but I wonder if we ought\n> to try to devise a regression test that proves it more directly.\n\n\nI think we ought to. 
Here is one regression test that proves that this\nchange improves query plans in some cases.\n\nUnpatched:\n\nexplain (costs off)\nwith x as MATERIALIZED (select unique1 from tenk1 b)\nselect count(*) from tenk1 a where unique1 in (select * from x);\n QUERY PLAN\n------------------------------------------------------------\n Aggregate\n CTE x\n -> Index Only Scan using tenk1_unique1 on tenk1 b\n -> Nested Loop\n -> HashAggregate\n Group Key: x.unique1\n -> CTE Scan on x\n -> Index Only Scan using tenk1_unique1 on tenk1 a\n Index Cond: (unique1 = x.unique1)\n(9 rows)\n\nPatched:\n\nexplain (costs off)\nwith x as MATERIALIZED (select unique1 from tenk1 b)\nselect count(*) from tenk1 a where unique1 in (select * from x);\n QUERY PLAN\n------------------------------------------------------------\n Aggregate\n CTE x\n -> Index Only Scan using tenk1_unique1 on tenk1 b\n -> Hash Semi Join\n Hash Cond: (a.unique1 = x.unique1)\n -> Index Only Scan using tenk1_unique1 on tenk1 a\n -> Hash\n -> CTE Scan on x\n(8 rows)\n\nI think the second plan (patched) makes more sense. In the first plan\n(unpatched), the HashAggregate node actually does not reduce the\nnumber of rows because it groups by 'unique1', but planner does not know\nthat because it lacks statistics for Vars referencing the CTE.\n\nThanks\nRichard",
"msg_date": "Thu, 16 Nov 2023 17:24:55 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> I think the second plan (patched) makes more sense. In the first plan\n> (unpatched), the HashAggregate node actually does not reduce the the\n> number of rows because it groups by 'unique1', but planner does not know\n> that because it lacks statistics for Vars referencing the CTE.\n\nYeah. It's faster in reality too:\n\nregression=# explain analyze with x as MATERIALIZED (select unique1 from tenk1 b)\nselect count(*) from tenk1 a where unique1 in (select * from x);\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=692.29..692.30 rows=1 width=8) (actual time=15.186..15.188 rows=1 loops=1)\n CTE x\n -> Index Only Scan using tenk1_unique1 on tenk1 b (cost=0.29..270.29 rows=10000 width=4) (actual time=0.028..0.754 rows=10000 loops=1)\n Heap Fetches: 0\n -> Nested Loop (cost=225.28..409.50 rows=5000 width=0) (actual time=3.652..14.733 rows=10000 loops=1)\n -> HashAggregate (cost=225.00..227.00 rows=200 width=4) (actual time=3.644..4.510 rows=10000 loops=1)\n Group Key: x.unique1\n Batches: 1 Memory Usage: 929kB\n -> CTE Scan on x (cost=0.00..200.00 rows=10000 width=4) (actual time=0.030..1.932 rows=10000 loops=1)\n -> Index Only Scan using tenk1_unique1 on tenk1 a (cost=0.29..0.90 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=10000)\n Index Cond: (unique1 = x.unique1)\n Heap Fetches: 0\n Planning Time: 0.519 ms\n Execution Time: 15.479 ms\n(14 rows)\n\nvs\n\nregression=# explain analyze with x as MATERIALIZED (select unique1 from tenk1 b)\nselect count(*) from tenk1 a where unique1 in (select * from x);\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1028.07..1028.08 rows=1 width=8) (actual time=4.578..4.579 rows=1 loops=1)\n CTE x\n -> Index 
Only Scan using tenk1_unique1 on tenk1 b (cost=0.29..270.29 rows=10000 width=4) (actual time=0.011..0.751 rows=10000 loops=1)\n Heap Fetches: 0\n -> Hash Semi Join (cost=325.28..732.78 rows=10000 width=0) (actual time=2.706..4.305 rows=10000 loops=1)\n Hash Cond: (a.unique1 = x.unique1)\n -> Index Only Scan using tenk1_unique1 on tenk1 a (cost=0.29..270.29 rows=10000 width=4) (actual time=0.011..0.676 rows=10000 loops=1)\n Heap Fetches: 0\n -> Hash (cost=200.00..200.00 rows=10000 width=4) (actual time=2.655..2.655 rows=10000 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 480kB\n -> CTE Scan on x (cost=0.00..200.00 rows=10000 width=4) (actual time=0.012..1.963 rows=10000 loops=1)\n Planning Time: 0.504 ms\n Execution Time: 4.821 ms\n(13 rows)\n\nNow, what you get if you remove MATERIALIZED is faster yet:\n\nregression=# explain analyze with x as (select unique1 from tenk1 b)\nselect count(*) from tenk1 a where unique1 in (select * from x);\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=715.57..715.58 rows=1 width=8) (actual time=2.681..2.682 rows=1 loops=1)\n -> Merge Semi Join (cost=0.57..690.57 rows=10000 width=0) (actual time=0.016..2.408 rows=10000 loops=1)\n Merge Cond: (a.unique1 = b.unique1)\n -> Index Only Scan using tenk1_unique1 on tenk1 a (cost=0.29..270.29 rows=10000 width=4) (actual time=0.007..0.696 rows=10000 loops=1)\n Heap Fetches: 0\n -> Index Only Scan using tenk1_unique1 on tenk1 b (cost=0.29..270.29 rows=10000 width=4) (actual time=0.007..0.655 rows=10000 loops=1)\n Heap Fetches: 0\n Planning Time: 0.160 ms\n Execution Time: 2.718 ms\n(9 rows)\n\nI poked into that and found that the reason we don't get a mergejoin\nwith the materialized CTE is that the upper planner invocation doesn't\nknow that the CTE's output is sorted, so it thinks a separate sort\nstep would be needed.\n\nSo you could argue that there's 
more to do here, but I'm hesitant\nto go further. Part of the point of MATERIALIZED is to be an\noptimization fence, so breaking down that fence is something to be\nwary of. Maybe we shouldn't even take this patch --- but on\nbalance I think it's an OK compromise.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Nov 2023 13:16:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "On Fri, Nov 17, 2023 at 2:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> So you could argue that there's more to do here, but I'm hesitant\n> to go further. Part of the point of MATERIALIZED is to be an\n> optimization fence, so breaking down that fence is something to be\n> wary of. Maybe we shouldn't even take this patch --- but on\n> balance I think it's an OK compromise.\n\n\nAgreed. I think the patch is still valuable on its own, although it\ndoes not go down into MATERIALIZED case for further optimization. Maybe\nwe can take another query as regression test to prove its value, in\nwhich the CTE is not inlined without MATERIALIZED, such as\n\nexplain (costs off)\nwith x as (select unique1, unique2 from tenk1 b)\nselect count(*) from tenk1 a\nwhere unique1 in (select unique1 from x x1) and\n unique1 in (select unique2 from x x2);\n QUERY PLAN\n------------------------------------------------------------------\n Aggregate\n CTE x\n -> Seq Scan on tenk1 b\n -> Hash Join\n Hash Cond: (a.unique1 = x2.unique2)\n -> Nested Loop\n -> HashAggregate\n Group Key: x1.unique1\n -> CTE Scan on x x1\n -> Index Only Scan using tenk1_unique1 on tenk1 a\n Index Cond: (unique1 = x1.unique1)\n -> Hash\n -> HashAggregate\n Group Key: x2.unique2\n -> CTE Scan on x x2\n(15 rows)\n\nvs\n\nexplain (costs off)\nwith x as (select unique1, unique2 from tenk1 b)\nselect count(*) from tenk1 a\nwhere unique1 in (select unique1 from x x1) and\n unique1 in (select unique2 from x x2);\n QUERY PLAN\n------------------------------------------------------------------\n Aggregate\n CTE x\n -> Seq Scan on tenk1 b\n -> Hash Semi Join\n Hash Cond: (a.unique1 = x2.unique2)\n -> Hash Semi Join\n Hash Cond: (a.unique1 = x1.unique1)\n -> Index Only Scan using tenk1_unique1 on tenk1 a\n -> Hash\n -> CTE Scan on x x1\n -> Hash\n -> CTE Scan on x x2\n(12 rows)\n\nI believe the second plan is faster in reality too.\n\nThanks\nRichard",
"msg_date": "Fri, 17 Nov 2023 10:09:10 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Fri, Nov 17, 2023 at 2:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So you could argue that there's more to do here, but I'm hesitant\n>> to go further. Part of the point of MATERIALIZED is to be an\n>> optimization fence, so breaking down that fence is something to be\n>> wary of. Maybe we shouldn't even take this patch --- but on\n>> balance I think it's an OK compromise.\n\n> Agreed. I think the patch is still valuable on its own, although it\n> does not go down into MATERIALIZED case for further optimization.\n\nRight. My earlier response was rather rushed, so let me explain\nmy thinking a bit more.\n\nWhen I realized that the discrepancy between MATERIALIZED-and-not\nplans was due to the upper planner not seeing the pathkeys for the\nCTE scan, my first thought was to try to export those pathkeys.\nAnd my second thought was that the CTE should return multiple\npotential paths, much as we do for sub-SELECT-in-FROM subqueries,\nwith the upper planner eventually choosing one of those paths.\nBut that second idea would break down the optimization fence\nalmost completely, because the opinions of the upper planner would\ninfluence which plan we choose for the CTE query. I think we\nshouldn't go there, at least not for a CTE explicitly marked\nMATERIALIZED. (Maybe if it's not marked MATERIALIZED, but we\nchose not to flatten it for some other reason, we could think\nabout that differently? Not sure.)\n\nI think that when we say that MATERIALIZED is meant as an optimization\nfence, what we mostly mean is that the upper query shouldn't influence\nthe choice of plan for the sub-query. However, we surely allow our\nstatistics or guesses for the sub-query to subsequently influence what\nthe upper planner does. 
If that weren't true, we shouldn't even\nexpose any non-default rowcount guess to the upper planner --- but\nthat would lead to really horrid results, so we allow that information\nto percolate up from the sub-query. It seems like exposing column\nstatistics to the upper planner, as the proposed patch does, isn't\nfundamentally different from exposing rowcount estimates.\n\nThat line of argument also leads to the conclusion that it'd be\nokay to expose info about the ordering of the CTE result to the\nupper planner. This patch doesn't do that, and I'm not sufficiently\nexcited about the issue to go write some code. But if someone else\ndoes, I think we shouldn't exclude doing it on the grounds of wanting\nto preserve an optimization fence. The fence is sort of one-way\nin this line of thinking: information can propagate up to the outer\nplanner level, but not down into the CTE plan.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Nov 2023 22:38:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "On Thursday, November 16, 2023, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> That line of argument also leads to the conclusion that it'd be\n> okay to expose info about the ordering of the CTE result to the\n> upper planner. This patch doesn't do that, and I'm not sufficiently\n> excited about the issue to go write some code. But if someone else\n> does, I think we shouldn't exclude doing it on the grounds of wanting\n> to preserve an optimization fence. The fence is sort of one-way\n> in this line of thinking: information can propagate up to the outer\n> planner level, but not down into the CTE plan.\n>\n\nThis is indeed my understanding of what materialized means. Basically, the\nCTE is done first and in isolation; but any knowledge of its result shape\ncan be used when referencing it.\n\nDavid J.",
"msg_date": "Thu, 16 Nov 2023 20:45:44 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "On Thu, 2023-11-16 at 22:38 -0500, Tom Lane wrote:\n> That line of argument also leads to the conclusion that it'd be\n> okay to expose info about the ordering of the CTE result to the\n> upper planner. [...] The fence is sort of one-way\n> in this line of thinking: information can propagate up to the outer\n> planner level, but not down into the CTE plan.\n> \n> Thoughts?\n\nThat agrees with my intuition about MATERIALIZED CTEs.\nI think of them as \"first calculate the CTE, then calculate the\nrest of the query\" or an ad-hoc temporary table for the duration\nof a query. I would expect the upper planner to know estimates\nand other data about the result of the CTE.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 17 Nov 2023 04:53:31 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "On Fri, Nov 17, 2023 at 11:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> That line of argument also leads to the conclusion that it'd be\n> okay to expose info about the ordering of the CTE result to the\n> upper planner. This patch doesn't do that, and I'm not sufficiently\n> excited about the issue to go write some code. But if someone else\n> does, I think we shouldn't exclude doing it on the grounds of wanting\n> to preserve an optimization fence. The fence is sort of one-way\n> in this line of thinking: information can propagate up to the outer\n> planner level, but not down into the CTE plan.\n>\n> Thoughts?\n\n\nExactly! Thanks for the detailed explanation.\n\nThanks\nRichard",
"msg_date": "Fri, 17 Nov 2023 14:41:12 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Fri, Nov 17, 2023 at 11:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That line of argument also leads to the conclusion that it'd be\n>> okay to expose info about the ordering of the CTE result to the\n>> upper planner. This patch doesn't do that, and I'm not sufficiently\n>> excited about the issue to go write some code. But if someone else\n>> does, I think we shouldn't exclude doing it on the grounds of wanting\n>> to preserve an optimization fence. The fence is sort of one-way\n>> in this line of thinking: information can propagate up to the outer\n>> planner level, but not down into the CTE plan.\n\n> Exactly! Thanks for the detailed explanation.\n\nOK. I pushed the patch after a bit more review: we can simplify\nthings some more by using the subroot->parse querytree for all\ntests. After the previous refactoring, it wasn't buying us anything\nto do some initial tests with the raw querytree. (The original\nidea of that, I believe, was to avoid doing find_base_rel if we\ncould; but now that's not helpful.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Nov 2023 14:42:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "On Fri, Nov 17, 2023 at 11:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> That line of argument also leads to the conclusion that it'd be\n> okay to expose info about the ordering of the CTE result to the\n> upper planner. This patch doesn't do that, and I'm not sufficiently\n> excited about the issue to go write some code. But if someone else\n> does, I think we shouldn't exclude doing it on the grounds of wanting\n> to preserve an optimization fence. The fence is sort of one-way\n> in this line of thinking: information can propagate up to the outer\n> planner level, but not down into the CTE plan.\n\n\nIn the light of this conclusion, I had a go at propagating the pathkeys\nfrom CTEs up to the outer planner and came up with the attached.\n\nComments/thoughts?\n\nThanks\nRichard",
"msg_date": "Mon, 20 Nov 2023 10:42:31 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Fri, Nov 17, 2023 at 11:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That line of argument also leads to the conclusion that it'd be\n>> okay to expose info about the ordering of the CTE result to the\n>> upper planner.\n\n> In the light of this conclusion, I had a go at propagating the pathkeys\n> from CTEs up to the outer planner and came up with the attached.\n\nOh, nice! I remembered we had code already to do this for regular\nSubqueryScans, but I thought we'd need to do some refactoring to\napply it to CTEs. I think you are right though that\nconvert_subquery_pathkeys can be used as-is. Some thoughts:\n\n* Do we really need to use make_tlist_from_pathtarget? Why isn't\nthe tlist of the cteplan good enough (indeed, more so)?\n\n* I don't love having this code assume that it knows how to find\nthe Path the cteplan was made from. It'd be better to make\nSS_process_ctes save that somewhere, maybe in a list paralleling\nroot->cte_plan_ids.\n\nAlternatively: maybe it's time to do what the comments in\nSS_process_ctes vaguely speculate about, and just save the Path\nat that point, with construction of the plan left for createplan()?\nThat might be a lot of refactoring for not much gain, so not sure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Nov 2023 12:45:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 1:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> * Do we really need to use make_tlist_from_pathtarget? Why isn't\n> the tlist of the cteplan good enough (indeed, more so)?\n\n\nI think you are right. The cteplan->targetlist is built for the CTE's\nbest path by build_path_tlist(), which is almost the same as\nmake_tlist_from_pathtarget() except that it also replaces nestloop\nparams. So cteplan->targetlist is good enough here.\n\n\n> * I don't love having this code assume that it knows how to find\n> the Path the cteplan was made from. It'd be better to make\n> SS_process_ctes save that somewhere, maybe in a list paralleling\n> root->cte_plan_ids.\n\n\nFair point.\n\nI've updated the patch to v2 for the changes.\n\n\n> Alternatively: maybe it's time to do what the comments in\n> SS_process_ctes vaguely speculate about, and just save the Path\n> at that point, with construction of the plan left for createplan()?\n> That might be a lot of refactoring for not much gain, so not sure.\n\n\nI'm not sure if this is worth the effort. And it seems that we have the\nsame situation with SubLinks where we construct the plan in subselect.c\nrather than createplan.c.\n\nThanks\nRichard",
"msg_date": "Tue, 21 Nov 2023 14:18:19 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "Hello Tom and Richard,\n\n17.11.2023 22:42, Tom Lane wrote:\n> OK. I pushed the patch after a bit more review: we can simplify\n> things some more by using the subroot->parse querytree for all\n> tests. After the previous refactoring, it wasn't buying us anything\n> to do some initial tests with the raw querytree. (The original\n> idea of that, I believe, was to avoid doing find_base_rel if we\n> could; but now that's not helpful.)\n\nPlease look at the following query:\nCREATE TABLE t(i int);\nINSERT INTO t VALUES (1);\nVACUUM ANALYZE t;\n\nWITH ir AS (INSERT INTO t VALUES (2) RETURNING i)\nSELECT * FROM ir WHERE i = 2;\n\nwhich produces ERROR: no relation entry for relid 1\nstarting from f7816aec2.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sat, 6 Jan 2024 12:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> Please look at the following query:\n> CREATE TABLE t(i int);\n> INSERT INTO t VALUES (1);\n> VACUUM ANALYZE t;\n\n> WITH ir AS (INSERT INTO t VALUES (2) RETURNING i)\n> SELECT * FROM ir WHERE i = 2;\n\n> which produces ERROR: no relation entry for relid 1\n> starting from f7816aec2.\n\nThanks for the report! I guess we need something like the attached.\nI'm surprised that this hasn't been noticed before; was the case\nreally unreachable before?\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 06 Jan 2024 17:41:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "On Sun, Jan 7, 2024 at 6:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alexander Lakhin <exclusion@gmail.com> writes:\n> > Please look at the following query:\n> > CREATE TABLE t(i int);\n> > INSERT INTO t VALUES (1);\n> > VACUUM ANALYZE t;\n>\n> > WITH ir AS (INSERT INTO t VALUES (2) RETURNING i)\n> > SELECT * FROM ir WHERE i = 2;\n>\n> > which produces ERROR: no relation entry for relid 1\n> > starting from f7816aec2.\n\n\nNice catch.\n\n\n> Thanks for the report! I guess we need something like the attached.\n\n\n+1.\n\n\n> I'm surprised that this hasn't been noticed before; was the case\n> really unreachable before?\n\n\nIt seems that this case is only reachable with Vars of an INSERT target\nrelation, and it seems that there is no other way to reference such a\nVar other than using CTE.\n\nThanks\nRichard",
"msg_date": "Mon, 8 Jan 2024 19:14:11 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Sun, Jan 7, 2024 at 6:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Thanks for the report! I guess we need something like the attached.\n\n> +1.\n\nPushed, thanks for looking at it.\n\n>> I'm surprised that this hasn't been noticed before; was the case\n>> really unreachable before?\n\n> It seems that this case is only reachable with Vars of an INSERT target\n> relation, and it seems that there is no other way to reference such a\n> Var other than using CTE.\n\nI'm a little uncomfortable with that conclusion, but for the moment\nI refrained from back-patching. We can always add the patch to v16\nlater if we find it's not so unreachable. (Before v16, there was\nno find_base_rel here at all.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jan 2024 11:51:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "On Mon, 8 Jan 2024 at 22:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > On Sun, Jan 7, 2024 at 6:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Thanks for the report! I guess we need something like the attached.\n>\n> > +1.\n>\n> Pushed, thanks for looking at it.\n\nI have changed the status of the commitfest entry to \"Committed\" as I\nnoticed the patch has already been committed.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 27 Jan 2024 07:37:51 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "On Sat, Jan 27, 2024 at 10:08 AM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Mon, 8 Jan 2024 at 22:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Richard Guo <guofenglinux@gmail.com> writes:\n> > > On Sun, Jan 7, 2024 at 6:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> Thanks for the report! I guess we need something like the attached.\n> >\n> > > +1.\n> >\n> > Pushed, thanks for looking at it.\n>\n> I have changed the status of the commitfest entry to \"Committed\" as I\n> noticed the patch has already been committed.\n\n\nWell, the situation seems a little complex here. At first, this thread\nwas dedicated to discussing the 'Examine-simple-variable-for-Var-in-CTE'\npatch, which has already been pushed in [1]. Subsequently, I proposed\nanother patch 'Propagate-pathkeys-from-CTEs-up-to-the-outer-query' in\n[2], which is currently under review and is what the commitfest entry\nfor. Later on, within the same thread, another patch was posted as a\nfix to the first patch and was subsequently pushed in [3]. I believe\nthis sequence of events might have led to confusion.\n\nWhat is the usual practice in such situations? I guess I'd better to\nfork a new thread to discuss my proposed patch which is about the\n'Propagate-pathkeys-from-CTEs-up-to-the-outer-query'.\n\n[1] https://www.postgresql.org/message-id/754093.1700250120%40sss.pgh.pa.us\n[2]\nhttps://www.postgresql.org/message-id/CAMbWs49gAHeEOn0rpdUUYXryaa60KZ8JKwk1aSERttY9caCYkA%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/1941515.1704732682%40sss.pgh.pa.us\n\nThanks\nRichard",
"msg_date": "Mon, 29 Jan 2024 10:30:52 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "On Mon, 29 Jan 2024 at 08:01, Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Sat, Jan 27, 2024 at 10:08 AM vignesh C <vignesh21@gmail.com> wrote:\n>>\n>> On Mon, 8 Jan 2024 at 22:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >\n>> > Richard Guo <guofenglinux@gmail.com> writes:\n>> > > On Sun, Jan 7, 2024 at 6:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> > >> Thanks for the report! I guess we need something like the attached.\n>> >\n>> > > +1.\n>> >\n>> > Pushed, thanks for looking at it.\n>>\n>> I have changed the status of the commitfest entry to \"Committed\" as I\n>> noticed the patch has already been committed.\n>\n>\n> Well, the situation seems a little complex here. At first, this thread\n> was dedicated to discussing the 'Examine-simple-variable-for-Var-in-CTE'\n> patch, which has already been pushed in [1]. Subsequently, I proposed\n> another patch 'Propagate-pathkeys-from-CTEs-up-to-the-outer-query' in\n> [2], which is currently under review and is what the commitfest entry\n> for. Later on, within the same thread, another patch was posted as a\n> fix to the first patch and was subsequently pushed in [3]. I believe\n> this sequence of events might have led to confusion.\n>\n> What is the usual practice in such situations? I guess I'd better to\n> fork a new thread to discuss my proposed patch which is about the\n> 'Propagate-pathkeys-from-CTEs-up-to-the-outer-query'.\n\nSorry I missed to notice that there was one pending patch yet to be\ncommitted, I feel you can continue discussing here itself just to\navoid losing any historical information about the issue and the\ncontinuation of the discussion. You can add a new commitfest entry for\nthis.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 29 Jan 2024 08:49:53 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
},
{
"msg_contents": "On Mon, Jan 29, 2024 at 11:20 AM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Mon, 29 Jan 2024 at 08:01, Richard Guo <guofenglinux@gmail.com> wrote:\n> > On Sat, Jan 27, 2024 at 10:08 AM vignesh C <vignesh21@gmail.com> wrote:\n> >> I have changed the status of the commitfest entry to \"Committed\" as I\n> >> noticed the patch has already been committed.\n> >\n> > Well, the situation seems a little complex here. At first, this thread\n> > was dedicated to discussing the 'Examine-simple-variable-for-Var-in-CTE'\n> > patch, which has already been pushed in [1]. Subsequently, I proposed\n> > another patch 'Propagate-pathkeys-from-CTEs-up-to-the-outer-query' in\n> > [2], which is currently under review and is what the commitfest entry\n> > for. Later on, within the same thread, another patch was posted as a\n> > fix to the first patch and was subsequently pushed in [3]. I believe\n> > this sequence of events might have led to confusion.\n> >\n> > What is the usual practice in such situations? I guess I'd better to\n> > fork a new thread to discuss my proposed patch which is about the\n> > 'Propagate-pathkeys-from-CTEs-up-to-the-outer-query'.\n>\n> Sorry I missed to notice that there was one pending patch yet to be\n> committed, I feel you can continue discussing here itself just to\n> avoid losing any historical information about the issue and the\n> continuation of the discussion. You can add a new commitfest entry for\n> this.\n\n\nIt seems to me that a fresh new thread is a better option. I have just\nstarted a new thread in [1], and have tried to migrate the necessary\ncontext over there. I have also updated the commitfest entry\naccordingly.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAMbWs49xYd3f8CrE8-WW3--dV1zH_sDSDn-vs2DzHj81Wcnsew%40mail.gmail.com\n\nThanks\nRichard",
"msg_date": "Mon, 29 Jan 2024 11:40:00 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows estimations with joins of CTEs slows queries by more\n than factor 500"
}
]
[
{
"msg_contents": "Stop recommending auto-download of DTD files, and indeed disable it.\n\nIt appears no longer possible to build the SGML docs without a local\ninstallation of the DocBook DTD, because sourceforge.net now only\npermits HTTPS access, and no common version of xsltproc supports that.\nHence, remove the bits of our documentation suggesting that that's\npossible or useful.\n\nIn fact, we might as well add the --nonet option to the build recipes\nautomatically, for a bit of extra security.\n\nAlso fix our documentation-tool-installation recipes for macOS to\nensure that xmllint and xsltproc are pulled in from MacPorts or\nHomebrew. The previous recipes assumed you could use the\nApple-supplied versions of these tools; which still works, except that\nyou'd need to set an environment variable to ensure that they would\nfind DTD files provided by those package managers. Simpler and easier\nto just recommend pulling in the additional packages.\n\nIn HEAD, also document how to build docs using Meson, and adjust\n\"ninja docs\" to just build the HTML docs, for consistency with the\ndefault behavior of doc/src/sgml/Makefile.\n\nIn a fit of neatnik-ism, I also made the ordering of the package\nlists match the order in which the tools are described at the head\nof the appendix.\n\nAleksander Alekseev, Peter Eisentraut, Tom Lane\n\nDiscussion: https://postgr.es/m/CAJ7c6TO8Aro2nxg=EQsVGiSDe-TstP4EsSvDHd7DSRsP40PgGA@mail.gmail.com\n\nBranch\n------\nREL_15_STABLE\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/2ee703c9d1c6bbbae8b19807c23f91d75d17271e\n\nModified Files\n--------------\ndoc/src/sgml/Makefile | 8 +++++--\ndoc/src/sgml/docguide.sgml | 55 ++++++++++++++++++++++----------------------\ndoc/src/sgml/images/Makefile | 2 +-\n3 files changed, 34 insertions(+), 31 deletions(-)",
"msg_date": "Wed, 08 Feb 2023 22:15:59 +0000",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pgsql: Stop recommending auto-download of DTD files,\n and indeed disable"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 5:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Stop recommending auto-download of DTD files, and indeed disable it.\n\nAccording to this commit:\n\n <para>\n The Homebrew-supplied programs require the following environment variable\n to be set:\n<programlisting>\nexport XML_CATALOG_FILES=/usr/local/etc/xml/catalog\n</programlisting>\n Without it, <command>xsltproc</command> will throw errors like this:\n<programlisting>\nI/O error : Attempt to load network entity\nhttp://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\npostgres.sgml:21: warning: failed to load external entity\n\"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\"\n...\n</programlisting>\n </para>\n\nI use MacPorts, rather than Homebrew, but still found it necessary to\ndo something similar, specifically:\n\nexport XML_CATALOG_FILES=/opt/local/etc/xml/catalog\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 21 Mar 2023 16:45:58 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Stop recommending auto-download of DTD files,\n and indeed disable"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 4:45 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I use MacPorts, rather than Homebrew, but still found it necessary to\n> do something similar, specifically:\n>\n> export XML_CATALOG_FILES=/opt/local/etc/xml/catalog\n\nAh, never mind. I had an incorrect value in my environment. If I unset\nit completely, it works just as well as setting a correct value.\n\nSorry for the noise.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 21 Mar 2023 16:52:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Stop recommending auto-download of DTD files,\n and indeed disable"
}
]
[
{
"msg_contents": "Hi,\n\nVarious users and threads on -hackers (incl. [0] specifically) have\ncomplained about the overhead of our WAL format. One example of this\nis that we write at least 44 bytes to register any changes to a single\nrelation's page: There is a 24-byte WAL record header, a 4-byte block\nrecord header, plus 12 bytes for RelFileLocator, plus 4 bytes for\nBlockNumber. This is a very significant overhead, which is threatening\nto grow ever larger as we're considering moving to larger identifiers\nin PostgreSQL: 56-bit relfilenodes, 64-bit XIDs, etc. I think we can\ndo significantly better than that in most, if not all, cases.\n\nFor PostgreSQL 17, I'd like to propose an effort to improve our WAL\ninfrastructure. The tail of this mail contains identified areas for\nimprovement and feasibility analysis for improving on those items.\nThere are probably more issues and potential improvements, but these\nare the ones I was aware of.\n\nNote that I'm mentioning PostgreSQL 17 because I don't have a lot of\ntime for 16, and I think changing WAL formats and behavior all at once\nreduces the total effort for keeping external tooling compatible (as\nopposed to piecemeal changes); \"to rip off the band-aid\". 
I'm sending\nthis mail now to collect community feedback and to make the thread\ndiscoverable in the archives - I can't find any other threads in the\n-hackers archives that were created specifically to cover people's\ngripes with WAL and solutions to those gripes.\n\nKind regards,\n\nMatthias van de Meent\n\nCC-ed Heikki, Robert, Andres and Dilip for their interest in the topic\nshown in the 56-bit relfilenumber thread, and Koichi for his work on\nparallel recovery.\n(sorry for the duplicate mail, I misconfigured my sender address)\n\n------------\n\nI am aware of the following ideas that exist, several of which came up\nat the FOSDEM Developer Meeting last Thursday [1][2]:\n\nReducing the size of the XLog record header:\n============\n\nWe currently store XIDs in every XLog record header, where only some need\nthem. E.g. most index records do not need XIDs for recovery, nor\nconflict resolution.\n\nThe length field is 4 bytes, but many records are <UINT8 bytes long,\nand most <UINT16. Putting total record length in a variable-length\nfield could reduce the average record overhead even more.\n\nThere are currently 2 bytes of alignment losses in the XLog header.\n\nWe may not need to store the xl_prev pointer to the previous record;\nan implicit reference only contained in the checksum _may_ be enough.\n\nTogether, these updates can reduce the header size by several bytes in\nthe common case.\n\n\nReducing the size of the XLog block header\n============\nThe block header has a 2-byte length field. Not all registered blocks\nhave associated data, and many records only have little data\n(<UINT8_MAX). It would make sense to make this field variable length,\ntoo.\n\n\nReducing the size of RelFileLocator, BlockNumber\n============\nMost of the time, the values in RelFileLocator's fields (and\nBlockNumber) are small enough to fit in 2 or 3 bytes. 
We could shave\noff some more bytes here, if the values are low enough and if we have\nthe bits to spare.\n\n\nReducing the need for FPIs in some cases\n============\nWe log FPIs when we modify a page for the first time after a\ncheckpoint to make sure we can recover from torn writes. However, as\nlong as the data we change is at known offsets in the page (e.g. in\nthe page's checksum field) we could also choose to _not_ emit an FPI,\nand instead mark the page as 'dirty+partial FPI emitted'. Redo would\n_always_ have to replay these partial FPI images, but the overall WAL\nvolume should be able to decrease dramatically for checksum-enabled\nsystems where hintbit update WAL records can account for a large\nportion of the WAL volume.\n\nWe'd need to update the rules on when we emit an FPI, but I think this\nis not impossible and indeed quite feasible.\n\nSome ideas about this \"partial FPI\":\n\nAs long as the line pointer array is not being modified, we can use\nthis partial FPI for a page's Item's header flag bits updates\n(assuming we know what was modified from the xlog record): the items\nremain on their location in the page, so even with torn writes the\nreferences on the page don't change, allowing us to use them to locate\nthe page's item's header and overwrite the bytes. We can definitely\nuse this partial FPI for visibility hint bits, and use it for VM-bit\nupdate logging as well.\n\nThe buffer descriptor (and/or page header bits) would need indicator\nflags not only for 'is dirty' but also for indicating that it has only\nhad a partial FPI this checkpoint, so that when the page is modified\nagain in a way that is incompatible with the \"partial FPI\" rules we\ndon't forget to emit another FPI\n\nSee also: [2]\n\n\nReducing the overhead of reading large WAL records\n============\nRight now, WAL records are first fully read into a sequential buffer,\nthen checksum-validated, and then parsed. 
For small records that fit\non a page, we don't need to copy the data to calculate the checksum,\nbut for records spanning multiple pages, we write the whole record\ninto a locally allocated buffer, and then copy all that data again\nwhen parsing the record's data.\n\nSee for example the index build records generated during GIN index\nbuild on a table with data - it uses several xlog pages to deal with a\nsingle GIN record that contains several FPIs of the newly built index.\nDuring redo this currently requires two allocations of that record's\nsize - one for a temporary buffer to hold the data, and one for the\nDecodedXLogRecord struct that will be used in the redo machinery.\n\nThis feels extremely wasteful. I think we should be able to parse and\nchecksum a record's data in the same pass: all but the first 861 bytes\nof a record are data sections, which can be copied and aligned\ndirectly into the relevant sections of the DecodedXLogRecord.\n\n\nParallelizing recovery\n============\nIn a system that is not in use (i.e. 
crash recovery), each block need\nto be replayed sequentially, but there is no need for that in general.\nWe could distribute WAL records across threads for replay, with some\nextra coordination required for multi-block (such as HEAP/UPDATE) and\nextra-relational records (such as XACT/COMMIT).\nThis may extend to replicas, but I'm not confident that all apply\norders are OK as long as each block's WAL is applied linearly, with a\nprime counterexample being index insertions: An index tuple may only\nbe inserted after the heap tuple is inserted, or the index would\nreport that it is corrupted because it can't find the heap tuples it\nis pointing to.\nI think the XLog records of different databases could be replayed in\nparallel but I'm not super confident about this; as long as we\nsynchronize on shared-information records (like updates in the\ncatalogs, or commits) I *think* we should be fine.\n\n------------------------------\n\nFeasability:\n\n\"Reducing the size of the XLog block header\" is implementable in the\nnear term, as it requires very few changes in the XLog machinery. I am\nplanning on providing a patch for PostgreSQL 17.\n\n\"Reducing the size of the XLog record header\" takes more effort, but I\nthink this too is possible in the near future. I'm planning on\nproviding this in a patch for 17 as well.\n\n\"Reducing the size of RelFileLocator, BlockNumber\" has been proposed\nby Andres Freund in [0], and showed some nice reduction in WAL size in\nhis testing. 
However, I am concerned about the potentially\nsignificantly different performance between new clusters and older\nclusters due to the significantly different compressible database IDs\n/ relfilenumber / blocknumber in those systems.\nI do think it is worth looking into improving it, but I think we'd\nneed changes to how relation and database IDs are generated first:\nsharing a counter with TOAST (and other things that could generate\napparent gaps in a database's (or cluster's) assigned ids) is in my\nopinion extremely unfavorable for the efficiency of the feature.\n\nI think that \"Reducing the need for FPIs in some cases\" might be\nsomewhat controversial, and would need to be considered separate from\nany other changes. I only (re?)discovered this idea last week, so I\nhaven't yet discovered all the pros and cons. But, because it could\npotentially reduce the IO overhead of maintenance operations by up to\n50% (still one dirty page write, but that's down from a full page in\nWAL + dirty page write), I think it is definitely worth looking into.\n\n\"Reducing the overhead of reading large WAL records\" requires another\nupdate in the WAL reader. I don't think there are many controversial\ncomponents to this, though, as it is mostly refactoring and making\nsure we don't lose performance.\n\n\"Parallelizing recovery\" is not impossible, but requires significant\nwork regarding ordering of operations before it would be usable on hot\nstandby nodes. 
I did find a wiki page on the topic [3], and it\ncontains some further reading and links to a git branch with work on\nthis topic (latest based on 14.6).\n\n\n[0] \"problems with making relfilenodes 56-bits\":\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoaa9Yc9O-FP4vS_xTKf8Wgy8TzHpjnjN56_ShKE%3DjrP-Q%40mail.gmail.com\n[1] https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2023_Developer_Meeting#XLog_Format\n[2] https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2023_Developer_Meeting#Page_Format\n[3] https://wiki.postgresql.org/wiki/Parallel_Recovery\n\n\n",
"msg_date": "Thu, 9 Feb 2023 00:09:46 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: WAL infrastructure issues, updates and improvements"
}
] |
[
{
"msg_contents": "Hello.\n\nWhile looking a patch, I found that pqSocketPoll passes through the\nresult from poll(2) to the caller and throws away revents. If I\nunderstand it correctly, poll() *doesn't* return -1 nor errno by the\nreason it has set POLLERR, POLLHUP, POLLNVAL, and POLLRDHUP for some\nof the target sockets, and returns 0 unless poll() itself failed to\nwork.\n\nIt doesn't seem to be the intended behavior since the function sets\nPOLLERR to pollfd.events. (but the bit is ignored by poll(), though)\n\nIs the above diagnosis correct?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 09 Feb 2023 11:50:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Is psSocketPoll doing the right thing?"
},
{
"msg_contents": "\n\nOn 2023/02/09 11:50, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> While looking a patch, I found that pqSocketPoll passes through the\n> result from poll(2) to the caller and throws away revents. If I\n> understand it correctly, poll() *doesn't* return -1 nor errno by the\n> reason it has set POLLERR, POLLHUP, POLLNVAL, and POLLRDHUP for some\n> of the target sockets, and returns 0 unless poll() itself failed to\n> work.\n\nAs far as I understand correctly, poll() returns >0 if \"revents\"\nhas either of those bits, not 0 nor -1.\n\nYou're thinking that pqSocketPoll() should check \"revents\" and\nreturn -1 if either of those bits is set?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 9 Feb 2023 17:32:08 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Is psSocketPoll doing the right thing?"
},
{
"msg_contents": "> Subject: is p*s*Socket..\n\nOops...\n\nAt Thu, 9 Feb 2023 17:32:08 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2023/02/09 11:50, Kyotaro Horiguchi wrote:\n> > Hello.\n> > While looking a patch, I found that pqSocketPoll passes through the\n> > result from poll(2) to the caller and throws away revents. If I\n> > understand it correctly, poll() *doesn't* return -1 nor errno by the\n> > reason it has set POLLERR, POLLHUP, POLLNVAL, and POLLRDHUP for some\n> > of the target sockets, and returns 0 unless poll() itself failed to\n> > work.\n> \n> As far as I understand correctly, poll() returns >0 if \"revents\"\n> has either of those bits, not 0 nor -1.\n\nRight. as my understanding.\n\nIf any of the sockets is in any of the states, pqSocketPoll returns a\npositive, which makes pqSocketCheck return 1. Finally\npqRead/WriteReady return \"ready\" even though the connection socket is\nin an error state. Actually that behavior doesn't harm since the\nsucceeding actual read/write will \"properly\" fail. However, once we\nuse this function to simply check the socket is sound without doing an\nactual read/write, that behavior starts giving a harm by the false\nanswer.\n\n> You're thinking that pqSocketPoll() should check \"revents\" and\n> return -1 if either of those bits is set?\n\nIn short, yes.\n\npqSocketPoll() should somehow inform callers about that\nstate. Fortunately pqSocketPoll is a private function thus we can\nrefactor the function so that it can do that properly.\n\nIf no one object to that change, I'll work on that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 10 Feb 2023 10:42:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is psSocketPoll doing the right thing?"
},
{
"msg_contents": "On 2023-02-10 10:42, Kyotaro Horiguchi wrote:\n>> On 2023/02/09 11:50, Kyotaro Horiguchi wrote:\n>> > Hello.\n>> > While looking a patch, I found that pqSocketPoll passes through the\n>> > result from poll(2) to the caller and throws away revents. If I\n>> > understand it correctly, poll() *doesn't* return -1 nor errno by the\n>> > reason it has set POLLERR, POLLHUP, POLLNVAL, and POLLRDHUP for some\n>> > of the target sockets, and returns 0 unless poll() itself failed to\n>> > work.\n>> \n>> As far as I understand correctly, poll() returns >0 if \"revents\"\n>> has either of those bits, not 0 nor -1.\n> \n> Right. as my understanding.\n> \n> If any of the sockets is in any of the states, pqSocketPoll returns a\n> positive, which makes pqSocketCheck return 1. Finally\n> pqRead/WriteReady return \"ready\" even though the connection socket is\n> in an error state. Actually that behavior doesn't harm since the\n> succeeding actual read/write will \"properly\" fail. However, once we\n> use this function to simply check the socket is sound without doing an\n> actual read/write, that behavior starts giving a harm by the false\n> answer.\n\nI agree with you. Current pqScoketCheck could return a false result\nfrom a caller's point of view.\n\n\n>> You're thinking that pqSocketPoll() should check \"revents\" and\n>> return -1 if either of those bits is set?\n> \n> In short, yes.\n> \n> pqSocketPoll() should somehow inform callers about that\n> state. Fortunately pqSocketPoll is a private function thus we can\n> refactor the function so that it can do that properly.\n\nDoes this mean that pqSocketPoll or pqSocketCheck somehow returns the\npoll's result including error conditions (POLLERR, POLLHUP, POLLNVAL)\nto callers? Then callers filter the result to make their final result.\n\nregards,\n\n-- \nKatsuragi Yuta\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 16 Feb 2023 13:44:34 +0900",
"msg_from": "Katsuragi Yuta <katsuragiy@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Is psSocketPoll doing the right thing?"
},
{
"msg_contents": "Dear Horiguchi-san,\n\nMy comment may be no longer needed, but I can +1 to your opinion.\n\n> > On 2023/02/09 11:50, Kyotaro Horiguchi wrote:\n> > > Hello.\n> > > While looking a patch, I found that pqSocketPoll passes through the\n> > > result from poll(2) to the caller and throws away revents. If I\n> > > understand it correctly, poll() *doesn't* return -1 nor errno by the\n> > > reason it has set POLLERR, POLLHUP, POLLNVAL, and POLLRDHUP for\n> some\n> > > of the target sockets, and returns 0 unless poll() itself failed to\n> > > work.\n> >\n> > As far as I understand correctly, poll() returns >0 if \"revents\"\n> > has either of those bits, not 0 nor -1.\n> \n> Right. as my understanding.\n> \n> If any of the sockets is in any of the states, pqSocketPoll returns a\n> positive, which makes pqSocketCheck return 1. Finally\n> pqRead/WriteReady return \"ready\" even though the connection socket is\n> in an error state. Actually that behavior doesn't harm since the\n> succeeding actual read/write will \"properly\" fail. However, once we\n> use this function to simply check the socket is sound without doing an\n> actual read/write, that behavior starts giving a harm by the false\n> answer.\n\nI checked man page of poll(3), and it said that POLLERR, POLLHUP, POLLNVAL is only\nvalid in revents. Moreover, poll() has is clarified that it returns natural number\nif revent is larger than zero. So revent should be checked even if the returned\nvalue > 0.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Mon, 20 Feb 2023 06:40:30 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Is psSocketPoll doing the right thing?"
}
] |
[
{
"msg_contents": "Hi all,\n\nThe following SQL sequence causes a failure of pg_upgrade when these\nare executed on a cluster of ~13, doing an upgrade to 14~, assuming\nthat the relation page size is 8kB. This creates a partition table\nwith a set of values large enough that it can be created in ~13:\nCREATE TABLE parent_list (id int) PARTITION BY LIST (id);\nCREATE OR REPLACE FUNCTION create_long_list(tabname text,\n tabparent text,\n num_vals int)\nRETURNS VOID AS\n$func$\nDECLARE\n query text;\nBEGIN\n query := 'CREATE TABLE ' || tabname ||\n ' PARTITION OF ' || tabparent || ' FOR VALUES IN (';\n FOR i IN 1..num_vals LOOP\n query := query || i;\n IF i != num_vals THEN\n query := query || ', ';\n END IF;\n END LOOP;\n query := query || ')';\n EXECUTE format(query);\nEND\n$func$ LANGUAGE plpgsql;\n-- Large enough to trigger pg_class failure in 14~\n-- Use 953 to make it work in 14~\nSELECT create_long_list('child_list_2', 'parent_list', 956);\n\nHowever, pg_upgrade fails in the middle of processing when restoring\nthe objects in the new cluster, with the same error as one would get\nbecause the row is too big and we have no toast tables in pg_class:\npg_restore: error: could not execute query: ERROR: row is too big:\nsize 8184, maximum size 8160\nCommand was: ALTER TABLE ONLY \"public\".\"parent_list\" ATTACH PARTITION\n\nThen, as of pg_upgrade_internal.log:\nRestoring database schemas in the new cluster\n*failure*\nConsult the last few lines of \"pg_upgrade_dump_13468.log\" for\nthe probable cause of the failure.\n\nNo fields have been added to pg_class between 13 and 14, however the\namount of data stored in relpartbound got larger between these two\nversions (just do a length() on it for example using what I posted\nabove). 
Hence, if the original cluster has a version of pg_class\nlarge enough to just fit into a single page without the need of\ntoasting, it may fail when created in the new cluster because it lacks\nspace to fit on a page because of this extra partition bound data.\n\nIn such cases, the usual recommendation would be to adjust the\npartition layer so as the schema has smaller pg_node_trees for the\npartition bounds. Still, waiting for something to blow up in the\nmiddle of pg_upgrade is very unfriendly, and a pg_upgrade --link would\nmean the need to rollback to a previous snapshot, which can be\ncostly.\n\nAdding a toast table to pg_class or even pg_attribute (because this\ncould also happen with a bunch of attribute-level ACLs) has been\nproposed for some time, though there have always been concerns about\ncircling dependencies back to pg_class. More toasting or a split of\nrelpartbound into a separate catalog (with toast in it) would solve\nthis issue at its root, but that's not something that would happen in\n14~15 anyway.\n\nShouldn't we have a safeguard of some kind in the pre-check phase of\npg_upgrade at least? I think that this comes down to checking\nsum(pg_column_size(pg_class.*)), roughly, with alignment and page\nheader, and do the same for pg_attribute.\n\nThoughts?\n--\nMichael",
"msg_date": "Thu, 9 Feb 2023 14:17:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "pg_upgrade failures with large partition definitions on upgrades\n from ~13 to 14~"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> The following SQL sequence causes a failure of pg_upgrade when these\n> are executed on a cluster of ~13, doing an upgrade to 14~, assuming\n> that the relation page size is 8kB.\n> ...\n> No fields have been added to pg_class between 13 and 14, however the\n> amount of data stored in relpartbound got larger between these two\n> versions (just do a length() on it for example using what I posted\n> above). Hence, if the original cluster has a version of pg_class\n> large enough to just fit into a single page without the need of\n> toasting, it may fail when created in the new cluster because it lacks\n> space to fit on a page because of this extra partition bound data.\n\nBleah.\n\n> Shouldn't we have a safeguard of some kind in the pre-check phase of\n> pg_upgrade at least? I think that this comes down to checking\n> sum(pg_column_size(pg_class.*)), roughly, with alignment and page\n> header, and do the same for pg_attribute.\n\nIt might be worth expending a pre-check on, if only because the\ncheck could offer some advice about fixing the problem. But it\nseems like quite a corner case --- what are the odds of hitting\nthis?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Feb 2023 00:33:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failures with large partition definitions on upgrades\n from ~13 to 14~"
},
{
"msg_contents": "On Thu, Feb 09, 2023 at 12:33:06AM -0500, Tom Lane wrote:\n> It might be worth expending a pre-check on, if only because the\n> check could offer some advice about fixing the problem.\n\nBased on the information coming from pg_class, yes, something could be\nreported back. Now things get more hairy if the oversized tuple has a\nmix of long ACLs and a long partition bound.\n\n> But it seems like quite a corner case --- what are the odds of\n> hitting this?\n\nLow, I guess, as you need a tuple small enough that it fits right into\na page in 13~, but large enough to hit the upper-bound on insert\nbecause of the extra overhead of relpartbound (something like 20B, at\nshort glance, in my case). Well, this would not be an issue if there\nwere more toasting done. I agree that schemas with such long\ndefinitions point out to deficiencies usually, but the user experience\nis bad when once would expect an upgrade with no hiccups, then fails\non this stuff, delaying an upgrade longer because the instance\nrequires a rollback.\n--\nMichael",
"msg_date": "Thu, 9 Feb 2023 14:52:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade failures with large partition definitions on upgrades\n from ~13 to 14~"
}
] |
[
{
"msg_contents": "When we try to generate qual variants with different nullingrels in\ndeconstruct_distribute_oj_quals, we traverse all the JoinTreeItems and\nadjust qual nulling bits as we crawl up the join tree. For a\nSpecialJoinInfo which commutes with current sjinfo from below left, in\nthe next level up it would null all the relids in its righthand. So we\nadjust qual nulling bits as below.\n\n /*\n * Adjust qual nulling bits for next level up, if needed. We\n * don't want to put sjinfo's own bit in at all, and if we're\n * above sjinfo then we did it already.\n */\n if (below_sjinfo)\n quals = (List *)\n add_nulling_relids((Node *) quals,\n othersj->min_righthand,\n bms_make_singleton(othersj->ojrelid));\n\nIt seems to me there is oversight here. Actually in next level up this\nothersj would null all the relids in its syn_righthand, not only the\nrelids in its min_righthand. If the quals happen to contain references\nto relids which are in othersj->syn_righthand but not in\nothersj->min_righthand, these relids would not get updated with\nothersj->ojrelid added. 
And this would cause qual nulling bits not\nconsistent.\n\nI've managed to devise a query that can show this problem.\n\ncreate table t1(a int, b int);\ncreate table t2(a int, b int);\ncreate table t3(a int, b int);\ncreate table t4(a int, b int);\n\ninsert into t1 select i, i from generate_series(1,10)i;\ninsert into t2 select i, i from generate_series(1,10)i;\ninsert into t3 select i, i from generate_series(1,1000)i;\ninsert into t4 select i, i from generate_series(1,1000)i;\nanalyze;\n\nselect * from t1 left join (t2 left join t3 on t2.a > t3.a) on t1.b = t2.b\nleft join t4 on t2.b = t3.b;\n\nThis query would trigger the Assert() in search_indexed_tlist_for_var.\nSo I wonder that we should use othersj->syn_righthand here.\n\n--- a/src/backend/optimizer/plan/initsplan.c\n+++ b/src/backend/optimizer/plan/initsplan.c\n@@ -2046,7 +2046,7 @@ deconstruct_distribute_oj_quals(PlannerInfo *root,\n if (below_sjinfo)\n quals = (List *)\n add_nulling_relids((Node *) quals,\n- othersj->min_righthand,\n+ othersj->syn_righthand,\n bms_make_singleton(othersj->ojrelid));\n\nThanks\nRichard",
"msg_date": "Thu, 9 Feb 2023 17:16:15 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Inconsistent nullingrels due to oversight in\n deconstruct_distribute_oj_quals"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> It seems to me there is oversight here. Actually in next level up this\n> othersj would null all the relids in its syn_righthand, not only the\n> relids in its min_righthand.\n\nGood point. I think this code originated before it was clear to me\nthat nullingrels would need to follow the syntactic structure.\n\n> This query would trigger the Assert() in search_indexed_tlist_for_var.\n> So I wonder that we should use othersj->syn_righthand here.\n\nThere are two such calls in deconstruct_distribute_oj_quals ...\ndon't they both need this change?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Feb 2023 10:55:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent nullingrels due to oversight in\n deconstruct_distribute_oj_quals"
},
{
"msg_contents": "I wrote:\n> Richard Guo <guofenglinux@gmail.com> writes:\n>> It seems to me there is oversight here. Actually in next level up this\n>> othersj would null all the relids in its syn_righthand, not only the\n>> relids in its min_righthand.\n\n> Good point. I think this code originated before it was clear to me\n> that nullingrels would need to follow the syntactic structure.\n\nAlthough ... the entire point here is that we're trying to build quals\nthat don't match the original syntactic structure. I'm worried that\ndistribute_qual_to_rels will do the wrong thing (put the qual at the\nwrong level) if we add more nullingrel bits than we meant to. This\nmight be less trivial than it appears.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Feb 2023 11:28:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent nullingrels due to oversight in\n deconstruct_distribute_oj_quals"
},
{
"msg_contents": "On Thu, Feb 9, 2023 at 11:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > This query would trigger the Assert() in search_indexed_tlist_for_var.\n> > So I wonder that we should use othersj->syn_righthand here.\n>\n> There are two such calls in deconstruct_distribute_oj_quals ...\n> don't they both need this change?\n\n\nYeah, I wondered about that too, but didn't manage to devise a query\nthat can show the problem caused by the call for 'above_sjinfo' case.\nAfter a night of sleep I came up with one this morning. :-)\n\ncreate table t (a int, b int);\n\ninsert into t select i, i from generate_series(1,10)i;\nanalyze t;\n\nselect * from t t1 left join t t2 left join t t3 on t2.b = t3.b left join t\nt4 on t2.a > t3.a on t2.a > t1.a;\n\nIn this query, for the qual 't2.a > t3.a', when we try to push t3/t4\njoin to above t1/t2 join, we fail to add t1/t2 ojrelid to\nnullingrels of t3.a, because t3 is not in t1/t2 join's min_righthand\n(but in its syn_righthand). We really should have done that because\nafter the commutation t1/t2 join can null not only t2 but also t3 in\nthis case.\n\nThanks\nRichard",
"msg_date": "Fri, 10 Feb 2023 11:08:10 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent nullingrels due to oversight in\n deconstruct_distribute_oj_quals"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 11:08 AM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> On Thu, Feb 9, 2023 at 11:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Richard Guo <guofenglinux@gmail.com> writes:\n>> > This query would trigger the Assert() in search_indexed_tlist_for_var.\n>> > So I wonder that we should use othersj->syn_righthand here.\n>>\n>> There are two such calls in deconstruct_distribute_oj_quals ...\n>> don't they both need this change?\n>\n>\n> Yeah, I wondered about that too, but didn't manage to devise a query\n> that can show the problem caused by the call for 'above_sjinfo' case.\n> After a night of sleep I came up with one this morning. :-)\n>\n> create table t (a int, b int);\n>\n> insert into t select i, i from generate_series(1,10)i;\n> analyze t;\n>\n> select * from t t1 left join t t2 left join t t3 on t2.b = t3.b left join\n> t t4 on t2.a > t3.a on t2.a > t1.a;\n>\n> In this query, for the qual 't2.a > t3.a', when we try to push t3/t4\n> join to above t1/t2 join, we fail to add t1/t2 ojrelid to\n> nullingrels of t3.a, because t3 is not in t1/t2 join's min_righthand\n> (but in its syn_righthand). We really should have done that because\n> after the commutation t1/t2 join can null not only t2 but also t3 in\n> this case.\n>\n\nHowever, for 'above_sjinfo' case, we should not use\nothersj->syn_righthand, because othersj->syn_righthand contains relids\nin sjinfo's righthand which should not be nulled by othersj after the\ncommutation. 
It seems what we should use here is sjinfo->syn_lefthand.\n\n--- a/src/backend/optimizer/plan/initsplan.c\n+++ b/src/backend/optimizer/plan/initsplan.c\n@@ -1990,7 +1990,7 @@ deconstruct_distribute_oj_quals(PlannerInfo *root,\n if (above_sjinfo)\n quals = (List *)\n add_nulling_relids((Node *) quals,\n- othersj->min_righthand,\n+ sjinfo->syn_lefthand,\n bms_make_singleton(othersj->ojrelid));\n\nThanks\nRichard",
"msg_date": "Fri, 10 Feb 2023 16:27:14 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent nullingrels due to oversight in\n deconstruct_distribute_oj_quals"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> However, for 'above_sjinfo' case, we should not use\n> othersj->syn_righthand, because othersj->syn_righthand contains relids\n> in sjinfo's righthand which should not be nulled by othersj after the\n> commutation. It seems what we should use here is sjinfo->syn_lefthand.\n\nI had a hard time wrapping my brain around that to start with, but\nnow I think you're right. othersj is syntactically above the current\njoin, so its syn_righthand will cover all of the current join, but\nwe only want to add nulling bits to Vars of the current join's LHS.\n(That is, we need to transform Pbc to Pb*c, not Pb*c*.)\n\nI also realized that there was a fairly critical nearby bug:\nmake_outerjoininfo was failing to check whether the upper join's qual\nis actually of the form \"Pbc\", without any references to the lower join's\nLHS. So that led us to setting commute bits in some cases where we\nshouldn't, further confusing deconstruct_distribute_oj_quals. (I think\nthis snuck in because its other code path doesn't need to make such a\ncheck, it being syntactically impossible to have such a reference if\nwe start from the other form of the identity.)\n\nFix pushed. This seems to take care of Robins' latest example in\nthe bug #17781 thread [1], too.\n\n\t\t\tregards, tom lane\n\n[1] https://postgr.es/m/CAEP4nAx9C5gXNBfEA0JBfz7B+5f1Bawt-RWQWyhev-wdps8BZA@mail.gmail.com\n\n\n",
"msg_date": "Fri, 10 Feb 2023 13:40:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent nullingrels due to oversight in\n deconstruct_distribute_oj_quals"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nPlease find attached a patch proposal for $SUBJECT.\n\nThe idea has been raised in [1] by Andres: it would allow to simplify even more the work done to\ngenerate pg_stat_get_xact*() functions with Macros.\n\nIndeed, with the reconciliation done in find_tabstat_entry() then all the pg_stat_get_xact*() functions\n(making use of find_tabstat_entry()) now \"look the same\" (should they take into account live subtransactions or not).\n\nLooking forward to your feedback,\n\nRegards\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n[1]: https://www.postgresql.org/message-id/20230111225907.6el6c5j3hukizqxc%40awork3.anarazel.de",
"msg_date": "Thu, 9 Feb 2023 11:38:18 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "At Thu, 9 Feb 2023 11:38:18 +0100, \"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com> wrote in \n> Hi hackers,\n> \n> Please find attached a patch proposal for $SUBJECT.\n> \n> The idea has been raised in [1] by Andres: it would allow to simplify\n> even more the work done to\n> generate pg_stat_get_xact*() functions with Macros.\n> \n> Indeed, with the reconciliation done in find_tabstat_entry() then all\n> the pg_stat_get_xact*() functions\n> (making use of find_tabstat_entry()) now \"look the same\" (should they\n> take into account live subtransactions or not).\n> \n> Looking forward to your feedback,\n\nI like that direction.\n\nDon't we rename PgStat_FunctionCounts to PgStat_FuncStatus, unifying\nneighboring functions?\n\nWhy does find_tabstat_entry() copies the whole pending data and\nperforms subxaction summarization? The summarization is needed only by\nfew callers but now that cost is imposed to the all callers along with\nadditional palloc()/pfree() calls. That doesn't seem reasonable.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 10 Feb 2023 11:32:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 2/10/23 3:32 AM, Kyotaro Horiguchi wrote:\n> At Thu, 9 Feb 2023 11:38:18 +0100, \"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com> wrote in\n>> Hi hackers,\n>>\n>> Please find attached a patch proposal for $SUBJECT.\n>>\n>> The idea has been raised in [1] by Andres: it would allow to simplify\n>> even more the work done to\n>> generate pg_stat_get_xact*() functions with Macros.\n>>\n>> Indeed, with the reconciliation done in find_tabstat_entry() then all\n>> the pg_stat_get_xact*() functions\n>> (making use of find_tabstat_entry()) now \"look the same\" (should they\n>> take into account live subtransactions or not).\n>>\n>> Looking forward to your feedback,\n> \n> I like that direction.\n> \n\nThanks for looking at it!\n\n> Don't we rename PgStat_FunctionCounts to PgStat_FuncStatus, unifying\n> neighboring functions?\n> \n\nNot sure, I think it's the counter part of PgStat_TableCounts for example.\n\n> Why does find_tabstat_entry() copies the whole pending data and\n> performs subxaction summarization? \n\nIt copies the pending data to not increment it's counters while doing the summarization.\nThe summarization was done here to avoid the pg_stat_get_xact*() functions to do the computation so that all\nthe pg_stat_get_xact*() functions look the same but....\n\n> The summarization is needed only by\n> few callers but now that cost is imposed to the all callers along with\n> additional palloc()/pfree() calls. That doesn't seem reasonable.\n> \n\nI agree that's not the best approach.....\n\nLet me come back with another proposal (thinking to increment reconciled\ncounters in pgstat_count_heap_insert(), pgstat_count_heap_delete() and\npgstat_count_heap_update()).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 10 Feb 2023 16:50:32 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-09 11:38:18 +0100, Drouvot, Bertrand wrote:\n> Please find attached a patch proposal for $SUBJECT.\n> \n> The idea has been raised in [1] by Andres: it would allow to simplify even more the work done to\n> generate pg_stat_get_xact*() functions with Macros.\n\nThanks!\n\nI think this is useful beyond being able to generate those functions with\nmacros. The fact that we had to deal with transactional code in pgstatfuncs.c\nmeant that a lot of the relevant internals had to be exposed \"outside\" pgstat,\nwhich seems architecturally not great.\n\n\n> Indeed, with the reconciliation done in find_tabstat_entry() then all the pg_stat_get_xact*() functions\n> (making use of find_tabstat_entry()) now \"look the same\" (should they take into account live subtransactions or not).\n\nI'm not bothered by making all of pg_stat_get_xact* functions more expensive,\nthey're not a hot code path. But if we need to, we could just add a parameter\nto find_tabstat_entry() indicating whether we need to reconcile or not.\n\n\n> \t/* save stats for this function, later used to compensate for recursion */\n> -\tfcu->save_f_total_time = pending->f_counts.f_total_time;\n> +\tfcu->save_f_total_time = pending->f_total_time;\n> \n> \t/* save current backend-wide total time */\n> \tfcu->save_total = total_func_time;\n\nThe diff is noisy due to all the mechanical changes like the above. 
Could that\nbe split into a separate commit?\n\n\n> find_tabstat_entry(Oid rel_id)\n> {\n> \tPgStat_EntryRef *entry_ref;\n> +\tPgStat_TableStatus *tablestatus = NULL;\n> \n> \tentry_ref = pgstat_fetch_pending_entry(PGSTAT_KIND_RELATION, MyDatabaseId, rel_id);\n> \tif (!entry_ref)\n> \t\tentry_ref = pgstat_fetch_pending_entry(PGSTAT_KIND_RELATION, InvalidOid, rel_id);\n> \n> \tif (entry_ref)\n> -\t\treturn entry_ref->pending;\n> -\treturn NULL;\n> +\t{\n> +\t\tPgStat_TableStatus *tabentry = (PgStat_TableStatus *) entry_ref->pending;\n\nI'd add an early return for the !entry_ref case, that way you don't need to\nindent the bulk of the function.\n\n\n> +\t\tPgStat_TableXactStatus *trans;\n> +\n> +\t\ttablestatus = palloc(sizeof(PgStat_TableStatus));\n> +\t\tmemcpy(tablestatus, tabentry, sizeof(PgStat_TableStatus));\n\nFor things like this I'd use\n *tablestatus = *tabentry;\n\nthat way the compiler will warn you about mismatching types, and you don't\nneed the sizeof().\n\n\n> +\t\t/* live subtransactions' counts aren't in t_counts yet */\n> +\t\tfor (trans = tabentry->trans; trans != NULL; trans = trans->upper)\n> +\t\t{\n> +\t\t\ttablestatus->t_counts.t_tuples_inserted += trans->tuples_inserted;\n> +\t\t\ttablestatus->t_counts.t_tuples_updated += trans->tuples_updated;\n> +\t\t\ttablestatus->t_counts.t_tuples_deleted += trans->tuples_deleted;\n> +\t\t}\n> +\t}\n\nHm, why do we end up with t_counts still being used here, but removed in other\nplaces?\n\n\n> diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c\n> index 6737493402..40a6fbf871 100644\n> --- a/src/backend/utils/adt/pgstatfuncs.c\n> +++ b/src/backend/utils/adt/pgstatfuncs.c\n> @@ -1366,7 +1366,10 @@ pg_stat_get_xact_numscans(PG_FUNCTION_ARGS)\n> \tif ((tabentry = find_tabstat_entry(relid)) == NULL)\n> \t\tresult = 0;\n> \telse\n> +\t{\n> \t\tresult = (int64) (tabentry->t_counts.t_numscans);\n> +\t\tpfree(tabentry);\n> +\t}\n> \n> \tPG_RETURN_INT64(result);\n> }\n\nI don't 
think we need to bother with individual pfrees in this path. The\ncaller will call the function in a dedicated memory context, that'll be reset\nvery soon after this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Feb 2023 13:46:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 2/10/23 10:46 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2023-02-09 11:38:18 +0100, Drouvot, Bertrand wrote:\n>> Please find attached a patch proposal for $SUBJECT.\n>>\n>> The idea has been raised in [1] by Andres: it would allow to simplify even more the work done to\n>> generate pg_stat_get_xact*() functions with Macros.\n> \n> Thanks!\n> \n\nThanks for looking at it!\n\n> I think this is useful beyond being able to generate those functions with\n> macros. The fact that we had to deal with transactional code in pgstatfuncs.c\n> meant that a lot of the relevant itnernals had to be exposed \"outside\" pgstat,\n> which seems architecturally not great.\n> \n \nRight, good point.\n\n>> Indeed, with the reconciliation done in find_tabstat_entry() then all the pg_stat_get_xact*() functions\n>> (making use of find_tabstat_entry()) now \"look the same\" (should they take into account live subtransactions or not).\n> \n> I'm not bothered by making all of pg_stat_get_xact* functions more expensive,\n> they're not a hot code path. But if we need to, we could just add a parameter\n> to find_tabstat_entry() indicating whether we need to reconcile or not.\n> \n\nI think that's a good idea to avoid doing extra work if not needed.\nV2 adds such a bool.\n\n>> \t/* save stats for this function, later used to compensate for recursion */\n>> -\tfcu->save_f_total_time = pending->f_counts.f_total_time;\n>> +\tfcu->save_f_total_time = pending->f_total_time;\n>> \n>> \t/* save current backend-wide total time */\n>> \tfcu->save_total = total_func_time;\n> \n> The diff is noisy due to all the mechanical changes like the above. 
Could that\n> be split into a separate commit?\n> \n\nFully agree, the PgStat_BackendFunctionEntry stuff will be done in a separate patch.\n\n> \n>> find_tabstat_entry(Oid rel_id)\n>> {\n>> \tPgStat_EntryRef *entry_ref;\n>> +\tPgStat_TableStatus *tablestatus = NULL;\n>> \n>> \tentry_ref = pgstat_fetch_pending_entry(PGSTAT_KIND_RELATION, MyDatabaseId, rel_id);\n>> \tif (!entry_ref)\n>> \t\tentry_ref = pgstat_fetch_pending_entry(PGSTAT_KIND_RELATION, InvalidOid, rel_id);\n>> \n>> \tif (entry_ref)\n>> -\t\treturn entry_ref->pending;\n>> -\treturn NULL;\n>> +\t{\n>> +\t\tPgStat_TableStatus *tabentry = (PgStat_TableStatus *) entry_ref->pending;\n> \n> I'd add an early return for the !entry_ref case, that way you don't need to\n> indent the bulk of the function.\n> \n\nGood point, done in V2.\n\n> \n>> +\t\tPgStat_TableXactStatus *trans;\n>> +\n>> +\t\ttablestatus = palloc(sizeof(PgStat_TableStatus));\n>> +\t\tmemcpy(tablestatus, tabentry, sizeof(PgStat_TableStatus));\n> \n> For things like this I'd use\n> *tablestatus = *tabentry;\n> \n> that way the compiler will warn you about mismatching types, and you don't\n> need the sizeof().\n> \n> \n\nGood point, done in V2.\n\n>> +\t\t/* live subtransactions' counts aren't in t_counts yet */\n>> +\t\tfor (trans = tabentry->trans; trans != NULL; trans = trans->upper)\n>> +\t\t{\n>> +\t\t\ttablestatus->t_counts.t_tuples_inserted += trans->tuples_inserted;\n>> +\t\t\ttablestatus->t_counts.t_tuples_updated += trans->tuples_updated;\n>> +\t\t\ttablestatus->t_counts.t_tuples_deleted += trans->tuples_deleted;\n>> +\t\t}\n>> +\t}\n> \n> Hm, why do we end uup with t_counts still being used here, but removed other\n> places?\n> \n\nt_counts are not removed, maybe you are confused with the \"f_counts\" that were removed\nin V1 due to the PgStat_BackendFunctionEntry related changes?\n\n> \n>> diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c\n>> index 6737493402..40a6fbf871 100644\n>> --- 
a/src/backend/utils/adt/pgstatfuncs.c\n>> +++ b/src/backend/utils/adt/pgstatfuncs.c\n>> @@ -1366,7 +1366,10 @@ pg_stat_get_xact_numscans(PG_FUNCTION_ARGS)\n>> \tif ((tabentry = find_tabstat_entry(relid)) == NULL)\n>> \t\tresult = 0;\n>> \telse\n>> +\t{\n>> \t\tresult = (int64) (tabentry->t_counts.t_numscans);\n>> +\t\tpfree(tabentry);\n>> +\t}\n>> \n>> \tPG_RETURN_INT64(result);\n>> }\n> \n> I don't think we need to bother with individual pfrees in this path. The\n> caller will call the function in a dedicated memory context, that'll be reset\n> very soon after this.\n\nOh right, the palloc is done in the ExprContext memory context that is reset soon after.\nRemoving the pfrees in V2 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 13 Feb 2023 08:09:50 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "At Mon, 13 Feb 2023 08:09:50 +0100, \"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com> wrote in \n>> I think this is useful beyond being able to generate those functions\n>> with\n>> macros. The fact that we had to deal with transactional code in\n>> pgstatfuncs.c\n>> meant that a lot of the relevant itnernals had to be exposed \"outside\"\n>> pgstat,\n>> which seems architecturally not great.\n>> \n> Right, good point.\n\nAgreed.\n\n> Removing the pfrees in V2 attached.\n\nAh, that sounds good.\n\n \tif (!entry_ref)\n+\t{\n \t\tentry_ref = pgstat_fetch_pending_entry(PGSTAT_KIND_RELATION, InvalidOid, rel_id);\n+\t\treturn tablestatus;\n+\t}\n\nWe should return something if the call returns a non-null value?\n\nSo, since we want to hide the internals from pgstatfuncs, the\nadditional flag should be gone. If the additional cost doesn't bother\nanyone, I don't mind removing the flag. The patch will get far\nsimpler that way.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 13 Feb 2023 16:40:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 2/13/23 8:40 AM, Kyotaro Horiguchi wrote:\n> At Mon, 13 Feb 2023 08:09:50 +0100, \"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com> wrote in\n>>> I think this is useful beyond being able to generate those functions\n>>> with\n>>> macros. The fact that we had to deal with transactional code in\n>>> pgstatfuncs.c\n>>> meant that a lot of the relevant itnernals had to be exposed \"outside\"\n>>> pgstat,\n>>> which seems architecturally not great.\n>>>\n>> Right, good point.\n> \n> Agreed.\n> \n>> Removing the pfrees in V2 attached.\n> \n> Ah, that sound good.\n> \n> \tif (!entry_ref)\n> +\t{\n> \t\tentry_ref = pgstat_fetch_pending_entry(PGSTAT_KIND_RELATION, InvalidOid, rel_id);\n> +\t\treturn tablestatus;\n> +\t}\n> \n> We should return something if the call returns a non-null value?\n\nWhat we do is: if entry_ref is NULL then we return NULL (so that the caller returns 0).\n\nIf entry_ref is not NULL then we return a copy of entry_ref->pending (with or without subtrans).\n\n> \n> So, since we want to hide the internal from pgstatfuncs, the\n> additional flag should be gone. \n\nI think there are pros and cons for both but I don't have a strong opinion about that.\n\nSo also proposing V3 attached without the flag in case this is the preferred approach.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 13 Feb 2023 09:58:52 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Thanks for the new version.\n\nAt Mon, 13 Feb 2023 09:58:52 +0100, \"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com> wrote in \n> Hi,\n> \n> On 2/13/23 8:40 AM, Kyotaro Horiguchi wrote:\n> > At Mon, 13 Feb 2023 08:09:50 +0100, \"Drouvot, Bertrand\"\n> > <bertranddrouvot.pg@gmail.com> wrote in\n> >>> I think this is useful beyond being able to generate those functions\n> >>> with\n> >>> macros. The fact that we had to deal with transactional code in\n> >>> pgstatfuncs.c\n> >>> meant that a lot of the relevant itnernals had to be exposed \"outside\"\n> >>> pgstat,\n> >>> which seems architecturally not great.\n> >>>\n> >> Right, good point.\n> > Agreed.\n> > \n> >> Removing the pfrees in V2 attached.\n> > Ah, that sound good.\n> > \tif (!entry_ref)\n> > +\t{\n> > \t\tentry_ref = pgstat_fetch_pending_entry(PGSTAT_KIND_RELATION,\n> > \t\tInvalidOid, rel_id);\n> > +\t\treturn tablestatus;\n> > +\t}\n> > We should return something if the call returns a non-null value?\n> \n> What we do is: if entry_ref is NULL then we return NULL (so that the\n> caller returns 0).\n> \n> If entry_ref is not NULL then we return a copy of entry_ref->pending\n> (with or without subtrans).\n\nIsn't it ignoring the second call to pgstat_fetch_pending_entry?\n\nWhat the code did is: if entry_ref is NULL for MyDatabaseId then we\n*retry* fetching a global (not database-wise) entry. 
If any global\nentry is found, return it (correctly entry_ref->pending) to the caller.\n\nThe current patch returns NULL when a global entry is found.\n\nI thought that we might be able to return entry_ref->pending since the\ncallers don't call pfree on the returned pointer, but it is not great\nthat we don't inform the callers if the returned memory can be safely\npfreed or not.\n\nThus what I have in mind is the following.\n\n> \tif (!entry_ref)\n> +\t{\n> \t\tentry_ref = pgstat_fetch_pending_entry(PGSTAT_KIND_RELATION,\n> \t\tInvalidOid, rel_id);\n> +\t\tif (!entry_ref)\n> + return NULL;\n> +\t}\n\n\n\n> > So, since we want to hide the internal from pgstatfuncs, the\n> > additional flag should be gone. \n> \n> I think there is pros and cons for both but I don't have a strong\n> opinion about that.\n> \n> So also proposing V3 attached without the flag in case this is the\n> preferred approach.\n\nThat part looks good to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 14 Feb 2023 15:11:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 2/14/23 7:11 AM, Kyotaro Horiguchi wrote:\n> \n> Isn't it ignoring the second call to pgstat_fetch_pending_entry?\n> \n\nOh right, my bad (the issue has been introduced in V2).\nFixed in V4.\n\n> I thought that we might be able to return entry_ref->pending since the\n> callers don't call pfree on the returned pointer, but it is not great\n> that we don't inform the callers if the returned memory can be safely\n> pfreed or not.\n> \n> Thus what I have in mind is the following.\n> \n>> \tif (!entry_ref)\n>> +\t{\n>> \t\tentry_ref = pgstat_fetch_pending_entry(PGSTAT_KIND_RELATION,\n>> \t\tInvalidOid, rel_id);\n>> +\t\tif (!entry_ref)\n>> + return NULL;\n>> +\t}\n\nLGTM, done that way in V4.\n\n> \n> \n> \n>>> So, since we want to hide the internal from pgstatfuncs, the\n>>> additional flag should be gone.\n>>\n>> I think there is pros and cons for both but I don't have a strong\n>> opinion about that.\n>>\n>> So also proposing V3 attached without the flag in case this is the\n>> preferred approach.\n> \n> That part looks good to me.\n> \n\nThanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 14 Feb 2023 15:43:26 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "At Tue, 14 Feb 2023 15:43:26 +0100, \"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com> wrote in \n> Oh right, my bad (the issue has been introduced in V2).\n> Fixed in V4.\n\nGreat!\n\n> > I thought that we might be able to return entry_ref->pending since the\n> > callers don't call pfree on the returned pointer, but it is not great\n> > that we don't inform the callers if the returned memory can be safely\n> > pfreed or not.\n> > Thus what I have in mind is the following.\n> > \n> >> \tif (!entry_ref)\n> >> +\t{\n> >> \t\tentry_ref = pgstat_fetch_pending_entry(PGSTAT_KIND_RELATION,\n> >> \t\tInvalidOid, rel_id);\n> >> +\t\tif (!entry_ref)\n> >> + return NULL;\n> >> +\t}\n> \n> LGTM, done that way in V4.\n\nThat part looks good to me, thanks!\n\nI was going through v4 and it seems to me that the comment for\nfind_tabstat_entry may not be quite right.\n\n> * find any existing PgStat_TableStatus entry for rel\n> *\n> * Find any existing PgStat_TableStatus entry for rel_id in the current\n> * database. If not found, try finding from shared tables.\n> *\n> * If no entry found, return NULL, don't create a new one\n\nThe comment assumed that the function directly returns an entry from\nshared memory, but now it copies the entry's contents into a palloc'ed\nmemory and stores the sums of some counters for the current\ntransaction in it. Do you think we should update the comment to\nreflect this change?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 Feb 2023 09:56:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 2/15/23 1:56 AM, Kyotaro Horiguchi wrote:\n> At Tue, 14 Feb 2023 15:43:26 +0100, \"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com> wrote in\n> \n> The comment assumed that the function directly returns an entry from\n> shared memory, but now it copies the entry's contents into a palloc'ed\n> memory and stores the sums of some counters for the current\n> transaction in it. Do you think we should update the comment to\n> reflect this change?\n> \n\nGood point, thanks! Yeah, definitely, done in V5 attached.\n \nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 15 Feb 2023 09:21:48 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-15 09:21:48 +0100, Drouvot, Bertrand wrote:\n> diff --git a/src/backend/utils/activity/pgstat_relation.c b/src/backend/utils/activity/pgstat_relation.c\n> index f793ac1516..b26e2a5a7a 100644\n> --- a/src/backend/utils/activity/pgstat_relation.c\n> +++ b/src/backend/utils/activity/pgstat_relation.c\n> @@ -471,20 +471,46 @@ pgstat_fetch_stat_tabentry_ext(bool shared, Oid reloid)\n> * Find any existing PgStat_TableStatus entry for rel_id in the current\n> * database. If not found, try finding from shared tables.\n> *\n> + * If an entry is found, copy it and increment the copy's counters with their\n> + * subtransactions counterparts. Then return the copy. There is no need for the\n> + * caller to pfree the copy as the MemoryContext will be reset soon after.\n> + *\n\nThe \"There is no need\" bit seems a bit off. Yes, that's true for the current\ncallers, but who says that it has to stay that way?\n\nOtherwise this looks ready, on a casual scan.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Feb 2023 13:21:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 2/16/23 10:21 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2023-02-15 09:21:48 +0100, Drouvot, Bertrand wrote:\n>> diff --git a/src/backend/utils/activity/pgstat_relation.c b/src/backend/utils/activity/pgstat_relation.c\n>> index f793ac1516..b26e2a5a7a 100644\n>> --- a/src/backend/utils/activity/pgstat_relation.c\n>> +++ b/src/backend/utils/activity/pgstat_relation.c\n>> @@ -471,20 +471,46 @@ pgstat_fetch_stat_tabentry_ext(bool shared, Oid reloid)\n>> * Find any existing PgStat_TableStatus entry for rel_id in the current\n>> * database. If not found, try finding from shared tables.\n>> *\n>> + * If an entry is found, copy it and increment the copy's counters with their\n>> + * subtransactions counterparts. Then return the copy. There is no need for the\n>> + * caller to pfree the copy as the MemoryContext will be reset soon after.\n>> + *\n> \n> The \"There is no need\" bit seems a bit off. Yes, that's true for the current\n> callers, but who says that it has to stay that way?\n> \n\nGood point. Wording has been changed in V6 attached.\n\n> Otherwise this looks ready, on a casual scan.\n> \n\nThanks for having looked at it!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 6 Mar 2023 08:33:15 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 08:33:15AM +0100, Drouvot, Bertrand wrote:\n> Thanks for having looked at it!\n\nLooking at that, I have a few comments.\n\n+ tabentry = (PgStat_TableStatus *) entry_ref->pending;\n+ tablestatus = palloc(sizeof(PgStat_TableStatus));\n+ *tablestatus = *tabentry;\n+\n[...]\n+ for (trans = tabentry->trans; trans != NULL; trans = trans->upper)\n+ {\n+ tablestatus->t_counts.t_tuples_inserted += trans->tuples_inserted;\n+ tablestatus->t_counts.t_tuples_updated += trans->tuples_updated;\n+ tablestatus->t_counts.t_tuples_deleted += trans->tuples_deleted;\n+ }\n \n- if (entry_ref)\n- return entry_ref->pending;\n- return NULL;\n+ return tablestatus;\n\nFrom what I get with this change, the number of tuples changed by DMLs\nhave their computations done a bit earlier, meaning that it would make\nall the callers of find_tabstat_entry() pay the computation cost.\nStill it is not really going to matter, because we will just do the\ncomputation once when looking at any pending changes of\npg_stat_xact_all_tables for each entry. There are 9 callers of\nfind_tabstat_entry, with 7 being used for pg_stat_xact_all_tables.\nHow much do we need to care about the remaining two callers\npg_stat_get_xact_blocks_fetched() and pg_stat_get_xact_blocks_hit()?\nCould it be a problem if these two also pay the extra computation cost\nif a transaction with many subtransactions needs to look at their\ndata? These two are used nowhere, they have pg_proc entries and they\nare undocumented, so it is hard to say the impact of this change on\nthem..\n\nSecond question: once the data from the subtransactions is copied,\nwould it be cleaner to set trans to NULL after the data copy is done?\n\nIt would feel a bit safer to me to document that find_tabstat_entry()\nis currently only used for this xact system view.. The extra\ncomputation could lead to surprises, actually, if this routine is used\noutside this context? 
Perhaps that's OK, but it does not give me a\nwarm feeling, just to reshape three functions of pgstatfuncs.c with\nmacros.\n--\nMichael",
"msg_date": "Thu, 16 Mar 2023 15:29:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 3/16/23 7:29 AM, Michael Paquier wrote:\n> On Mon, Mar 06, 2023 at 08:33:15AM +0100, Drouvot, Bertrand wrote:\n>> Thanks for having looked at it!\n> \n> Looking at that, I have a few comments.\n> \n> + tabentry = (PgStat_TableStatus *) entry_ref->pending;\n> + tablestatus = palloc(sizeof(PgStat_TableStatus));\n> + *tablestatus = *tabentry;\n> +\n> [...]\n> + for (trans = tabentry->trans; trans != NULL; trans = trans->upper)\n> + {\n> + tablestatus->t_counts.t_tuples_inserted += trans->tuples_inserted;\n> + tablestatus->t_counts.t_tuples_updated += trans->tuples_updated;\n> + tablestatus->t_counts.t_tuples_deleted += trans->tuples_deleted;\n> + }\n> \n> - if (entry_ref)\n> - return entry_ref->pending;\n> - return NULL;\n> + return tablestatus;\n> \n> From what I get with this change, the number of tuples changed by DMLs\n> have their computations done a bit earlier,\n\nThanks for looking at it!\n\nRight, but note this is in a dedicated new tablestatus (created within find_tabstat_entry()).\n\n> meaning that it would make\n> all the callers of find_tabstat_entry() pay the computation cost.\n\nRight. Another suggested approach was to add a flag but then we'd not really\nhide the internal from pgstatfuncs.\n\n> Still it is not really going to matter, because we will just do the\n> computation once when looking at any pending changes of\n> pg_stat_xact_all_tables for each entry. 
\n\nYes.\n\n> There are 9 callers of\n> find_tabstat_entry, with 7 being used for pg_stat_xact_all_tables.\n\nRight, those are:\n\npg_stat_get_xact_numscans()\n\npg_stat_get_xact_tuples_returned()\n\npg_stat_get_xact_tuples_fetched()\n\npg_stat_get_xact_tuples_inserted()\n\npg_stat_get_xact_tuples_updated()\n\npg_stat_get_xact_tuples_deleted()\n\npg_stat_get_xact_tuples_hot_updated()\n\n> How much do we need to care about the remaining two callers\n> pg_stat_get_xact_blocks_fetched() and pg_stat_get_xact_blocks_hit()?\n\nRegarding pg_stat_get_xact_blocks_fetched() and pg_stat_get_xact_blocks_hit()\nthe callers (if any) are outside of the core PG (as from what I can see they are not used\nat all).\n\nI don't think we should pay any particular attention to those 2 ones as anyway nothing\nprevent the 7 others to be called outside of the pg_stat_xact_all_tables view.\n\n> Could it be a problem if these two also pay the extra computation cost\n> if a transaction with many subtransactions (aka )needs to look at their\n> data? These two are used nowhere, they have pg_proc entries and they\n> are undocumented, so it is hard to say the impact of this change on\n> them..\n> \n\nRight, and that's the same for the 7 others as nothing prevent them to be called outside\nof the pg_stat_xact_all_tables view.\n\nDo you think it would be better to add the extra flag then?\n\n> Second question: once the data from the subtransactions is copied,\n> would it be cleaner to set trans to NULL after the data copy is done?\n> \n\nThat would not hurt but I'm not sure it's worth it (well, it's currently\nnot done in pg_stat_get_xact_tuples_inserted() for example).\n\n> It would feel a bit safer to me to document that find_tabstat_entry()\n> is currently only used for this xact system view.. The extra\n> computation could lead to surprises, actually, if this routine is used\n> outside this context? 
Perhaps that's OK, but it does not give me a\n> warm feeling, just to reshape three functions of pgstatfuncs.c with\n> macros.\n\nThat's a fair point. On the other hand those 9 functions (which can all be used outside\nof the pg_stat_xact_all_tables view) are not documented, so I'm not sure this is that much of\na concern (and if we think it is we still have the option to add an extra flag to indicate whether\nor not the extra computation is needed.)\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Mar 2023 11:32:56 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 11:32:56AM +0100, Drouvot, Bertrand wrote:\n> On 3/16/23 7:29 AM, Michael Paquier wrote:\n>> From what I get with this change, the number of tuples changed by DMLs\n>> have their computations done a bit earlier,\n> \n> Thanks for looking at it!\n> \n> Right, but note this is in a dedicated new tablestatus (created\n> within find_tabstat_entry()).\n\nSure, however it copies the pointer of the PgStat_TableXactStatus from\ntabentry, isn't it? This means that it keeps a reference of the chain\nof subtransactions. It does not matter for the functions but it could\nfor out-of-core callers of find_tabstat_entry(), no? Perhaps you are\nright and that's not worth worrying, still I don't feel particularly\nconfident that this is the best approach we can take.\n\n>> How much do we need to care about the remaining two callers\n>> pg_stat_get_xact_blocks_fetched() and pg_stat_get_xact_blocks_hit()?\n> \n> Regarding pg_stat_get_xact_blocks_fetched() and pg_stat_get_xact_blocks_hit()\n> the callers (if any) are outside of the core PG (as from what I can\n> see they are not used at all).\n> \n> I don't think we should pay any particular attention to those 2 ones\n> as anyway nothing prevent the 7 others to be called outside of the\n> pg_stat_xact_all_tables view.\n\nI am not quite sure, TBH. Did you look at the difference with a long\nchain of subtrans, like savepoints? The ODBC driver \"loves\" producing\na lot of savepoints, for example.\n\n>> It would feel a bit safer to me to document that find_tabstat_entry()\n>> is currently only used for this xact system view.. The extra\n>> computation could lead to surprises, actually, if this routine is used\n>> outside this context? Perhaps that's OK, but it does not give me a\n>> warm feeling, just to reshape three functions of pgstatfuncs.c with\n>> macros.\n> \n> That's a fair point. 
On the other hand those 9 functions (which can\n> all be used outside of the pg_stat_xact_all_tables view) are not\n> documented, so I'm not sure this is that much of a concern (and if\n> we think it is we still gave the option to add an extra flag to\n> indicate whether or not the extra computation is needed.)\n\nThat's not quite exact, I think. The first 7 functions are used in a\nsystem catalog that is documented. Still we have a problem here. I\ncan actually see a few projects relying on these two functions while\nlooking a bit around, so they are used. And the issue comes from\nddfc2d9, that has removed these functions from the documentation\nignoring that they are used in no system catalogs. I think that we\nshould fix that and re-add the two missing functions with a proper\ndescription in the docs, at least? There is no trace of them.\nPerhaps the ones exposed through pg_stat_xact_all_tables are fine if\nnot listed.\n--\nMichael",
"msg_date": "Thu, 16 Mar 2023 20:46:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On 3/16/23 12:46 PM, Michael Paquier wrote:\n> On Thu, Mar 16, 2023 at 11:32:56AM +0100, Drouvot, Bertrand wrote:\n>> On 3/16/23 7:29 AM, Michael Paquier wrote:\n>>> From what I get with this change, the number of tuples changed by DMLs\n>>> have their computations done a bit earlier,\n>>\n>> Thanks for looking at it!\n>>\n>> Right, but note this is in a dedicated new tablestatus (created\n>> within find_tabstat_entry()).\n> \n> Sure, however it copies the pointer of the PgStat_TableXactStatus from\n> tabentry, isn't it? \n\nOh I see what you mean, yeah, the pointer is copied.\n\n> This means that it keeps a reference of the chain\n> of subtransactions. It does not matter for the functions but it could\n> for out-of-core callers of find_tabstat_entry(), no?\n\nYeah, maybe.\n\n> Perhaps you are\n> right and that's not worth worrying, still I don't feel particularly\n> confident that this is the best approach we can take.\n> \n\ndue to what potential out-of-core callers could do with it?\n\n>>> How much do we need to care about the remaining two callers\n>>> pg_stat_get_xact_blocks_fetched() and pg_stat_get_xact_blocks_hit()?\n>>\n>> Regarding pg_stat_get_xact_blocks_fetched() and pg_stat_get_xact_blocks_hit()\n>> the callers (if any) are outside of the core PG (as from what I can\n>> see they are not used at all).\n>>\n>> I don't think we should pay any particular attention to those 2 ones\n>> as anyway nothing prevent the 7 others to be called outside of the\n>> pg_stat_xact_all_tables view.\n> \n> I am not quite sure, TBH. Did you look at the difference with a long\n> chain of subtrans, like savepoints? The ODBC driver \"loves\" producing\n> a lot of savepoints, for example.\n> \n\nNo, I did not measure the impact.\n\n>>> It would feel a bit safer to me to document that find_tabstat_entry()\n>>> is currently only used for this xact system view.. 
The extra\n>>> computation could lead to surprises, actually, if this routine is used\n>>> outside this context? Perhaps that's OK, but it does not give me a\n>>> warm feeling, just to reshape three functions of pgstatfuncs.c with\n>>> macros.\n>>\n>> That's a fair point. On the other hand those 9 functions (which can\n>> all be used outside of the pg_stat_xact_all_tables view) are not\n>> documented, so I'm not sure this is that much of a concern (and if\n>> we think it is we still have the option to add an extra flag to\n>> indicate whether or not the extra computation is needed.)\n> \n> That's not quite exact, I think. The first 7 functions are used in a\n> system catalog that is documented. \n\nRight.\n\n> Still we have a problem here. I\n> can actually see a few projects relying on these two functions while\n> looking a bit around, so they are used. And the issue comes from\n> ddfc2d9, which has removed these functions from the documentation\n> ignoring that they are used in no system catalogs. I think that we\n> should fix that and re-add the two missing functions with a proper\n> description in the docs, at least? \n\nAs they could be/are used outside of the xact view, yes I think the same.\n\n> There is no trace of them.\n> Perhaps the ones exposed through pg_stat_xact_all_tables are fine if\n> not listed.\n\nI'd be tempted to add documentation for all of them; I can look at it.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Mar 2023 14:13:45 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
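Michael's concern above — that the PgStat_TableStatus returned by find_tabstat_entry() keeps pointing at the live chain of subtransaction data — boils down to a C struct assignment being a shallow copy. A minimal sketch of that hazard, using hypothetical simplified types (these are illustrative stand-ins, not the real pgstat structures):

```c
#include <stddef.h>

/* Hypothetical, simplified stand-ins for PgStat_TableXactStatus and
 * PgStat_TableStatus; the field names here are illustrative only. */
typedef struct SubXactLevel
{
    long        tuples_inserted;
    struct SubXactLevel *upper;  /* next level up in the subxact chain */
} SubXactLevel;

typedef struct TabStatus
{
    long        inserted_total;
    SubXactLevel *trans;         /* head of the live subxact chain */
} TabStatus;

/* A struct assignment copies the counters but only the *pointer* to the
 * chain: the copy and the original keep referencing the same nodes, so a
 * caller mutating through the copy silently touches the pending entry. */
TabStatus
shallow_copy(const TabStatus *src)
{
    return *src;                 /* counters duplicated, chain shared */
}
```

The current in-core callers only read the counters, which is why this works today; the sketch just shows why an out-of-core caller writing through the copy would be surprised.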
{
"msg_contents": "On Thu, Mar 16, 2023 at 02:13:45PM +0100, Drouvot, Bertrand wrote:\n> On 3/16/23 12:46 PM, Michael Paquier wrote:\n>> There is no trace of them.\n>> Perhaps the ones exposed through pg_stat_xact_all_tables are fine if\n>> not listed.\n> \n> I'd be tempted to add documentation for all of them, I can look at it.\n\nI am not sure that there is any need to completely undo ddfc2d9, later\nsimplified by 5f2b089, so my opinion would be to just add\ndocumentation for the functions that can be used but appear in none\nof the system views. \n\nAnyway, double-checking, I only see an inconsistency for these two,\nconfirming my first impression:\n- pg_stat_get_xact_blocks_fetched\n- pg_stat_get_xact_blocks_hit\n\nThere may be a point in having them in some of the system views, but\nthe non-xact flavors are only used in the statio views, which don't\nreally need xact versions AFAIK. I am not sure that it makes much\nsense to add them in pg_stat_xact_all_tables, either. Another option is\njust to remove them, though some could rely on them externally. At the\nend, documenting both still sounds like the best move to me.\n--\nMichael",
"msg_date": "Mon, 20 Mar 2023 08:43:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 3/20/23 12:43 AM, Michael Paquier wrote:\n> At the\n> end, documenting both still sounds like the best move to me.\n\nAgree.\n\nPlease find attached v1-0001-pg_stat_get_xact_blocks_fetched-and_hit-doc.patch doing so.\n\nI did not put the exact same wording as the one being removed in ddfc2d9, as:\n\n- For pg_stat_get_xact_blocks_hit(), I think it's better to stay closer to the\npg_statio_all_tables.heap_blks_hit definition.\n\n- For pg_stat_get_xact_blocks_fetched(), I think that using \"buffer\" is better (than block) as in the\nend it's related to pgstat_count_buffer_read().\n\nIn the end there is a wording choice to be made for both between block and buffer. Indeed their\ncounters get incremented in \"buffer\" macros while being retrieved in those \"blocks\" functions.\n\n\"Buffer\" sounds more appropriate to me, so the attached has been done that way.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 20 Mar 2023 11:57:31 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 11:57:31AM +0100, Drouvot, Bertrand wrote:\n> \"Buffer\" sounds more appropriate to me, so the attached has been done that way.\n\nThis choice is OK for me.\n\n> + <indexterm>\n> + <primary>pg_stat_get_xact_blocks_fetched</primary>\n> + </indexterm>\n> + <function>pg_stat_get_xact_blocks_fetched</function> ( <type>oid</type> )\n> + <returnvalue>bigint</returnvalue>\n> + </para>\n> + <para>\n> + Returns the number of buffer fetches for table or index, in the current transaction\n\nThis should be \"number of buffer fetched\", no?\n\n> + </indexterm>\n> + <function>pg_stat_get_xact_blocks_hit</function> ( <type>oid</type> )\n> + <returnvalue>bigint</returnvalue>\n> + </para>\n> + <para>\n> + Returns the number of buffer hits for table or index, in the current transaction\n> + </para></entry>\n\nThis one looks OK to me too.\n--\nMichael",
"msg_date": "Wed, 22 Mar 2023 10:16:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "At Wed, 22 Mar 2023 10:16:12 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Mar 20, 2023 at 11:57:31AM +0100, Drouvot, Bertrand wrote:\n> > \"Buffer\" sounds more appropriate to me, so the attached has been done that way.\n> \n> This choice is OK for me.\n> \n> > + <indexterm>\n> > + <primary>pg_stat_get_xact_blocks_fetched</primary>\n> > + </indexterm>\n> > + <function>pg_stat_get_xact_blocks_fetched</function> ( <type>oid</type> )\n> > + <returnvalue>bigint</returnvalue>\n> > + </para>\n> > + <para>\n> > + Returns the number of buffer fetches for table or index, in the current transaction\n> \n> This should be \"number of buffer fetched\", no?\n\nIn the original description, \"buffer fetches\" appears to be a plural\nform of a compound noun and correct, similar to \"buffer hits\"\nmentioned later. If we reword it, I think it should be \"number of\nbuffers fetched\".\n\n> > + </indexterm>\n> > + <function>pg_stat_get_xact_blocks_hit</function> ( <type>oid</type> )\n> > + <returnvalue>bigint</returnvalue>\n> > + </para>\n> > + <para>\n> > + Returns the number of buffer hits for table or index, in the current transaction\n> > + </para></entry>\n> \n> This one looks OK to me too.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 22 Mar 2023 11:37:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 11:37:03AM +0900, Kyotaro Horiguchi wrote:\n> In the original description, \"buffer fetches\" appears to be a plural\n> form of a compound noun and correct, similar to \"buffer hits\"\n> mentioned later. If we reword it, I think it should be \"number of\n> buffers fetched\".\n\nUsing the plural makes sense, yes.\n--\nMichael",
"msg_date": "Wed, 22 Mar 2023 13:45:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 3/22/23 5:45 AM, Michael Paquier wrote:\n> On Wed, Mar 22, 2023 at 11:37:03AM +0900, Kyotaro Horiguchi wrote:\n>> In the original description, \"buffer fetches\" appears to be a plural\n>> form of a compound noun and correct, similar to \"buffer hits\"\n>> mentioned later. If we reword it, I think it should be \"number of\n>> buffers fetched\".\n> \n> Using the plural makes sense, yes.\n\nYeah, \"buffer fetches\" is similar to \"buffer hits\".\n\nFor consistency, ISTM that renaming it to \"buffers fetched\" would also mean\nrenaming \"buffer hits\" to \"buffers hit\". But then it would not be consistent\nwith the documentation for things like pg_statio_all_tables.heap_blks_hit, idx_blks_hit, toast_blks_hit,\ntidx_blks_hit, pg_statio_all_indexes.idx_blks_hit, pg_statio_all_sequences.blks_hit\n(where \"Number of buffer hits\" is used).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 22 Mar 2023 07:44:23 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 3/22/23 7:44 AM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 3/22/23 5:45 AM, Michael Paquier wrote:\n>> On Wed, Mar 22, 2023 at 11:37:03AM +0900, Kyotaro Horiguchi wrote:\n>>> In the original description, \"buffer fetches\" appears to be a plural\n>>> form of a compound noun and correct, similar to \"buffer hits\"\n>>> mentioned later. If we reword it, I think it should be \"number of\n>>> buffers fetched\".\n>>\n>> Using the plural makes sense, yes.\n> \n> Yeah, \"buffer fetches\" is similar to \"buffer hits\".\n> \n> For consistency, ISTM that renaming it to \"buffers fetched\" would also mean\n> renaming \"buffer hits\" to \"buffers hit\". But then it would not be consistent\n> with the documentation for things like pg_statio_all_tables.heap_blks_hit, idx_blks_hit, toast_blks_hit,\n> tidx_blks_hit, pg_statio_all_indexes.idx_blks_hit, pg_statio_all_sequences.blks_hit\n> (where \"Number of buffer hits\" is used).\n> \n\nThat said, please find enclosed V2 with \"buffers fetched\" suggested above (and no changes to\n\"buffer hits\" to keep consistency with the other part of the documentation mentioned up-thread).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 22 Mar 2023 09:20:25 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 09:20:25AM +0100, Drouvot, Bertrand wrote:\n> That said, please find enclosed V2 with \"buffers fetched\" suggested\n> above (and no changes to \"buffer hits\" to keep consistency with the\n> other part of the documentation mentioned up-thread).\n\nThanks. Applied and backpatched that. In 12 and 11, the style of the\ntable for these functions was a bit different.\n--\nMichael",
"msg_date": "Wed, 22 Mar 2023 18:34:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 6:58 AM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> Hi,\n>\n> On 3/20/23 12:43 AM, Michael Paquier wrote:\n> > At the\n> > end, documenting both still sounds like the best move to me.\n>\n> Agree.\n>\n> Please find attached v1-0001-pg_stat_get_xact_blocks_fetched-and_hit-doc.patch doing so.\n>\n> I did not put the exact same wording as the one being removed in ddfc2d9, as:\n>\n> - For pg_stat_get_xact_blocks_hit(), I think it's better to be closer to say the\n> pg_statio_all_tables.heap_blks_hit definition.\n>\n> - For pg_stat_get_xact_blocks_fetched(), I think that using \"buffer\" is better (than block) as at the\n> end it's related to pgstat_count_buffer_read().\n>\n> At the end there is a choice to be made for both for the wording between block and buffer. Indeed their\n> counters get incremented in \"buffer\" macros while retrieved in those \"blocks\" functions.\n>\n> \"Buffer\" sounds more appropriate to me, so the attached has been done that way.\n\nApologies as I know this docs update has already been committed, but\nbuffers fetched and blocks fetched both feel weird to me. If you have a\ncache hit, you don't end up really \"fetching\" anything at all (since\npgstat_count_buffer_read() is called before ReadBuffer_common() and we\ndon't know if it is a hit or miss yet). And, I would normally associate\nfetching with fetching a block into a buffer. It seems like this counter\nis really reflecting the number of buffers acquired or used.\n\ntuples_fetched makes more sense because a tuple is \"fetched\" into a\nslot.\n\nThis isn't really the fault of this patch since that member was already\ncalled blocks_fetched.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 22 Mar 2023 14:21:12 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 02:21:12PM -0400, Melanie Plageman wrote:\n> Apologies as I know this docs update has already been committed, but\n> buffers fetched and blocks fetched both feel weird to me. If you have a\n> cache hit, you don't end up really \"fetching\" anything at all (since\n> pgstat_count_buffer_read() is called before ReadBuffer_common() and we\n> don't know if it is a hit or miss yet). And, I would normally associate\n> fetching with fetching a block into a buffer. It seems like this counter\n> is really reflecting the number of buffers acquired or used.\n\nWell, it is the number of times we've requested a block read, though\nit may not actually be a read if the block was in the cache already.\n\n> This isn't really the fault of this patch since that member was already\n> called blocks_fetched.\n\nThe original documentation of these functions added by 46aa77c refers\nto \"block fetch requests\" and \"block requests found in cache\", so that\nwould not be right either based on your opinion here. If you find\n\"fetch\" to be incorrect in this context, here is another idea:\n- \"block read requests\" for blocks_fetched().\n- \"block read requested but actually found in cache\" for blocks_hit().\n\nAll the system views care only about the difference between both\ncounters to count the number of physical reads really done.\n--\nMichael",
"msg_date": "Thu, 23 Mar 2023 07:42:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
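The fetched/hit bookkeeping discussed here can be illustrated with a toy model: the "fetched" counter is bumped on every block read request before we know whether it is a hit, the "hit" counter only when the block turned out to be cached, so fetched minus hit is the number of read() calls actually issued. A hedged sketch of that accounting — not the real pgstat_count_buffer_read()/pgstat_count_buffer_hit() macros, whose names it only mimics:

```c
/* Toy stand-in for the per-table I/O counters discussed in the thread. */
typedef struct BlockCounters
{
    long blocks_fetched;  /* block read requests, hit or miss */
    long blocks_hit;      /* requests satisfied from the buffer cache */
} BlockCounters;

/* Counted before we know whether the block is cached, mirroring how the
 * fetch counter is incremented ahead of the actual buffer lookup. */
void
count_block_read(BlockCounters *c)
{
    c->blocks_fetched++;
}

/* Counted only once the lookup turned out to be a cache hit. */
void
count_block_hit(BlockCounters *c)
{
    c->blocks_hit++;
}

/* What the statio views report as *_blks_read: fetched minus hit. */
long
blocks_read(const BlockCounters *c)
{
    return c->blocks_fetched - c->blocks_hit;
}

/* Drive the counters: 'requests' lookups, of which the first 'hits'
 * are pretended to be cache hits. */
long
simulate(BlockCounters *c, int requests, int hits)
{
    for (int i = 0; i < requests; i++)
    {
        count_block_read(c);
        if (i < hits)
            count_block_hit(c);
    }
    return blocks_read(c);
}
```

Note that, as the removed docs said, the resulting "reads" are kernel read() calls, not necessarily physical disk reads, since the kernel cache sits below this accounting.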
{
"msg_contents": "On Thu, Mar 16, 2023 at 02:13:45PM +0100, Drouvot, Bertrand wrote:\n> On 3/16/23 12:46 PM, Michael Paquier wrote:\n>>> I don't think we should pay any particular attention to those 2 ones\n>>> as anyway nothing prevent the 7 others to be called outside of the\n>>> pg_stat_xact_all_tables view.\n>> \n>> I am not quite sure, TBH. Did you look at the difference with a long\n>> chain of subtrans, like savepoints? The ODBC driver \"loves\" producing\n>> a lot of savepoints, for example.\n> \n> No, I did not measure the impact.\n\nI have been thinking again about this particular point, and I would be\nfine with an additional boolean flag to compute the subtrans data in\nthe global counter only when necessary. This would not make the\nmacros for the stat functions that much more complicated, while being\nsure that we don't do unnecessary computations when we know that we\ndon't need them.\n--\nMichael",
"msg_date": "Sat, 25 Mar 2023 11:56:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
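The flag Michael suggests could look roughly as follows: the lookup helper walks the pending subtransaction levels and folds their counts into the returned total only when the caller asks for it. This is a speculative sketch with made-up names and simplified types, not the actual find_tabstat_entry() signature or the real pgstat structures:

```c
#include <stddef.h>

/* Hypothetical, simplified types; not the real pgstat structures. */
typedef struct XactLevel
{
    long        tuples_inserted;
    struct XactLevel *upper;     /* next (sub)transaction level up */
} XactLevel;

typedef struct TabEntry
{
    long        tuples_inserted; /* counts outside the pending chain */
    XactLevel  *trans;           /* chain of pending (sub)xact levels */
} TabEntry;

/* Fold the pending subxact counts into the total only when asked to,
 * so callers that don't need xact-level numbers skip the chain walk. */
long
tuples_inserted_total(const TabEntry *entry, int include_subxacts)
{
    long total = entry->tuples_inserted;

    if (include_subxacts)
        for (const XactLevel *lv = entry->trans; lv != NULL; lv = lv->upper)
            total += lv->tuples_inserted;
    return total;
}
```

With such a gate, only the pg_stat_xact_* paths would pay for long savepoint chains; as Andres points out next, though, that cost is negligible next to query evaluation anyway.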
{
"msg_contents": "Hi,\n\nOn 2023-03-25 11:56:22 +0900, Michael Paquier wrote:\n> On Thu, Mar 16, 2023 at 02:13:45PM +0100, Drouvot, Bertrand wrote:\n> > On 3/16/23 12:46 PM, Michael Paquier wrote:\n> >>> I don't think we should pay any particular attention to those 2 ones\n> >>> as anyway nothing prevent the 7 others to be called outside of the\n> >>> pg_stat_xact_all_tables view.\n> >> \n> >> I am not quite sure, TBH. Did you look at the difference with a long\n> >> chain of subtrans, like savepoints? The ODBC driver \"loves\" producing\n> >> a lot of savepoints, for example.\n> > \n> > No, I did not measure the impact.\n> \n> I have been thinking again about this particular point, and I would be\n> fine with an additional boolean flag to compute the subtrans data in\n> the global counter only when necessary. This would not make the\n> macros for the stat functions that much more complicated, while being\n> sure that we don't do unnecessary computations when we know that we\n> don't need them.\n\nI don't understand what we're optimizing for here. These functions are very\nvery very far from being a hot path. The xact functions are barely ever\nused. Compared to the cost of query evaluation the cost of iterating through\nthe subxacts is negligible.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 24 Mar 2023 20:00:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 08:00:44PM -0700, Andres Freund wrote:\n> I don't understand what we're optimizing for here. These functions are very\n> very very far from being a hot path. The xact functions are barely ever\n> used. Compared to the cost of query evaluation the cost of iterating through\n> the subxacts is negligible.\n\nI was wondering about that, and I see why I'm wrong. I have quickly\ngone up to 10k subtransactions, and while I was seeing what looks like\na difference of 8~10% in runtime when looking at\npg_stat_xact_all_tables, the overall runtime was still close enough\n(5.8ms vs 6.4ms). At this scale, it is possible that it was some noise;\nthese differences seemed repeatable but are still nothing to worry about.\n\nAnyway, I was looking at this patch, and I still feel that it is a bit\nincorrect to have the copy of PgStat_TableStatus returned by\nfind_tabstat_entry() point to the same list of subtransaction data\nas the pending entry found, while the counters are incremented. This\ncould lead to mistakes if the copy from find_tabstat_entry() is used\nin an unexpected way in the future. The current callers are OK, but\nthis does not give me a warm feeling :/\n--\nMichael",
"msg_date": "Mon, 27 Mar 2023 15:35:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 6:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Mar 22, 2023 at 02:21:12PM -0400, Melanie Plageman wrote:\n> > Apologies as I know this docs update has already been committed, but\n> > buffers fetched and blocks fetched both feel weird to me. If you have a\n> > cache hit, you don't end up really \"fetching\" anything at all (since\n> > pgstat_count_buffer_read() is called before ReadBuffer_common() and we\n> > don't know if it is a hit or miss yet). And, I would normally associate\n> > fetching with fetching a block into a buffer. It seems like this counter\n> > is really reflecting the number of buffers acquired or used.\n>\n> Well, it is the number of times we've requested a block read, though\n> it may not actually be a read if the block was in the cache already.\n>\n> > This isn't really the fault of this patch since that member was already\n> > called blocks_fetched.\n>\n> The original documentation of these functions added by 46aa77c refers\n> to \"block fetch requests\" and \"block requests found in cache\", so that\n> would not be right either based on your opinion here. If you find\n> \"fetch\" to be incorrect in this context, here is another idea:\n> - \"block read requests\" for blocks_fetched().\n> - \"block read requested but actually found in cache\" for blocks_hit().\n\nI do like/prefer \"block read requests\" and\n\"blocks requested found in cache\"\nThough, now I fear my initial complaint may have been a bit pedantic.\n\n- Melanie\n\n\n",
"msg_date": "Mon, 27 Mar 2023 10:24:46 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 10:24:46AM -0400, Melanie Plageman wrote:\n> I do like/prefer \"block read requests\" and\n> \"blocks requested found in cache\"\n> Though, now I fear my initial complaint may have been a bit pedantic.\n\nThat's fine. Let's ask for extra opinions, then.\n\nSo, have others an opinion to share here?\n--\nMichael",
"msg_date": "Tue, 28 Mar 2023 07:38:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "At Tue, 28 Mar 2023 07:38:25 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Mar 27, 2023 at 10:24:46AM -0400, Melanie Plageman wrote:\n> > I do like/prefer \"block read requests\" and\n> > \"blocks requested found in cache\"\n> > Though, now I fear my initial complaint may have been a bit pedantic.\n> \n> That's fine. Let's ask for extra opinions, then.\n> \n> So, have others an opinion to share here?\n\nI do not have a strong preference for the wording, but consistency is\nimportant. IMHO simply swapping out a few words won't really improve\nthings.\n\nI found that commit ddfc2d9a37 removed the descriptions for\npg_stat_get_blocks_fetched and pg_stat_get_blocks_hit. Right before\nthat commit, monitoring.sgml had these lines:\n\n- <function>pg_stat_get_blocks_fetched</function> minus\n- <function>pg_stat_get_blocks_hit</function> gives the number of kernel\n- <function>read()</> calls issued for the table, index, or\n- database; the number of actual physical reads is usually\n- lower due to kernel-level buffering. 
The <literal>*_blks_read</>\n- statistics columns use this subtraction, i.e., fetched minus hit.\n\nThe commit then added the following sentence to the description for\npg_statio_all_tables.heap_blks_read.\n\n> <entry>Number of disk blocks read from this table.\n> This value can also be returned by directly calling\n> the <function>pg_stat_get_blocks_fetched</function> and\n> <function>pg_stat_get_blocks_hit</function> functions and\n> subtracting the results.</entry>\n\nLater, in 5f2b089387 it was revised as:\n+ <entry>Number of disk blocks read in this database</entry>\n\nThis revision lost the explanation regarding the relationship among\nfetch, hit and read, as it became hidden in the views' definitions.\n\nAs a result, in the current state, it doesn't make sense to just add\na description for pg_stat_get_xact_blocks_fetched().\n\nThe confusion stems from the inconsistency between the views and\nunderlying functions related to block reads and hits. If we add\ndescriptions for the two functions, we should also explain their\nrelationship. Otherwise, it might be better to add the functions\npg_stat_*_blocks_read() instead.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 28 Mar 2023 12:36:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 12:36:15PM +0900, Kyotaro Horiguchi wrote:\n> I found that commit ddfc2d9a37 removed the descriptions for\n> pg_stat_get_blocks_fetched and pg_stat_get_blocks_hit. Right before\n> that commit, monitoring.sgml had these lines:\n> \n> - <function>pg_stat_get_blocks_fetched</function> minus\n> - <function>pg_stat_get_blocks_hit</function> gives the number of kernel\n> - <function>read()</> calls issued for the table, index, or\n> - database; the number of actual physical reads is usually\n> - lower due to kernel-level buffering. The <literal>*_blks_read</>\n> - statistics columns use this subtraction, i.e., fetched minus hit.\n> \n> The commit then added the following sentence to the description for\n> pg_statio_all_tables.heap_blks_read.\n>\n> Later, in 5f2b089387 it was revised as:\n> + <entry>Number of disk blocks read in this database</entry>\n\nYeah, maybe adding something like that at the bottom of the table for\nstat functions, telling that the difference is the number of read()\ncalls, may help. Perhaps also adding a mention that these are used in\nnone of the existing system views.\n\n> The confusion stems from the inconsistency between the views and\n> underlying functions related to block reads and hits. If we add\n> descriptions for the two functions, we should also explain their\n> relationship. Otherwise, it might be better to add the functions\n> pg_stat_*_blocks_read() instead.\n\nI am not sure that we really need to get down to that as this holds\nthe same meaning as the current system views showing read as the\ndifference between fetched and hit.\n--\nMichael",
"msg_date": "Tue, 28 Mar 2023 14:23:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 3/28/23 7:23 AM, Michael Paquier wrote:\n> On Tue, Mar 28, 2023 at 12:36:15PM +0900, Kyotaro Horiguchi wrote:\n>> I found that commit ddfc2d9a37 removed the descriptions for\n>> pg_stat_get_blocks_fetched and pg_stat_get_blocks_hit. Right before\n>> that commit, monitoring.sgml had these lines:\n>>\n>> - <function>pg_stat_get_blocks_fetched</function> minus\n>> - <function>pg_stat_get_blocks_hit</function> gives the number of kernel\n>> - <function>read()</> calls issued for the table, index, or\n>> - database; the number of actual physical reads is usually\n>> - lower due to kernel-level buffering. The <literal>*_blks_read</>\n>> - statistics columns use this subtraction, i.e., fetched minus hit.\n>>\n>> The commit then added the following sentence to the description for\n>> pg_statio_all_tables.heap_blks_read.\n>>\n>> Later, in 5f2b089387 it was revised as:\n>> + <entry>Number of disk blocks read in this database</entry>\n> \n> Yeah, maybe adding something like that at the bottom of the table for\n> stat functions, telling that the difference is the number of read()\n> calls, may help. Perhaps also adding a mention that these are used in\n> none of the existing system views.\n> \n>> The confusion stems from the inconsistency between the views and\n>> underlying functions related to block reads and hits. If we add\n>> descriptions for the two functions, we should also explain their\n>> relationship. \n\nI agree that adding more explanation would help and avoid confusion.\n\nWhat about something like this?\n\nfor pg_stat_get_xact_blocks_fetched(): \"block read requests for table or index, in the current\ntransaction. 
This number minus pg_stat_get_xact_blocks_hit() gives the number of kernel\nread() calls.\"\n\npg_stat_get_xact_blocks_hit(): \"block read requests for table or index, in the current\ntransaction, found in cache (not triggering kernel read() calls)\".\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 28 Mar 2023 07:49:45 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 07:49:45AM +0200, Drouvot, Bertrand wrote:\n> What about something like this?\n> \n> for pg_stat_get_xact_blocks_fetched(): \"block read requests for table or index, in the current\n> transaction. This number minus pg_stat_get_xact_blocks_hit() gives the number of kernel\n> read() calls.\"\n> \n> pg_stat_get_xact_blocks_hit(): \"block read requests for table or index, in the current\n> transaction, found in cache (not triggering kernel read() calls)\".\n\nSomething along these lines within the table would also be OK by me.\nHoriguchi-san or Melanie-san, perhaps you have counter-proposals?\n--\nMichael",
"msg_date": "Tue, 28 Mar 2023 15:16:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "At Tue, 28 Mar 2023 15:16:36 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Mar 28, 2023 at 07:49:45AM +0200, Drouvot, Bertrand wrote:\n> > What about something like this?\n> > \n> > for pg_stat_get_xact_blocks_fetched(): \"block read requests for table or index, in the current\n> > transaction. This number minus pg_stat_get_xact_blocks_hit() gives the number of kernel\n> > read() calls.\"\n> > \n> > pg_stat_get_xact_blocks_hit(): \"block read requests for table or index, in the current\n> > transaction, found in cache (not triggering kernel read() calls)\".\n> \n> Something along these lines within the table would also be OK by me.\n> Horiguchi-san or Melanie-san, perhaps you have counter-proposals?\n\nNo. Fine by me, except that \"block read requests\" seems to suggest\nkernel read() calls, maybe because it's not clear to me whether \"block\"\nrefers to our buffer blocks or file blocks. If it is generally\nclear, I'm fine with the proposal.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 28 Mar 2023 17:43:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 05:43:26PM +0900, Kyotaro Horiguchi wrote:\n> No. Fine by me, except that \"block read requests\" seems to suggest\n> kernel read() calls, maybe because it's not clear whether \"block\"\n> refers to our buffer blocks or file blocks to me.. If it is generally\n> clear, I'm fine with the proposal.\n\nOkay. Would somebody like to draft a patch?\n--\nMichael",
"msg_date": "Wed, 29 Mar 2023 09:09:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 3/29/23 2:09 AM, Michael Paquier wrote:\n> On Tue, Mar 28, 2023 at 05:43:26PM +0900, Kyotaro Horiguchi wrote:\n>> No. Fine by me, except that \"block read requests\" seems to suggest\n>> kernel read() calls, maybe because it's not clear whether \"block\"\n>> refers to our buffer blocks or file blocks to me.. If it is generally\n>> clear, I'm fine with the proposal.\n> \n> Okay. Would somebody like to draft a patch?\n\nPlease find a draft attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 29 Mar 2023 07:44:20 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 07:44:20AM +0200, Drouvot, Bertrand wrote:\n> Please find a draft attached.\n\nThis addition looks OK for me. Thanks for the patch!\n--\nMichael",
"msg_date": "Tue, 4 Apr 2023 21:04:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Tue, Apr 04, 2023 at 09:04:34PM +0900, Michael Paquier wrote:\n> This addition looks OK for me. Thanks for the patch!\n\nOkay, finally done. One part that was still not complete to me in\nlight of the information ddfc2d9 has removed is that the number of\nphysical reads could be lower than the reported number depending on\nwhat the kernel cache holds. So I've added this sentence, while on\nit.\n--\nMichael",
"msg_date": "Wed, 5 Apr 2023 08:05:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 3/27/23 8:35 AM, Michael Paquier wrote:\n> On Fri, Mar 24, 2023 at 08:00:44PM -0700, Andres Freund wrote:\n>> I don't understand what we're optimizing for here. These functions are very\n>> very very far from being a hot path. The xact functions are barely ever\n>> used. Compared to the cost of query evaluation the cost of iterating throught\n>> he subxacts is neglegible.\n> \n> I was wondering about that, and I see why I'm wrong. I have quickly\n> gone up to 10k subtransactions, and while I was seeing what looks like\n> difference of 8~10% in runtime when looking at\n> pg_stat_xact_all_tables, the overval runtime was still close enough\n> (5.8ms vs 6.4ms). At this scale, possible that it was some noise,\n> these seemed repeatable still not to worry about.\n> \n> Anyway, I was looking at this patch, and I still feel that it is a bit\n> incorrect to have the copy of PgStat_TableStatus returned by\n> find_tabstat_entry() to point to the same list of subtransaction data\n> as the pending entry found, while the counters are incremented. This\n> could lead to mistakes if the copy from find_tabstat_entry() is used\n> in an unexpected way in the future. The current callers are OK, but\n> this does not give me a warm feeling :/\n\nFWIW, please find attached V7 (mandatory rebase).\n\nIt would allow to also define:\n\n- pg_stat_get_xact_tuples_inserted\n- pg_stat_get_xact_tuples_updated\n- pg_stat_get_xact_tuples_deleted\n\nas macros, joining others pg_stat_get_xact_*() that are already\ndefined as macros.\n\nThe concern you raised above has not been addressed, meaning that\nfind_tabstat_entry() still return a copy of PgStat_TableStatus.\n\nBy \"used in an unexpected way in the future\", what do you mean exactly? 
Do you mean\nthe caller forgetting it is working on a copy and then could work with\n\"stale\" counters?\n\nTrying to understand to see if I should invest time to try to address your concern\nor leave those 3 functions as they are currently before moving back to the\n\"Split index and table statistics into different types of stats\" work [1].\n\n\n[1]: https://www.postgresql.org/message-id/flat/f572abe7-a1bb-e13b-48c7-2ca150546822@gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 26 Oct 2023 10:04:25 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Thu, Oct 26, 2023 at 10:04:25AM +0200, Drouvot, Bertrand wrote:\n> By \"used in an unexpected way in the future\", what do you mean exactly? Do you mean\n> the caller forgetting it is working on a copy and then could work with\n> \"stale\" counters?\n\n(Be careful about the code indentation.)\n\nThe part that I found disturbing is here:\n+ tabentry = (PgStat_TableStatus *) entry_ref->pending;\n+ tablestatus = palloc(sizeof(PgStat_TableStatus));\n+ *tablestatus = *tabentry;\n\nThis causes tablestatus->trans to point to the same location as\ntabentry->trans, but wouldn't it be better to set tablestatus->trans\nto NULL instead for the copy returned to the caller?\n--\nMichael",
"msg_date": "Fri, 27 Oct 2023 15:07:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 10/27/23 8:07 AM, Michael Paquier wrote:\n> \n> The part that I found disturbing is here:\n> + tabentry = (PgStat_TableStatus *) entry_ref->pending;\n> + tablestatus = palloc(sizeof(PgStat_TableStatus));\n> + *tablestatus = *tabentry;\n> \n> This causes tablestatus->trans to point to the same location as\n> tabentry->trans, but wouldn't it be better to set tablestatus->trans\n> to NULL instead for the copy returned to the caller?\n\nOh I see, yeah I do agree to set tablestatus->trans to NULL to avoid\nany undesired interference with tabentry->trans.\n\nDone in V8 attached (pgindent has been run on pgstatfuncs.c and\npgstat_relation.c).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 27 Oct 2023 09:45:52 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 09:45:52AM +0200, Drouvot, Bertrand wrote:\n> Oh I see, yeah I do agree to set tablestatus->trans to NULL to avoid\n> any undesired interference with tabentry->trans.\n> \n> Done in V8 attached (pgindent has been run on pgstatfuncs.c and\n> pgstat_relation.c).\n\nLGTM.\n--\nMichael",
"msg_date": "Fri, 27 Oct 2023 16:50:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 09:45:52AM +0200, Drouvot, Bertrand wrote:\n> Done in V8 attached (pgindent has been run on pgstatfuncs.c and\n> pgstat_relation.c).\n\nAnd applied that after editing a bit the comments.\n--\nMichael",
"msg_date": "Mon, 30 Oct 2023 08:25:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On 2023-10-30 08:25:17 +0900, Michael Paquier wrote:\n> On Fri, Oct 27, 2023 at 09:45:52AM +0200, Drouvot, Bertrand wrote:\n> > Done in V8 attached (pgindent has been run on pgstatfuncs.c and\n> > pgstat_relation.c).\n> \n> And applied that after editing a bit the comments.\n\nThank you both!\n\n\n",
"msg_date": "Tue, 7 Nov 2023 20:24:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
}
] |
[
{
"msg_contents": "Hi,\n\n\nI was looking at bug mentioned at \nhttps://www.postgresql.org/message-id/flat/201010112055.o9BKtZf7011251%40wwwmaster.postgresql.org\n\nIssue appears to be in gbt_inet_compress which doesn't store inet \ndetails like ip_family and netmask details in inetKEY and\n\ngbt_inet_consistent which doesn't have enough info (as gbt_inet_compress \ndidn't store them) and\n\nuses vague convert_network_to_scalar for performing operations.\n\nLooking at reference implementation for inet in \nsrc/backend/utils/adt/network_gist.c, if we add missing inet data\n\n(as seen in GistInetKey) into inetKEY and refactor gbt_inet_consistent \nto use network functions instead of using convert_network_to_scalar, \nmentioned bugs might be alleviated.\n\nI just to know if this is right direction to approach this problem?\n\n\nThanks,\n\nAnkit\n\n\n\n",
"msg_date": "Thu, 9 Feb 2023 21:06:21 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "[Question] Revamping btree gist implementation of inet"
}
] |
[
{
"msg_contents": "As per the discussion at [1], here's a patchset to integrate\npg_bsd_indent into our main source tree, so that people don't\nhave to pull down a separate repo to get this tool.\n\n0001 is a verbatim import of the current pg_bsd_indent repo contents.\nI felt committing this separately is useful for traceability.\n\n0002 adjusts the copyright notices to 3-clause BSD style,\nso that we don't get complaints about our tree containing\ncopyrights inconsistent with the main Postgres license.\n\n0003 is the first non-boring bit: it updates the Makefile\nand some other things to account for now being an in-tree\nbuild not out-of-tree. Also, since 0002 already meant that\nthe README isn't exactly like upstream's, I got rid of the\nseparate README.pg_bsd_indent file and merged that info\ninto README.\n\n0004 is the patch discussed at [2] to improve pgindent's\nhandling of multiline variable initializations.\n\n0003 lacks meson support (anyone want to help with that?)\nbut otherwise these seem committable to me. I'd anticipate\npushing 0001-0003 shortly but holding 0004 until we are ready\nto do the post-March-CF pgindent run. (Come to think of it,\n0004 had probably better include a pg_bsd_indent version\nbump too.)\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20230123001518.6hxyiczhn4kadvmf%40awork3.anarazel.de\n[2] https://www.postgresql.org/message-id/flat/20230120013137.7ky7nl4e4zjorrfa%40awork3.anarazel.de",
"msg_date": "Thu, 09 Feb 2023 13:30:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-09 13:30:30 -0500, Tom Lane wrote:\n> 0003 lacks meson support (anyone want to help with that?)\n\nI'll give it a go, unless somebody else wants to.\n\n\nDo we expect pg_bsd_indent to build / work on windows, right now? If it\ndoesn't, do we want to make that a hard requirement?\n\nI'll have CI test that, once I added meson support.\n\n\n> I'd anticipate pushing 0001-0003 shortly but holding 0004 until we are ready\n> to do the post-March-CF pgindent run. (Come to think of it, 0004 had\n> probably better include a pg_bsd_indent version bump too.)\n\nHow large is the diff that creates? If it's not super-widespread, it might be\nok to do that earlier. I wouldn't mind not seeing that uglyness every time I\nrun pgindent on a patch... Although I guess post-March-CF isn't that far away\nat this point :)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Feb 2023 13:02:02 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-09 13:30:30 -0500, Tom Lane wrote:\n>> 0003 lacks meson support (anyone want to help with that?)\n\n> I'll give it a go, unless somebody else wants to.\n\nThanks.\n\n> Do we expect pg_bsd_indent to build / work on windows, right now?\n\nIt would be desirable, for sure. I've not noticed anything remarkably\nunportable in the code, so probably it's just a matter of getting the\nbuild infrastructure to build it. I suppose that we aren't going to\nupdate the src/tools/msvc scripts anymore, so getting meson to handle\nit should be enough??\n\n>> I'd anticipate pushing 0001-0003 shortly but holding 0004 until we are ready\n>> to do the post-March-CF pgindent run. (Come to think of it, 0004 had\n>> probably better include a pg_bsd_indent version bump too.)\n\n> How large is the diff that creates? If it's not super-widespread, it might be\n> ok to do that earlier.\n\nIt wasn't that big; I posted it in the thread discussing that change.\n\nI think the real issue might just be that, assuming we bump the\npg_bsd_indent version number, that in itself would force interested\ncommitters to update their copy Right Now. I'd rather give a little\nnotice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Feb 2023 16:42:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-09 13:02:02 -0800, Andres Freund wrote:\n> On 2023-02-09 13:30:30 -0500, Tom Lane wrote:\n> > 0003 lacks meson support (anyone want to help with that?)\n>\n> I'll give it a go, unless somebody else wants to.\n\nDid that in the attached.\n\nI didn't convert the test though, due to the duplicating it'd create. Perhaps\nwe should just move it to a shell script? Or maybe it just doesn't matter\nenough to bother with?\n\n\n> Do we expect pg_bsd_indent to build / work on windows, right now? If it\n> doesn't, do we want to make that a hard requirement?\n\n> I'll have CI test that, once I added meson support.\n\nIt doesn't build as-is with msvc, but does build with mingw. Failure:\nhttps://cirrus-ci.com/task/6290206869946368?logs=build#L1573\n\n\"cl\" \"-Isrc\\tools/pg_bsd_indent\\pg_bsd_indent.exe.p\" \"-Isrc\\tools/pg_bsd_indent\" \"-I..\\src\\tools\\pg_bsd_indent\" \"-Isrc\\include\" \"-I..\\src\\include\" \"-Ic:\\openssl\\1.1\\include\" \"-I..\\src\\include\\port\\win32\" \"-I..\\src\\include\\port\\win32_msvc\" \"/MDd\" \"/nologo\" \"/showIncludes\" \"/utf-8\" \"/W2\" \"/Od\" \"/Zi\" \"/DWIN32\" \"/DWINDOWS\" \"/D__WINDOWS__\" \"/D__WIN32__\" \"/D_CRT_SECURE_NO_DEPRECATE\" \"/D_CRT_NONSTDC_NO_DEPRECATE\" \"/wd4018\" \"/wd4244\" \"/wd4273\" \"/wd4101\" \"/wd4102\" \"/wd4090\" \"/wd4267\" \"/Fdsrc\\tools/pg_bsd_indent\\pg_bsd_indent.exe.p\\args.c.pdb\" /Fosrc/tools/pg_bsd_indent/pg_bsd_indent.exe.p/args.c.obj \"/c\" ../src/tools/pg_bsd_indent/args.c\n../src/tools/pg_bsd_indent/args.c(179): error C2065: 'PATH_MAX': undeclared identifier\n../src/tools/pg_bsd_indent/args.c(179): error C2057: expected constant expression\n../src/tools/pg_bsd_indent/args.c(179): error C2466: cannot allocate an array of constant size 0\n../src/tools/pg_bsd_indent/args.c(179): error C2133: 'fname': unknown size\n../src/tools/pg_bsd_indent/args.c(183): warning C4034: sizeof returns 0\n../src/tools/pg_bsd_indent/args.c(185): warning C4034: sizeof returns 
0\n[1557/2161] Compiling C object src/tools/pg_bsd_indent/pg_bsd_indent.exe.p/err.c.obj\n[1558/2161] Precompiling header src/interfaces/ecpg/ecpglib/libecpg.dll.p/meson_pch-c.c\n[1559/2161] Compiling C object src/tools/pg_bsd_indent/pg_bsd_indent.exe.p/indent.c.obj\nFAILED: src/tools/pg_bsd_indent/pg_bsd_indent.exe.p/indent.c.obj\n\"cl\" \"-Isrc\\tools/pg_bsd_indent\\pg_bsd_indent.exe.p\" \"-Isrc\\tools/pg_bsd_indent\" \"-I..\\src\\tools\\pg_bsd_indent\" \"-Isrc\\include\" \"-I..\\src\\include\" \"-Ic:\\openssl\\1.1\\include\" \"-I..\\src\\include\\port\\win32\" \"-I..\\src\\include\\port\\win32_msvc\" \"/MDd\" \"/nologo\" \"/showIncludes\" \"/utf-8\" \"/W2\" \"/Od\" \"/Zi\" \"/DWIN32\" \"/DWINDOWS\" \"/D__WINDOWS__\" \"/D__WIN32__\" \"/D_CRT_SECURE_NO_DEPRECATE\" \"/D_CRT_NONSTDC_NO_DEPRECATE\" \"/wd4018\" \"/wd4244\" \"/wd4273\" \"/wd4101\" \"/wd4102\" \"/wd4090\" \"/wd4267\" \"/Fdsrc\\tools/pg_bsd_indent\\pg_bsd_indent.exe.p\\indent.c.pdb\" /Fosrc/tools/pg_bsd_indent/pg_bsd_indent.exe.p/indent.c.obj \"/c\" ../src/tools/pg_bsd_indent/indent.c\n../src/tools/pg_bsd_indent/indent.c(63): error C2065: 'MAXPATHLEN': undeclared identifier\n../src/tools/pg_bsd_indent/indent.c(63): error C2057: expected constant expression\n../src/tools/pg_bsd_indent/indent.c(63): error C2466: cannot allocate an array of constant size 0\n\nThis specific issue at least should be easily fixable.\n\n\nFreebsd emits a compiler warning:\n\n[21:37:50.909] In file included from ../src/tools/pg_bsd_indent/indent.c:54:\n[21:37:50.909] ../src/tools/pg_bsd_indent/indent.h:31:9: warning: 'nitems' macro redefined [-Wmacro-redefined]\n[21:37:50.909] #define nitems(array) (sizeof (array) / sizeof (array[0]))\n[21:37:50.909] ^\n[21:37:50.909] /usr/include/sys/param.h:306:9: note: previous definition is here\n[21:37:50.909] #define nitems(x) (sizeof((x)) / sizeof((x)[0]))\n[21:37:50.909] ^\n[21:37:50.911] 1 warning generated.\n\n\n\nTo we really want to require users to install pg_bsd_indent into PATH? 
Seems\nlike we ought to have a build target to invoke pgindent with a path to\npg_bsd_indent or such? But I guess we can address that later.\n\n\n\nIndependent of this specific patch: You seem to be generating your patch\nseries by invoking git show and redirecting that to a file? How do you apply a\nseries of such patches, while maintaining the commit messages? When git\nformat-patch is used, I can just use git am, but that doesn't work with your\npatch series.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 9 Feb 2023 13:55:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Did that in the attached.\n\nThanks.\n\n> I didn't convert the test though, due to the duplicating it'd create. Perhaps\n> we should just move it to a shell script? Or maybe it just doesn't matter\n> enough to bother with?\n\nWe could move it to a shell script perhaps, but that seems pretty\nlow-priority.\n\n> It doesn't build as-is with msvc, but does build with mingw. Failure:\n> https://cirrus-ci.com/task/6290206869946368?logs=build#L1573\n\nThanks, I'll take a look at these things.\n\n> To we really want to require users to install pg_bsd_indent into PATH? Seems\n> like we ought to have a build target to invoke pgindent with a path to\n> pg_bsd_indent or such? But I guess we can address that later.\n\nFor the moment I was just interested in maintaining the current workflow.\nI know people muttered about having some sort of build target that'd\nindent the whole tree from scratch after building pg_bsd_indent, but it's\nnot very clear to me how that'd work with e.g. VPATH configurations.\n\n(I think you can already tell pgindent to use a specific pg_bsd_indent,\nif your gripe is just about wanting to use a prebuilt copy that you\ndon't want to keep in PATH for some reason.)\n\n> Independent of this specific patch: You seem to be generating your patch\n> series by invoking git show and redirecting that to a file?\n\nYeah, it's pretty low-tech. I'm not in the habit of posting multi-patch\nseries very often, so I haven't really bothered to use format-patch.\n(I gave up on \"git am\" long ago as being too fragile, and always\nuse good ol' \"patch\" to apply patches, so I don't think about things\nlike whether it'd automatically absorb commit messages. I pretty much\nnever use anyone else's commit message verbatim anyway ...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Feb 2023 17:12:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-09 13:55:32 -0800, Andres Freund wrote:\n> ../src/tools/pg_bsd_indent/args.c(179): error C2065: 'PATH_MAX': undeclared identifier\n> ../src/tools/pg_bsd_indent/args.c(179): error C2057: expected constant expression\n> ../src/tools/pg_bsd_indent/args.c(179): error C2466: cannot allocate an array of constant size 0\n> ../src/tools/pg_bsd_indent/args.c(179): error C2133: 'fname': unknown size\n> ../src/tools/pg_bsd_indent/args.c(183): warning C4034: sizeof returns 0\n> ../src/tools/pg_bsd_indent/args.c(185): warning C4034: sizeof returns 0\n> [1557/2161] Compiling C object src/tools/pg_bsd_indent/pg_bsd_indent.exe.p/err.c.obj\n> [1558/2161] Precompiling header src/interfaces/ecpg/ecpglib/libecpg.dll.p/meson_pch-c.c\n> [1559/2161] Compiling C object src/tools/pg_bsd_indent/pg_bsd_indent.exe.p/indent.c.obj\n> FAILED: src/tools/pg_bsd_indent/pg_bsd_indent.exe.p/indent.c.obj\n> \"cl\" \"-Isrc\\tools/pg_bsd_indent\\pg_bsd_indent.exe.p\" \"-Isrc\\tools/pg_bsd_indent\" \"-I..\\src\\tools\\pg_bsd_indent\" \"-Isrc\\include\" \"-I..\\src\\include\" \"-Ic:\\openssl\\1.1\\include\" \"-I..\\src\\include\\port\\win32\" \"-I..\\src\\include\\port\\win32_msvc\" \"/MDd\" \"/nologo\" \"/showIncludes\" \"/utf-8\" \"/W2\" \"/Od\" \"/Zi\" \"/DWIN32\" \"/DWINDOWS\" \"/D__WINDOWS__\" \"/D__WIN32__\" \"/D_CRT_SECURE_NO_DEPRECATE\" \"/D_CRT_NONSTDC_NO_DEPRECATE\" \"/wd4018\" \"/wd4244\" \"/wd4273\" \"/wd4101\" \"/wd4102\" \"/wd4090\" \"/wd4267\" \"/Fdsrc\\tools/pg_bsd_indent\\pg_bsd_indent.exe.p\\indent.c.pdb\" /Fosrc/tools/pg_bsd_indent/pg_bsd_indent.exe.p/indent.c.obj \"/c\" ../src/tools/pg_bsd_indent/indent.c\n> ../src/tools/pg_bsd_indent/indent.c(63): error C2065: 'MAXPATHLEN': undeclared identifier\n> ../src/tools/pg_bsd_indent/indent.c(63): error C2057: expected constant expression\n> ../src/tools/pg_bsd_indent/indent.c(63): error C2466: cannot allocate an array of constant size 0\n> \n> This specific issue at least should be easily fixable.\n\nThe 
trivial fix of using MAXPGPATH made it build, without warnings. That\ndoesn't say anything about actually working. So I guess porting the test would\nmake sense.\n\nOpinions on whether it would make sense as a shell script?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Feb 2023 14:14:50 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The trivial fix of using MAXPGPATH made it build, without warnings. That\n> doesn't say anything about actually working. So I guess porting the test would\n> make sense.\n\n> Opinions on whether it would make sense as a shell script?\n\nHmmm .. a shell script would be fine by me, but it won't help in\ntesting a Windows build. Maybe we need to make it a Perl script?\n\nBTW, the attachments to your previous message are identical to what\nI previously posted --- did you attach the wrong set of diffs?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Feb 2023 17:19:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-09 17:19:22 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > The trivial fix of using MAXPGPATH made it build, without warnings. That\n> > doesn't say anything about actually working. So I guess porting the test would\n> > make sense.\n> \n> > Opinions on whether it would make sense as a shell script?\n> \n> Hmmm .. a shell script would be fine by me, but it won't help in\n> testing a Windows build. Maybe we need to make it a Perl script?\n\nAt least for casual testing a shell script actually mostly works, due to git\nit's easy enough to have a sh.exe around... Not something I'd necessarily want\nto make a hard dependency, but for something like this it might suffice. Of\ncourse perl would be more dependable...\n\n\n> BTW, the attachments to your previous message are identical to what\n> I previously posted --- did you attach the wrong set of diffs?\n\nI attached an extra patch, in addition to yours. I also attached yours so that\ncfbot could continue to work, if you registered this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Feb 2023 15:10:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-09 17:19:22 -0500, Tom Lane wrote:\n>> Hmmm .. a shell script would be fine by me, but it won't help in\n>> testing a Windows build. Maybe we need to make it a Perl script?\n\n> At least for casual testing a shell script actually mostly works, due to git\n> it's easy enough to have a sh.exe around... Not something I'd necessarily want\n> to make a hard dependency, but for something like this it might suffice. Of\n> course perl would be more dependable...\n\nYeah, also less question about whether it works on Windows.\nI'll see about moving that into Perl. It's short enough.\n\n>> BTW, the attachments to your previous message are identical to what\n>> I previously posted --- did you attach the wrong set of diffs?\n\n> I attached an extra patch, in addition to yours.\n\nD'oh, I didn't notice that :-(\n\n> I also attached yours so that\n> cfbot could continue to work, if you registered this.\n\nI thought about registering it, but that won't teach us anything unless\nwe make it built-by-default, which was not my intention. I guess we\ncould temporarily include it in the build.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Feb 2023 18:19:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-09 18:19:06 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-02-09 17:19:22 -0500, Tom Lane wrote:\n> >> Hmmm .. a shell script would be fine by me, but it won't help in\n> >> testing a Windows build. Maybe we need to make it a Perl script?\n> \n> > At least for casual testing a shell script actually mostly works, due to git\n> > it's easy enough to have a sh.exe around... Not something I'd necessarily want\n> > to make a hard dependency, but for something like this it might suffice. Of\n> > course perl would be more dependable...\n> \n> Yeah, also less question about whether it works on Windows.\n> I'll see about moving that into Perl. It's short enough.\n\nCool.\n\n\n> > I also attached yours so that\n> > cfbot could continue to work, if you registered this.\n> \n> I thought about registering it, but that won't teach us anything unless\n> we make it built-by-default, which was not my intention. I guess we\n> could temporarily include it in the build.\n\nThe meson patch I sent did build it by default, that's why I saw the windows\nfailure and the freebsd warnings. If we don't want that, we'd need to add\n build_by_default: false\n\nI'm fine either way. It's barely noticeable compared to the rest of postgres.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Feb 2023 16:08:41 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-09 18:19:06 -0500, Tom Lane wrote:\n>> I thought about registering it, but that won't teach us anything unless\n>> we make it built-by-default, which was not my intention. I guess we\n>> could temporarily include it in the build.\n\n> The meson patch I sent did build it by default, that's why I saw the windows\n> failure and the freebsd warnings. If we don't want that, we'd need to add\n> build_by_default: false\n\n> I'm fine either way. It's barely noticeable compared to the rest of postgres.\n\nYeah, build-by-default isn't really a big deal. Install-by-default\nis more of a problem...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Feb 2023 19:20:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "On 2023-02-09 19:20:37 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I'm fine either way. It's barely noticeable compared to the rest of postgres.\n> \n> Yeah, build-by-default isn't really a big deal. Install-by-default\n> is more of a problem...\n\nPerhaps we should install it, just not in bin/, but alongside pgxs/, similar\nto pg_regress et al?\n\n\n",
"msg_date": "Thu, 9 Feb 2023 16:35:55 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Perhaps we should install it, just not in bin/, but alongside pgxs/, similar\n> to pg_regress et al?\n\nFor my own purposes, I really don't want it anywhere in the --prefix\ntree. That's not necessarily present when I'm using the program.\n\n(Hmm, clarify: it wouldn't matter if install sticks it under pgxs,\nbut I don't want to be forced to use it from there.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Feb 2023 19:42:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Here's a v3 of this patchset, incorporating your meson fixes as\nwell as patches for the portability problems you noted.\n\nI ended up converting the test infrastructure into a TAP test,\nwhich kind of feels like overkill; but the Meson system doesn't\nseem to provide any lower-overhead way to run a test.\n\nI've not touched the issue of whether and where to install\npg_bsd_indent; for now, neither build system will do so.\n\nAlso, for now both build systems *will* run tests on it,\nalthough I'm not sure if plugging it into \"make check-world\"\nis enough to cause the cfbot to do so, and I'm pretty sure\nthat the buildfarm won't notice that.\n\nI'll let the cfbot loose on this, and if it runs the tests\nsuccessfully I plan to go ahead and push. We can resolve\nthe installation question later. We might want to back off\ntesting too once we're satisfied about portability.\n\n(I left out the 0004 declaration-formatting patch for now, btw.)\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 11 Feb 2023 18:54:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "I wrote:\n> I'll let the cfbot loose on this, and if it runs the tests\n> successfully I plan to go ahead and push.\n\ncfbot didn't like that ...\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 11 Feb 2023 19:33:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-11 18:54:00 -0500, Tom Lane wrote:\n> I ended up converting the test infrastructure into a TAP test,\n> which kind of feels like overkill; but the Meson system doesn't\n> seem to provide any lower-overhead way to run a test.\n\nFWIW, The default way to indicate failures in a test is the exit\ncode. Obviously that allows less detailed reporting, but other than that, it\nworks (that's how we test pg_regress today).\n\n\n> Also, for now both build systems *will* run tests on it,\n> although I'm not sure if plugging it into \"make check-world\"\n> is enough to cause the cfbot to do so, and I'm pretty sure\n> that the buildfarm won't notice that.\n\nThat's sufficient for cfbot, on the CI task still using autoconf. And for\nmeson it'll also suffice.\n\nIt actually already ran:\nhttps://cirrus-ci.com/build/5984572702195712\n\nThe windows test failure is a transient issue independent of the patch\n(something went wrong with image permissions). However the linux autoconf one\nisn't:\nhttps://api.cirrus-ci.com/v1/artifact/task/5482952532951040/log/src/tools/pg_bsd_indent/tmp_check/log/regress_log_001_pg_bsd_indent\n\n# Running: pg_bsd_indent --version\nCommand 'pg_bsd_indent' not found in /tmp/cirrus-ci-build/tmp_install/usr/local/pgsql/bin, /tmp/cirrus-ci-build/src/tools/pg_bsd_indent, /usr/local/sbin, /usr/local/bin, /usr/sbin, /usr/bin, /sbin, /bin at /tmp/cirrus-ci-build/src/tools/pg_bsd_indent/../../../src/test/perl/PostgreSQL/Test/Utils.pm line 832.\n\nI guess there might be a missing dependency? PATH looks sufficient.\n\n\n> I'll let the cfbot loose on this, and if it runs the tests\n> successfully I plan to go ahead and push. We can resolve\n> the installation question later. We might want to back off\n> testing too once we're satisfied about portability.\n\n> (I left out the 0004 declaration-formatting patch for now, btw.)\n\nMakes sense.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 11 Feb 2023 16:42:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> # Running: pg_bsd_indent --version\n> Command 'pg_bsd_indent' not found in /tmp/cirrus-ci-build/tmp_install/usr/local/pgsql/bin, /tmp/cirrus-ci-build/src/tools/pg_bsd_indent, /usr/local/sbin, /usr/local/bin, /usr/sbin, /usr/bin, /sbin, /bin at /tmp/cirrus-ci-build/src/tools/pg_bsd_indent/../../../src/test/perl/PostgreSQL/Test/Utils.pm line 832.\n\n> I guess there might be a missing dependency? PATH looks sufficient.\n\nYeah, I expected that \"check\" would have a dependency on \"all\",\nbut apparently it doesn't (and I'd missed this because I had\npg_bsd_indent installed elsewhere in my PATH :-(). New build\nrunning now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Feb 2023 20:04:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Hmmm ... ci autoconf build is now happy, but the Windows run complains\nthat none of the output files match. I'm betting that this is a\nWindows-newline problem, since I now see that indent.c opens both the\ninput and output files in default (text) mode. I'm inclined to\nchange it to open the output file in binary mode while leaving the\ninput in text, which should have the effect of stripping \\r if it's\npresent.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Feb 2023 20:32:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "I wrote:\n> Hmmm ... ci autoconf build is now happy, but the Windows run complains\n> that none of the output files match. I'm betting that this is a\n> Windows-newline problem, since I now see that indent.c opens both the\n> input and output files in default (text) mode. I'm inclined to\n> change it to open the output file in binary mode while leaving the\n> input in text, which should have the effect of stripping \\r if it's\n> present.\n\nSo let's see if that theory is correct at all ...\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 11 Feb 2023 20:43:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "On Sun, Feb 12, 2023 at 2:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Hmmm ... ci autoconf build is now happy, but the Windows run complains\n> > that none of the output files match. I'm betting that this is a\n> > Windows-newline problem, since I now see that indent.c opens both the\n> > input and output files in default (text) mode. I'm inclined to\n> > change it to open the output file in binary mode while leaving the\n> > input in text, which should have the effect of stripping \\r if it's\n> > present.\n>\n> So let's see if that theory is correct at all ...\n\n(Since I happened to be tinkering on cfbot while you posted these, I\nnoticed that cfbot took over 50 minutes to start processing the v4.\nThe problem was upstream: the time in the second-last column of\nhttps://commitfest.postgresql.org/42/ didn't change for that whole\ntime, even though the archives had your new email. Cf castles, sand;\nI should probably get a better trigger mechanism :-) I like that page\nbecause it lets me poll one single end point once per minute to learn\nabout changes across all threads, but I am not sure what sort of\ntechnology connects the archives to the CF app, and how it can fail.)\n\n\n",
"msg_date": "Sun, 12 Feb 2023 15:14:37 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
},
{
"msg_contents": "Hello,\n\nI've discovered that the pg_bsd_indent test added here makes an\nASAN-instrumented build fail on:\nASAN_OPTIONS=detect_leaks=0:strict_string_checks=1 make check-world\nas follows:\n# +++ tap check in src/tools/pg_bsd_indent +++\nt/001_pg_bsd_indent.pl .. 1/?\n# Failed test 'pg_bsd_indent succeeds on binary'\n# at t/001_pg_bsd_indent.pl line 41.\n\n# Failed test 'pg_bsd_indent output matches for binary'\n# at t/001_pg_bsd_indent.pl line 50.\n\n# Failed test 'pg_bsd_indent succeeds on comments'\n# at t/001_pg_bsd_indent.pl line 41.\n\n# Failed test 'pg_bsd_indent output matches for comments'\n# at t/001_pg_bsd_indent.pl line 50.\nt/001_pg_bsd_indent.pl .. 10/?\n...\nregress_log_001_pg_bsd_indent contains:\n# Running: pg_bsd_indent .../src/tools/pg_bsd_indent/tests/binary.0 binary.out \n-P.../src/tools/pg_bsd_indent/tests/binary.pro\n=================================================================\n==2124067==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60600000005a at pc 0x560aa8972dae bp \n0x7ffebd080460 sp 0x7ffebd07fc08\nREAD of size 40 at 0x60600000005a thread T0\n #0 0x560aa8972dad in __interceptor_strspn.part.0 (.../src/tools/pg_bsd_indent/pg_bsd_indent+0x5edad)\n #1 0x560aa8a3f495 in lexi .../src/tools/pg_bsd_indent/lexi.c:258\n #2 0x560aa8a3451d in main .../src/tools/pg_bsd_indent/indent.c:269\n #3 0x7f6ffab1ad8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58\n #4 0x7f6ffab1ae3f in __libc_start_main_impl ../csu/libc-start.c:392\n #5 0x560aa894b7d4 in _start (.../src/tools/pg_bsd_indent/pg_bsd_indent+0x377d4)\n\n0x60600000005a is located 0 bytes to the right of 58-byte region [0x606000000020,0x60600000005a)\nallocated by thread T0 here:\n #0 0x560aa89e4980 in __interceptor_realloc.part.0 (.../src/tools/pg_bsd_indent/pg_bsd_indent+0xd0980)\n #1 0x560aa8a3d24e in fill_buffer .../src/tools/pg_bsd_indent/io.c:365\n\nSUMMARY: AddressSanitizer: heap-buffer-overflow 
(.../src/tools/pg_bsd_indent/pg_bsd_indent+0x5edad) in \n__interceptor_strspn.part.0\n...\n\nI understand that that code is almost as ancient as me and it works as\nintended (fill_buffer() doesn't null-terminate a buffer), but may be this is\nworth fixing in the Postgres source tree to keep the whole testing baseline\nhigh (if someone finds strict_string_checks useful). If so, perhaps\nsomething like the attached will do.\n\nBest regards,\nAlexander",
"msg_date": "Wed, 7 Jun 2023 11:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Importing pg_bsd_indent into our source tree"
}
]
[
{
"msg_contents": "Currently, meson has a test suite named \"setup\". According to the\nWiki, this is needed to get something equivalent to \"make check\", by\nrunning \"meson test -v --suite setup --suite regress\".\n\nSome questions about this:\n\n* Isn't it confusing that we have a suite by that name, given that we\nalso need to use the unrelated --setup flag for some nearby testing\nrecipes?\n\n* Why do we actually need a \"setup\" suite?\n\nOffhand it appears that a simple \"meson test -v --suite regress\" works\njust as well. Have I missed something?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 9 Feb 2023 11:01:31 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Minor meson gripe"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-09 11:01:31 -0800, Peter Geoghegan wrote:\n> Currently, meson has a test suite named \"setup\". According to the\n> Wiki, this is needed to get something equivalent to \"make check\", by\n> running \"meson test -v --suite setup --suite regress\".\n\nYep.\n\n\n> Some questions about this:\n>\n> * Isn't it confusing that we have a suite by that name, given that we\n> also need to use the unrelated --setup flag for some nearby testing\n> recipes?\n\nHm. I don't find it particularly confusing, but I don't think I'm a good judge\nof that, too close.\n\n\n> * Why do we actually need a \"setup\" suite?\n>\n> Offhand it appears that a simple \"meson test -v --suite regress\" works\n> just as well. Have I missed something?\n\nIt'll work, but only if you have run setup before. And it'll not use changed C\ncode.\n\nThe setup suite creates the installation in tmp_install/. So if you haven't\nrun the tests before, it'll fail due to that missing. If you have run it\nbefore, but have changed code, it'll not get used.\n\n\nThe background for the issue is that while meson test supports dependencies\nfor each test, and will build exactly the required dependencies if you run\nindividual tests with meson test, it unfortunately also adds all the test\ndependencies to the default ninja target.\n\nThat's mostly for historical reasons, because initially meson didn't support\ndependencies for tests. There's recent work on changing that though.\n\nCreating the temp installation every time you run 'ninja' would not be\nnice. On slower machines it can take quite a while.\n\n\nI think medium term we should just stop requiring a temporary install to run\ntests, it's substantial, unnecessary, overhead, and it requires us to build\nway too much to run basic tests. 
It'd not take a whole lot to make that work:\n\n- a search path for finding extensions, which'd be very useful for other\n reasons as well\n\n- a way to tell 'postgres', 'initdb' etc, which use find_other_exec(), that\n they should use PATH\n \n- a way to tell initdb where to find things like postgres.bki, postgres where\n it can find timezone data, etc.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Feb 2023 12:56:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Minor meson gripe"
},
{
"msg_contents": "On Thu, Feb 9, 2023 at 12:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > * Isn't it confusing that we have a suite by that name, given that we\n> > also need to use the unrelated --setup flag for some nearby testing\n> > recipes?\n>\n> Hm. I don't find it particularly confusing, but I don't think I'm a good judge\n> of that, too close.\n\n> It'll work, but only if you have run setup before. And it'll not use changed C\n> code.\n\nI see. It's not that confusing on its own, but it does cause confusion\nonce you consider how things fit together. Suppose I want to do the\nequivalent of running the amcheck tests -- the tests that run when\n\"make check\" runs from contrib/amcheck with an autoconf build. That\nlooks like this with our current meson workflow:\n\nmeson test -v --suite setup --suite amcheck\n\nNow consider what I have to run to get the equivalent of a \"make\ninstallcheck\" run from the contrib/amcheck directory:\n\nmeson test -v --setup running --suite amcheck-running\n\nNotice that I have to specify \"--suite setup\" in the first example,\nwhereas I have to specify \"--setup running\" in the second example\ninstead -- at the same point in. Also notice the rest of the details\nalmost match. This makes it quite natural to wonder if \"--suite setup\"\nis related to \"--setup running\" in some way, which is not the case at\nall. They're two wholly unrelated concepts.\n\nWhy not change the suite name to tmp_install? That immediately reminds\nme of what's really going on here, since I'm used to seeing that\ndirectory name. And it clashes with \"--suite setup\" in a way that\nseems useful.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 9 Feb 2023 15:34:34 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Minor meson gripe"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-09 15:34:34 -0800, Peter Geoghegan wrote:\n> Why not change the suite name to tmp_install? That immediately reminds\n> me of what's really going on here, since I'm used to seeing that\n> directory name. And it clashes with \"--suite setup\" in a way that\n> seems useful.\n\nThe individual test is actually named tmp_install. I thought it might be\nuseful to add further things to the setup \"stage\", hence the more general\nname.\n\nI e.g. have a not-quite-done patch that creates a \"template initdb\", which\npg_regress and tap tests automatically use (except if non-default options are\nrequired), which quite noticably reduces test times (*). But something needs to\ncreate the template initdb, and initdb doesn't run without an installation, so\nwe need to run it after the temp_install.\n\nOf course we could wrap that into one \"test\", but it seemed nicer to see if\nyou fail during installation, or during initdb. So for that I added a separate\ntest, that is also part of the setup suite.\n\nOf course we could still name the suite tmp_install (or to be consistent with\nthe confusing make naming, have a temp-install target, which creates the\ntmp_install directory :)). I guess that'd still be less confusing?\n\n\nI'm not at all wedded to the \"setup\" name.\n\nGreetings,\n\nAndres Freund\n\n\n* approximate test time improvements:\n\n local:\n 688.67user 154.44system 1:08.29elapsed 1234%CPU (0avgtext+0avgdata 138984maxresident)k\n ->\n 172.37user 109.43system 1:00.12elapsed 468%CPU (0avgtext+0avgdata 139168maxresident)k\n\n The 4x reduction in CPU cycles is pretty bonkers. 
To bad wall clock time\n doesn't improve that much - we end up waiting for isolationtester,\n pg_upgrade tests to finish.\n\n CI freebsd: 06:30 -> 05:00\n CI linux, sanitize-address: 5:40->4:30\n CI linux, sanitize-undefined,alignment: 3:00->2:15\n CI windows 12:20 -> 10:00\n CI macos: 10:20 -> 08:00\n\n I expect it to very substantially speed up the valgrind builfarm animal. By\n far the most cycles are spent below initdb.\n\n https://github.com/anarazel/postgres/tree/initdb-caching\n\n\n",
"msg_date": "Thu, 9 Feb 2023 16:33:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Minor meson gripe"
},
{
"msg_contents": "On Thu, Feb 9, 2023 at 4:33 PM Andres Freund <andres@anarazel.de> wrote:\n> The individual test is actually named tmp_install. I thought it might be\n> useful to add further things to the setup \"stage\", hence the more general\n> name.\n\nI did notice that, but only after sitting with my initial confusion for a while.\n\n> I e.g. have a not-quite-done patch that creates a \"template initdb\", which\n> pg_regress and tap tests automatically use (except if non-default options are\n> required), which quite noticably reduces test times (*). But something needs to\n> create the template initdb, and initdb doesn't run without an installation, so\n> we need to run it after the temp_install.\n>\n> Of course we could wrap that into one \"test\", but it seemed nicer to see if\n> you fail during installation, or during initdb. So for that I added a separate\n> test, that is also part of the setup suite.\n\nBut what are the chances that the setup / tmp_install \"test\" will\nactually fail? It's almost a test in name only.\n\n> Of course we could still name the suite tmp_install (or to be consistent with\n> the confusing make naming, have a temp-install target, which creates the\n> tmp_install directory :)). I guess that'd still be less confusing?\n\nYes, that definitely seems like an improvement. I don't care about the\ntiny inconsistency that this creates.\n\nI wonder if this could be addressed by adding another custom test\nsetup, like --setup running, used whenever you want to just run one or\ntwo tests against an ad-hoc temporary installation? Offhand it seems\nas if add_test_setup() could support that requirement?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 9 Feb 2023 17:00:48 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Minor meson gripe"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-09 17:00:48 -0800, Peter Geoghegan wrote:\n> On Thu, Feb 9, 2023 at 4:33 PM Andres Freund <andres@anarazel.de> wrote:\n> > I e.g. have a not-quite-done patch that creates a \"template initdb\", which\n> > pg_regress and tap tests automatically use (except if non-default options are\n> > required), which quite noticably reduces test times (*). But something needs to\n> > create the template initdb, and initdb doesn't run without an installation, so\n> > we need to run it after the temp_install.\n> >\n> > Of course we could wrap that into one \"test\", but it seemed nicer to see if\n> > you fail during installation, or during initdb. So for that I added a separate\n> > test, that is also part of the setup suite.\n> \n> But what are the chances that the setup / tmp_install \"test\" will\n> actually fail? It's almost a test in name only.\n\nI've seen more failures than I'd like. Permission errors, conflicting names,\nand similar things. But mainly that was a reference to running initdb, which\nI've broken temporarily many a time.\n\n\n> > Of course we could still name the suite tmp_install (or to be consistent with\n> > the confusing make naming, have a temp-install target, which creates the\n> > tmp_install directory :)). I guess that'd still be less confusing?\n> \n> Yes, that definitely seems like an improvement. I don't care about the\n> tiny inconsistency that this creates.\n\nThen lets do that - feel free to push something, or send something for\nreview. Otherwise I'll try to get to it, but I owe a few people work before\nthis...\n\n\n> I wonder if this could be addressed by adding another custom test\n> setup, like --setup running, used whenever you want to just run one or\n> two tests against an ad-hoc temporary installation? 
Offhand it seems\n> as if add_test_setup() could support that requirement?\n\nWhat precisely do you mean with \"ad-hoc temporary installation\"?\n\nI was wondering about adding a different setup that'd use the \"real\"\ninstallation to run tests. But perhaps that's something different than what\nyou have in mind?\n\nThe only restriction I see wrt add_test_setup() is that it's not entirely\ntrivial to use a \"runtime-variable\" path to an installation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Feb 2023 17:17:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Minor meson gripe"
},
{
"msg_contents": "On Thu, Feb 9, 2023 at 5:17 PM Andres Freund <andres@anarazel.de> wrote:\n> I've seen more failures than I'd like. Permission errors, conflicting names,\n> and similar things. But mainly that was a reference to running initdb, which\n> I've broken temporarily many a time.\n\nWe've all temporarily broken initdb literally thousands of times, I'm sure.\n\n> > > Of course we could still name the suite tmp_install (or to be consistent with\n> > > the confusing make naming, have a temp-install target, which creates the\n> > > tmp_install directory :)). I guess that'd still be less confusing?\n> >\n> > Yes, that definitely seems like an improvement. I don't care about the\n> > tiny inconsistency that this creates.\n>\n> Then lets do that - feel free to push something, or send something for\n> review. Otherwise I'll try to get to it, but I owe a few people work before\n> this...\n\nI'll try to get to it soon. Note that I've been adding new stuff to\nthe meson Wiki page, in the hope of saving other people the trouble of\nfiguring some of these details out for themselves. You might want to\ntake a look at that at some point.\n\n> > I wonder if this could be addressed by adding another custom test\n> > setup, like --setup running, used whenever you want to just run one or\n> > two tests against an ad-hoc temporary installation? Offhand it seems\n> > as if add_test_setup() could support that requirement?\n>\n> What precisely do you mean with \"ad-hoc temporary installation\"?\n\nI mean \"what make check does\".\n\n> I was wondering about adding a different setup that'd use the \"real\"\n> installation to run tests. But perhaps that's something different than what\n> you have in mind?\n\nI wasn't thinking about the installation. Actually, I was, but the\nthought went no further than \"I wish I didn't have to think about the\ninstallation\". 
I liked that autoconf had \"make check\" and \"make\ninstallcheck\" variants that worked in a low context way.\n\nIt's great that \"meson test\" runs all of the tests very quickly --\nthat should be maintained, even at some cost elsewhere. And it would\nbe nice to do away with the tmp_install thing. But as long as we have\nit, it would be nice to make the way that we run a subset of test\nsuites against a running server similar to the way that we run a\nsubset of test suites against a throwaway installation (ala \"make\ncheck\").\n\n> The only restriction I see wrt add_test_setup() is that it's not entirely\n> trivial to use a \"runtime-variable\" path to an installation.\n\nI personally have no problem with that, though of course I could have\neasily overlooked something.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 9 Feb 2023 17:42:34 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Minor meson gripe"
}
]
[
{
"msg_contents": "I just found myself carefully counting the zeros in a call to pg_usleep().\nBesides getting my eyes checked, perhaps there should be a wrapper called\npg_ssleep() than can be used for multisecond sleeps. Or maybe the\nUSECS_PER_SEC macro should be used more widely. I attached a patch for the\nformer approach. I don't have a strong opinion, but I do think it's worth\nimproving readability a bit here.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 9 Feb 2023 12:59:29 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_usleep for multisecond delays"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-09 12:59:29 -0800, Nathan Bossart wrote:\n> I just found myself carefully counting the zeros in a call to pg_usleep().\n> Besides getting my eyes checked, perhaps there should be a wrapper called\n> pg_ssleep() than can be used for multisecond sleeps. Or maybe the\n> USECS_PER_SEC macro should be used more widely. I attached a patch for the\n> former approach. I don't have a strong opinion, but I do think it's worth\n> improving readability a bit here.\n\npg_usleep() should pretty much never used for sleeps that long, at least in\nthe backend - depending on the platform they're not interruptible. Most of the\nthings changed here are debugging tools, but even so, it's e.g. pretty\nannoying. E.g. you can't normally shut down while a backend is in\npre_auth_delay.\n\nSo I'm not sure it's the right direction to make pg_usleep() easier to use...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Feb 2023 13:30:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "On Thu, Feb 09, 2023 at 01:30:27PM -0800, Andres Freund wrote:\n> On 2023-02-09 12:59:29 -0800, Nathan Bossart wrote:\n>> I just found myself carefully counting the zeros in a call to pg_usleep().\n>> Besides getting my eyes checked, perhaps there should be a wrapper called\n>> pg_ssleep() than can be used for multisecond sleeps. Or maybe the\n>> USECS_PER_SEC macro should be used more widely. I attached a patch for the\n>> former approach. I don't have a strong opinion, but I do think it's worth\n>> improving readability a bit here.\n> \n> pg_usleep() should pretty much never used for sleeps that long, at least in\n> the backend - depending on the platform they're not interruptible. Most of the\n> things changed here are debugging tools, but even so, it's e.g. pretty\n> annoying. E.g. you can't normally shut down while a backend is in\n> pre_auth_delay.\n> \n> So I'm not sure it's the right direction to make pg_usleep() easier to use...\n\nHm... We might be able to use WaitLatch() for some of these.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 9 Feb 2023 14:51:14 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "At Thu, 9 Feb 2023 14:51:14 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Thu, Feb 09, 2023 at 01:30:27PM -0800, Andres Freund wrote:\n> > So I'm not sure it's the right direction to make pg_usleep() easier to use...\n> Hm... We might be able to use WaitLatch() for some of these.\n\nAnd I think we are actully going to reduce or eliminate the use of\npg_sleep().\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 10 Feb 2023 12:00:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "On Thu, Feb 09, 2023 at 02:51:14PM -0800, Nathan Bossart wrote:\n> Hm... We might be able to use WaitLatch() for some of these.\n\nPerhaps less than you think as a bit of work has been done in the last\nfew years to reduce the gap and make such code paths more responsive,\nstill I would not be surprised to find a couple of these..\n\nI would let the portions of the code related to things like\npre_auth_delay or XLOG_REPLAY_DELAY as they are, though, without an\nextra pg_Xsleep() implementation.\n--\nMichael",
"msg_date": "Fri, 10 Feb 2023 12:04:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "On 2023-Feb-09, Nathan Bossart wrote:\n\n> I just found myself carefully counting the zeros in a call to pg_usleep().\n> Besides getting my eyes checked, perhaps there should be a wrapper called\n> pg_ssleep() than can be used for multisecond sleeps. Or maybe the\n> USECS_PER_SEC macro should be used more widely. I attached a patch for the\n> former approach. I don't have a strong opinion, but I do think it's worth\n> improving readability a bit here.\n\nHmm, it seems about half the patch would go away if you were to add a\nPostAuthDelaySleep() function.\n\n> @@ -2976,7 +2976,7 @@ _bt_pendingfsm_finalize(Relation rel, BTVacState *vstate)\n> \t * never be effective without some other backend concurrently consuming an\n> \t * XID.\n> \t */\n> -\tpg_usleep(5000000L);\n> +\tpg_ssleep(5);\n\nMaybe for these cases where a WaitLatch is not desired, it'd be simpler\nto do pg_usleep (5L * 1000 * 1000);\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and\nlack of hesitasion in answering a lost soul's question, I just wished the rest\nof the mailing list could be like this.\" (Fotis)\n (http://archives.postgresql.org/pgsql-sql/2006-06/msg00265.php)\n\n\n",
"msg_date": "Fri, 10 Feb 2023 09:30:26 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 3:30 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Maybe for these cases where a WaitLatch is not desired, it'd be simpler\n> to do pg_usleep (5L * 1000 * 1000);\n\nI somehow feel that we should be trying to get rid of cases where\nWaitLatch is not desired.\n\nThat's probably overly simplistic - there might be cases where the\ncaller isn't just polling and has a really legitimate need to wait for\n5 seconds of wall clock time. But even in that case, it seems like we\nwant to respond to barriers and interrupts during that time, in almost\nall cases.\n\nI wonder if we should have a wrapper around WaitLatch() that documents\nthat if the latch is set before the time expires, it will reset the\nlatch and try again to wait for the remaining time, after checking for\ninterrupts etc.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 10 Feb 2023 09:58:25 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I somehow feel that we should be trying to get rid of cases where\n> WaitLatch is not desired.\n\n+1\n\n> I wonder if we should have a wrapper around WaitLatch() that documents\n> that if the latch is set before the time expires, it will reset the\n> latch and try again to wait for the remaining time, after checking for\n> interrupts etc.\n\nResetting the latch seems not very friendly for general-purpose use\n... although I guess we could set it again on the way out.\n\nBTW, we have an existing pg_sleep() function that deals with all\nof this except re-setting the latch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Feb 2023 10:18:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 10:18:34AM -0500, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> I wonder if we should have a wrapper around WaitLatch() that documents\n>> that if the latch is set before the time expires, it will reset the\n>> latch and try again to wait for the remaining time, after checking for\n>> interrupts etc.\n> \n> Resetting the latch seems not very friendly for general-purpose use\n> ... although I guess we could set it again on the way out.\n> \n> BTW, we have an existing pg_sleep() function that deals with all\n> of this except re-setting the latch.\n\nThat seems workable. I think it might also need to accept a function\npointer for custom interrupt checking (e.g., archiver code should use\nHandlePgArchInterrupts()). I'll put something together if that sounds\nalright.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 10 Feb 2023 10:51:20 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 10:51:20AM -0800, Nathan Bossart wrote:\n> On Fri, Feb 10, 2023 at 10:18:34AM -0500, Tom Lane wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> I wonder if we should have a wrapper around WaitLatch() that documents\n>>> that if the latch is set before the time expires, it will reset the\n>>> latch and try again to wait for the remaining time, after checking for\n>>> interrupts etc.\n>> \n>> Resetting the latch seems not very friendly for general-purpose use\n>> ... although I guess we could set it again on the way out.\n>> \n>> BTW, we have an existing pg_sleep() function that deals with all\n>> of this except re-setting the latch.\n> \n> That seems workable. I think it might also need to accept a function\n> pointer for custom interrupt checking (e.g., archiver code should use\n> HandlePgArchInterrupts()). I'll put something together if that sounds\n> alright.\n\nHere is a work-in-progress patch. I quickly scanned through my previous\npatch and applied this new function everywhere it seemed safe to use (which\nunfortunately is rather limited).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 10 Feb 2023 12:00:37 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Fri, Feb 10, 2023 at 10:51:20AM -0800, Nathan Bossart wrote:\n>> On Fri, Feb 10, 2023 at 10:18:34AM -0500, Tom Lane wrote:\n>>> BTW, we have an existing pg_sleep() function that deals with all\n>>> of this except re-setting the latch.\n\n> Here is a work-in-progress patch. I quickly scanned through my previous\n> patch and applied this new function everywhere it seemed safe to use (which\n> unfortunately is rather limited).\n\nA quick grep for pg_usleep with large intervals finds rather more\nthan you touched:\n\ncontrib/auth_delay/auth_delay.c: 46: \t\tpg_usleep(1000L * auth_delay_milliseconds);\nsrc/backend/access/nbtree/nbtpage.c: 2979: \tpg_usleep(5000000L);\nsrc/backend/access/transam/xlog.c: 5109: \t\tpg_usleep(60000000L);\nsrc/backend/libpq/pqcomm.c: 717: \t\tpg_usleep(100000L);\t\t/* wait 0.1 sec */\nsrc/backend/postmaster/bgwriter.c: 199: \t\tpg_usleep(1000000L);\nsrc/backend/postmaster/checkpointer.c: 313: \t\tpg_usleep(1000000L);\nsrc/backend/postmaster/checkpointer.c: 1009: \t\tpg_usleep(100000L);\t\t/* wait 0.1 sec, then retry */\nsrc/backend/postmaster/postmaster.c: 4295: \t\tpg_usleep(PreAuthDelay * 1000000L);\nsrc/backend/postmaster/walwriter.c: 192: \t\tpg_usleep(1000000L);\nsrc/backend/postmaster/bgworker.c: 762: \t\tpg_usleep(PostAuthDelay * 1000000L);\nsrc/backend/postmaster/pgarch.c: 456: \t\t\t\tpg_usleep(1000000L);\nsrc/backend/postmaster/pgarch.c: 488: \t\t\t\tpg_usleep(1000000L);\t/* wait a bit before retrying */\nsrc/backend/postmaster/autovacuum.c: 444: \t\tpg_usleep(PostAuthDelay * 1000000L);\nsrc/backend/postmaster/autovacuum.c: 564: \t\tpg_usleep(1000000L);\nsrc/backend/postmaster/autovacuum.c: 690: \t\t\t\tpg_usleep(1000000L);\t/* 1s */\nsrc/backend/postmaster/autovacuum.c: 1711: \t\t\tpg_usleep(PostAuthDelay * 1000000L);\nsrc/backend/storage/ipc/procarray.c: 3799: \t\tpg_usleep(100 * 1000L); /* 100ms */\nsrc/backend/utils/init/postinit.c: 985: 
\t\t\tpg_usleep(PostAuthDelay * 1000000L);\nsrc/backend/utils/init/postinit.c: 1192: \t\tpg_usleep(PostAuthDelay * 1000000L);\n\nDid you have reasons for excluding the rest of these?\n\nTaking a step back, I think it might be a mistake to try to share code\nwith the SQL-exposed function; at least, that is causing some API\ndecisions that feel odd. I have mixed emotions about both the use\nof double as the sleep-time datatype, and about the choice of seconds\n(rather than, say, msec) as the unit. Admittedly this is not all bad\n--- for example, several of the calls I list above are delaying for\n100ms, which we can easily accommodate in this scheme as \"0.1\", and\nmaybe it'd be a good idea to hit up the stuff waiting for 10ms too.\nIt still seems unusual for an internal support function though.\nI haven't done the math on it, but are we limiting the precision\nof the sleep (due to roundoff error) in any way that would matter?\n\nA bigger issue is that we almost certainly ought to pass through\na wait event code instead of allowing all these cases to be\nWAIT_EVENT_PG_SLEEP.\n\nI'd skip the unconditional latch reset you added to pg_sleep_sql.\nI realize that's bug-compatible with what happens now, but I still\nthink it's expending extra code to do what might well be the\nwrong thing.\n\nWe should update the header comment for pg_usleep to direct people\nto this new function.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Mar 2023 12:28:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 12:28:28PM -0500, Tom Lane wrote:\n> A quick grep for pg_usleep with large intervals finds rather more\n> than you touched:\n> \n> [...]\n> \n> Did you have reasons for excluding the rest of these?\n\nI'm still looking into each of these, but my main concerns were 1) ensuring\nlatch support had been set up and 2) figuring out the right interrupt\nfunction to use. Thus far, I don't think latch support is a problem\nbecause InitializeLatchSupport() is called quite early. However, I'm not\nas confident with the interrupt functions yet. Some of these multisecond\nsleeps are done very early before the signal handlers are set up. Others\nare done within the sigsetjmp() block. And at least one is in a code path\nthat both the startup process and single-user mode touch.\n\nAt the moment, I'm thinking about either removing the check_interrupts\nfunction pointer argument altogether or making it optional for code paths\nwhere we might not want any interrupt handling to run. In the former\napproach, we could simply call WaitLatch() without a latch. While this\nwouldn't do any interrupt handling, we'd still get custom wait event\nsupport and better responsiveness when the postmaster dies, which is\nstrictly better than what's there today. And many of these sleeps are in\nuncommon or debug paths, so delaying interrupt handling might be okay. In\nthe latter approach, we would have interrupt handling, but I'm worried that\nwould be easy to get wrong. I think it would be nice to have interrupt\nhandling if possible, so I'm still (over)thinking about this.\n\nI agree with the rest of your comments.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 13 Mar 2023 14:16:31 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "On Mon, 13 Mar 2023 at 17:17, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Fri, Mar 10, 2023 at 12:28:28PM -0500, Tom Lane wrote:\n> > A quick grep for pg_usleep with large intervals finds rather more\n> > than you touched:\n> >\n> > [...]\n> >\n> > Did you have reasons for excluding the rest of these?\n>\n> I'm still looking into each of these, but my main concerns were 1) ensuring\n> latch support had been set up and 2) figuring out the right interrupt\n> function to use. Thus far, I don't think latch support is a problem\n> because InitializeLatchSupport() is called quite early. However, I'm not\n> as confident with the interrupt functions yet. Some of these multisecond\n> sleeps are done very early before the signal handlers are set up. Others\n> are done within the sigsetjmp() block. And at least one is in a code path\n> that both the startup process and single-user mode touch.\n\nIs this still a WIP? Is it targeting this release? There are only a\nfew days left before the feature freeze. I'm guessing it should just\nmove to the next CF for the next release?\n\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 3 Apr 2023 16:33:27 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "On Mon, Apr 03, 2023 at 04:33:27PM -0400, Gregory Stark (as CFM) wrote:\n> Is this still a WIP? Is it targeting this release? There are only a\n> few days left before the feature freeze. I'm guessing it should just\n> move to the next CF for the next release?\n\nI moved it to the next commitfest and marked the target version as v17.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 3 Apr 2023 20:31:57 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "> On 4 Apr 2023, at 05:31, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n> On Mon, Apr 03, 2023 at 04:33:27PM -0400, Gregory Stark (as CFM) wrote:\n>> Is this still a WIP? Is it targeting this release? There are only a\n>> few days left before the feature freeze. I'm guessing it should just\n>> move to the next CF for the next release?\n> \n> I moved it to the next commitfest and marked the target version as v17.\n\nHave you had any chance to revisit this such that there is a new patch to\nreview, or should it be returned/moved for now?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 10 Jul 2023 10:12:36 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 10:12:36AM +0200, Daniel Gustafsson wrote:\n> Have you had any chance to revisit this such that there is a new patch to\n> review, or should it be returned/moved for now?\n\nNot yet. I moved it to the next commitfest.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 10 Jul 2023 10:39:19 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 02:16:31PM -0700, Nathan Bossart wrote:\n> At the moment, I'm thinking about either removing the check_interrupts\n> function pointer argument altogether or making it optional for code paths\n> where we might not want any interrupt handling to run. In the former\n> approach, we could simply call WaitLatch() without a latch. While this\n> wouldn't do any interrupt handling, we'd still get custom wait event\n> support and better responsiveness when the postmaster dies, which is\n> strictly better than what's there today. And many of these sleeps are in\n> uncommon or debug paths, so delaying interrupt handling might be okay. In\n> the latter approach, we would have interrupt handling, but I'm worried that\n> would be easy to get wrong. I think it would be nice to have interrupt\n> handling if possible, so I'm still (over)thinking about this.\n\nI started on the former approach (work-in-progress patch attached), but I\nfigured I'd ask whether folks are alright with this before I spend more\ntime on it. Many of the sleeps in question are relatively short, are\nintended for debugging, or are meant to prevent errors from repeating as\nfast as possible, so I don't know if we should bother adding interrupt\nhandling support.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 26 Jul 2023 16:41:25 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_usleep for multisecond delays"
},
{
"msg_contents": "At Wed, 26 Jul 2023 16:41:25 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Mon, Mar 13, 2023 at 02:16:31PM -0700, Nathan Bossart wrote:\n> I started on the former approach (work-in-progress patch attached), but I\n> figured I'd ask whether folks are alright with this before I spend more\n> time on it. Many of the sleeps in question are relatively short, are\n> intended for debugging, or are meant to prevent errors from repeating as\n> fast as possible, so I don't know if we should bother adding interrupt\n> handling support.\n\nIt is responsive to an immediate shutdown. I think that's fine, as\nthings might become overly complex if we aim for it to respond to fast\nshutdown as well.\n\nThe pg_msleep() implemented in the patch accepts a wait event type,\nand some event types other than WAIT_EVENT_PG_SLEEP are passed to it.\n\nWAIT_EVENT_CHECKPOINTER_MAIN:\n\n - At checkpointer.c:309, it is in a long-jump'ed block, where out of\n the main loop.\n\n - At checkpointer.c:1005: RequestCheckpoint is not executed on checkpointer.\n\nWAIT_EVENT_ARCHIVER_MAIN:\n - At pgarch.c:453,485: They are not at the main-loop level idle-waiting.\n\nWAIT_EVENT_PRE/POST_AUTH_DELAY:\n\n - These are really useless since they're not seen anywhere. Client\n backends don't show up in pg_stat_activity until this sleep\n finishes. (We could use ps-display instead, but...)\n\nWAIT_EVENT_VACUUM_DELAY:\n\n - This behaves the same as it did before the change. Actually, we\n don't want to use WAIT_EVENT_PG_SLEEP for it.\n\nSo, we have at least one sensible use case for the parameter, which\nseems to be sufficient reason.\n\nI'm not sure if others will agree, but I'm leaning towards providing a\ndedicated WaitEventSet for the new sleep function.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 27 Jul 2023 11:50:05 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_usleep for multisecond delays"
}
]
[
{
"msg_contents": "Would there be interest in a variant of PQgetCopyData() that re-uses the\nsame buffer for a new row, rather than allocating a new buffer on each\niteration?\n\nI tried it on a toy benchmark, and it reduced client-side CPU time by about\n12%. (Less of a difference in wall-clock time of course; the client was\nonly using the CPU for a bit over half the time.)\n\n\nJeroen\n\nWould there be interest in a variant of PQgetCopyData() that re-uses the same buffer for a new row, rather than allocating a new buffer on each iteration?I tried it on a toy benchmark, and it reduced client-side CPU time by about 12%. (Less of a difference in wall-clock time of course; the client was only using the CPU for a bit over half the time.)Jeroen",
"msg_date": "Fri, 10 Feb 2023 00:25:45 +0100",
"msg_from": "Jeroen Vermeulen <jtvjtv@gmail.com>",
"msg_from_op": true,
"msg_subject": "libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 3:43 PM Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n>\n> Would there be interest in a variant of PQgetCopyData() that re-uses the same buffer for a new row, rather than allocating a new buffer on each iteration?\n>\n> I tried it on a toy benchmark, and it reduced client-side CPU time by about 12%. (Less of a difference in wall-clock time of course; the client was only using the CPU for a bit over half the time.)\n\nInteresting. It might improve logical replication performance too as\nit uses COPY protocol.\n\nDo you mind sharing a patch, test case that you used and steps to\nverify the benefit?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 10 Feb 2023 15:55:53 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "Here's the patch (as a PR just to make it easy to read):\nhttps://github.com/jtv/postgres/pull/1\n\nI don't have an easily readable benchmark yet, since I've been timing the\npotential impact on libpqxx. But can do that next.\n\n\nJeroen\n\nOn Fri, Feb 10, 2023, 11:26 Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Fri, Feb 10, 2023 at 3:43 PM Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n> >\n> > Would there be interest in a variant of PQgetCopyData() that re-uses the\n> same buffer for a new row, rather than allocating a new buffer on each\n> iteration?\n> >\n> > I tried it on a toy benchmark, and it reduced client-side CPU time by\n> about 12%. (Less of a difference in wall-clock time of course; the client\n> was only using the CPU for a bit over half the time.)\n>\n> Interesting. It might improve logical replication performance too as\n> it uses COPY protocol.\n>\n> Do you mind sharing a patch, test case that you used and steps to\n> verify the benefit?\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n\nHere's the patch (as a PR just to make it easy to read): https://github.com/jtv/postgres/pull/1I don't have an easily readable benchmark yet, since I've been timing the potential impact on libpqxx. But can do that next.JeroenOn Fri, Feb 10, 2023, 11:26 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Fri, Feb 10, 2023 at 3:43 PM Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n>\n> Would there be interest in a variant of PQgetCopyData() that re-uses the same buffer for a new row, rather than allocating a new buffer on each iteration?\n>\n> I tried it on a toy benchmark, and it reduced client-side CPU time by about 12%. (Less of a difference in wall-clock time of course; the client was only using the CPU for a bit over half the time.)\n\nInteresting. 
It might improve logical replication performance too as\nit uses COPY protocol.\n\nDo you mind sharing a patch, test case that you used and steps to\nverify the benefit?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 10 Feb 2023 11:32:00 +0100",
"msg_from": "Jeroen Vermeulen <jtvjtv@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "OK, I've updated the PR with a benchmark (in the main directory).\n\nOn this benchmark I'm seeing about a 24% reduction in \"user\" CPU time, and\na 8% reduction in \"system\" CPU time. (Almost no reduction in wall-clock\ntime.)\n\n\nJeroen\n\nOn Fri, 10 Feb 2023 at 11:32, Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n\n> Here's the patch (as a PR just to make it easy to read):\n> https://github.com/jtv/postgres/pull/1\n>\n> I don't have an easily readable benchmark yet, since I've been timing the\n> potential impact on libpqxx. But can do that next.\n>\n>\n> Jeroen\n>\n> On Fri, Feb 10, 2023, 11:26 Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n>> On Fri, Feb 10, 2023 at 3:43 PM Jeroen Vermeulen <jtvjtv@gmail.com>\n>> wrote:\n>> >\n>> > Would there be interest in a variant of PQgetCopyData() that re-uses\n>> the same buffer for a new row, rather than allocating a new buffer on each\n>> iteration?\n>> >\n>> > I tried it on a toy benchmark, and it reduced client-side CPU time by\n>> about 12%. (Less of a difference in wall-clock time of course; the client\n>> was only using the CPU for a bit over half the time.)\n>>\n>> Interesting. It might improve logical replication performance too as\n>> it uses COPY protocol.\n>>\n>> Do you mind sharing a patch, test case that you used and steps to\n>> verify the benefit?\n>>\n>> --\n>> Bharath Rupireddy\n>> PostgreSQL Contributors Team\n>> RDS Open Source Databases\n>> Amazon Web Services: https://aws.amazon.com\n>>\n>\n\nOK, I've updated the PR with a benchmark (in the main directory).On this benchmark I'm seeing about a 24% reduction in \"user\" CPU time, and a 8% reduction in \"system\" CPU time. 
(Almost no reduction in wall-clock time.)JeroenOn Fri, 10 Feb 2023 at 11:32, Jeroen Vermeulen <jtvjtv@gmail.com> wrote:Here's the patch (as a PR just to make it easy to read): https://github.com/jtv/postgres/pull/1I don't have an easily readable benchmark yet, since I've been timing the potential impact on libpqxx. But can do that next.JeroenOn Fri, Feb 10, 2023, 11:26 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Fri, Feb 10, 2023 at 3:43 PM Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n>\n> Would there be interest in a variant of PQgetCopyData() that re-uses the same buffer for a new row, rather than allocating a new buffer on each iteration?\n>\n> I tried it on a toy benchmark, and it reduced client-side CPU time by about 12%. (Less of a difference in wall-clock time of course; the client was only using the CPU for a bit over half the time.)\n\nInteresting. It might improve logical replication performance too as\nit uses COPY protocol.\n\nDo you mind sharing a patch, test case that you used and steps to\nverify the benefit?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 10 Feb 2023 13:19:29 +0100",
"msg_from": "Jeroen Vermeulen <jtvjtv@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 5:49 PM Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n>\n> OK, I've updated the PR with a benchmark (in the main directory).\n>\n> On this benchmark I'm seeing about a 24% reduction in \"user\" CPU time, and a 8% reduction in \"system\" CPU time. (Almost no reduction in wall-clock time.)\n\nI can help run some logical replication performance benchmarks\ntomorrow. Would you mind cleaning the PR and providing the changes\n(there are multiple commits in the PR) as a single patch here for the\nsake of ease of review and test?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Feb 2023 19:18:14 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "Instead of implementing new growable buffer logic in this patch. It\nseems much nicer to reuse the already existing PQExpBuffer type for\nthis patch.\n\n\nOn Mon, 27 Feb 2023 at 14:48, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Feb 10, 2023 at 5:49 PM Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n> >\n> > OK, I've updated the PR with a benchmark (in the main directory).\n> >\n> > On this benchmark I'm seeing about a 24% reduction in \"user\" CPU time, and a 8% reduction in \"system\" CPU time. (Almost no reduction in wall-clock time.)\n>\n> I can help run some logical replication performance benchmarks\n> tomorrow. Would you mind cleaning the PR and providing the changes\n> (there are multiple commits in the PR) as a single patch here for the\n> sake of ease of review and test?\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n>\n\n\n",
"msg_date": "Mon, 27 Feb 2023 15:33:42 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "Done. Thanks for looking!\n\nJelte Fennema pointed out that I should probably be using PQExpBuffer for\nthis. I'll look into that later; this is a proof of concept, not a\nproduction-ready API proposal.\n\n\nJeroen\n\nOn Mon, 27 Feb 2023 at 14:48, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Fri, Feb 10, 2023 at 5:49 PM Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n> >\n> > OK, I've updated the PR with a benchmark (in the main directory).\n> >\n> > On this benchmark I'm seeing about a 24% reduction in \"user\" CPU time,\n> and a 8% reduction in \"system\" CPU time. (Almost no reduction in\n> wall-clock time.)\n>\n> I can help run some logical replication performance benchmarks\n> tomorrow. Would you mind cleaning the PR and providing the changes\n> (there are multiple commits in the PR) as a single patch here for the\n> sake of ease of review and test?\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n\nDone. Thanks for looking!Jelte Fennema pointed out that I should probably be using PQExpBuffer for this. I'll look into that later; this is a proof of concept, not a production-ready API proposal.JeroenOn Mon, 27 Feb 2023 at 14:48, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Fri, Feb 10, 2023 at 5:49 PM Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n>\n> OK, I've updated the PR with a benchmark (in the main directory).\n>\n> On this benchmark I'm seeing about a 24% reduction in \"user\" CPU time, and a 8% reduction in \"system\" CPU time. (Almost no reduction in wall-clock time.)\n\nI can help run some logical replication performance benchmarks\ntomorrow. 
Would you mind cleaning the PR and providing the changes\n(there are multiple commits in the PR) as a single patch here for the\nsake of ease of review and test?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 27 Feb 2023 17:08:09 +0100",
"msg_from": "Jeroen Vermeulen <jtvjtv@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "Update: in the latest iteration, I have an alternative that reduces CPU\ntime by more than half, compared to PQgetCopyData().\n\nAnd the code is simpler, too, both in the client and in libpq itself. The\none downside is that it breaks with libpq's existing API style.\n\nPR for easy discussion: https://github.com/jtv/postgres/pull/1\n\n\nJeroen\n\nOn Mon, 27 Feb 2023 at 17:08, Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n\n> Done. Thanks for looking!\n>\n> Jelte Fennema pointed out that I should probably be using PQExpBuffer for\n> this. I'll look into that later; this is a proof of concept, not a\n> production-ready API proposal.\n>\n>\n> Jeroen\n>\n> On Mon, 27 Feb 2023 at 14:48, Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n>> On Fri, Feb 10, 2023 at 5:49 PM Jeroen Vermeulen <jtvjtv@gmail.com>\n>> wrote:\n>> >\n>> > OK, I've updated the PR with a benchmark (in the main directory).\n>> >\n>> > On this benchmark I'm seeing about a 24% reduction in \"user\" CPU time,\n>> and a 8% reduction in \"system\" CPU time. (Almost no reduction in\n>> wall-clock time.)\n>>\n>> I can help run some logical replication performance benchmarks\n>> tomorrow. Would you mind cleaning the PR and providing the changes\n>> (there are multiple commits in the PR) as a single patch here for the\n>> sake of ease of review and test?\n>>\n>> --\n>> Bharath Rupireddy\n>> PostgreSQL Contributors Team\n>> RDS Open Source Databases\n>> Amazon Web Services: https://aws.amazon.com\n>>\n>\n\nUpdate: in the latest iteration, I have an alternative that reduces CPU time by more than half, compared to PQgetCopyData().And the code is simpler, too, both in the client and in libpq itself. The one downside is that it breaks with libpq's existing API style.PR for easy discussion: https://github.com/jtv/postgres/pull/1JeroenOn Mon, 27 Feb 2023 at 17:08, Jeroen Vermeulen <jtvjtv@gmail.com> wrote:Done. 
Thanks for looking!Jelte Fennema pointed out that I should probably be using PQExpBuffer for this. I'll look into that later; this is a proof of concept, not a production-ready API proposal.JeroenOn Mon, 27 Feb 2023 at 14:48, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Fri, Feb 10, 2023 at 5:49 PM Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n>\n> OK, I've updated the PR with a benchmark (in the main directory).\n>\n> On this benchmark I'm seeing about a 24% reduction in \"user\" CPU time, and a 8% reduction in \"system\" CPU time. (Almost no reduction in wall-clock time.)\n\nI can help run some logical replication performance benchmarks\ntomorrow. Would you mind cleaning the PR and providing the changes\n(there are multiple commits in the PR) as a single patch here for the\nsake of ease of review and test?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 1 Mar 2023 15:23:45 +0100",
"msg_from": "Jeroen Vermeulen <jtvjtv@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "> On 1 Mar 2023, at 15:23, Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n\n> PR for easy discussion: https://github.com/jtv/postgres/pull/1\n\nThe process for discussing work on pgsql-hackers is to attach the patch to the\nemail and discuss it inline in the thread. That way all versions of the patch\nas well as the discussion is archived and searchable. \n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 2 Mar 2023 13:38:37 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "My apologies. The wiki said to discuss early, even before writing the code\nif possible, but I added a link to the PR for those who really wanted to\nsee the details.\n\nI'm attaching a diff now. This is not a patch, it's just a discussion\npiece.\n\nThe problem was that PQgetCopyData loops use a lot of CPU time, and this\nalternative reduces that by a lot.\n\n\nJeroen\n\nOn Thu, 2 Mar 2023 at 13:38, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 1 Mar 2023, at 15:23, Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n>\n> > PR for easy discussion: https://github.com/jtv/postgres/pull/1\n>\n> The process for discussing work on pgsql-hackers is to attach the patch to\n> the\n> email and discuss it inline in the thread. That way all versions of the\n> patch\n> as well as the discussion is archived and searchable.\n>\n> --\n> Daniel Gustafsson\n>\n>",
"msg_date": "Thu, 2 Mar 2023 20:44:48 +0100",
"msg_from": "Jeroen Vermeulen <jtvjtv@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "On Thu, 2 Mar 2023 at 20:45, Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n> I'm attaching a diff now. This is not a patch, it's just a discussion piece.\n\nDid you try with PQExpBuffer? I still think that probably fits better\nin the API design of libpq. Obviously if it's significantly slower\nthan the callback approach in this patch then it's worth considering\nusing the callback approach. Overall it definitely seems reasonable to\nme to have an API that doesn't do an allocation per row.\n\n\n",
"msg_date": "Fri, 3 Mar 2023 16:52:23 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "Thank you.\n\nI meant to try PQExpBuffer — as you say, it fits much better with existing\nlibpq style. But then I hit on the callback idea which was even more\nefficient, by a fair margin. It was also considerably simpler both inside\nlibpq and in the client code, eliminating all sorts of awkward questions\nabout who deallocates the buffer in which situations. So I ditched the\n\"dynamic buffer\" concept and went with the callback.\n\nOne other possible alternative suggests itself: instead of taking a\ncallback and a context pointer, the function could probably just return a\nstruct: status/size, and buffer. Then the caller would have to figure out\nwhether there's a line in the buffer, and if so, process it. It seems like\nmore work for the client code, but it may make the compiler's optimisation\nwork easier.\n\n\nJeroen\n\nOn Fri, 3 Mar 2023 at 16:52, Jelte Fennema <postgres@jeltef.nl> wrote:\n\n> On Thu, 2 Mar 2023 at 20:45, Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n> > I'm attaching a diff now. This is not a patch, it's just a discussion\n> piece.\n>\n> Did you try with PQExpBuffer? I still think that probably fits better\n> in the API design of libpq. Obviously if it's significantly slower\n> than the callback approach in this patch then it's worth considering\n> using the callback approach. Overall it definitely seems reasonable to\n> me to have an API that doesn't do an allocation per row.\n>\n\nThank you.I meant to try PQExpBuffer — as you say, it fits much better with existing libpq style. But then I hit on the callback idea which was even more efficient, by a fair margin. It was also considerably simpler both inside libpq and in the client code, eliminating all sorts of awkward questions about who deallocates the buffer in which situations. 
So I ditched the \"dynamic buffer\" concept and went with the callback.One other possible alternative suggests itself: instead of taking a callback and a context pointer, the function could probably just return a struct: status/size, and buffer. Then the caller would have to figure out whether there's a line in the buffer, and if so, process it. It seems like more work for the client code, but it may make the compiler's optimisation work easier.JeroenOn Fri, 3 Mar 2023 at 16:52, Jelte Fennema <postgres@jeltef.nl> wrote:On Thu, 2 Mar 2023 at 20:45, Jeroen Vermeulen <jtvjtv@gmail.com> wrote:\n> I'm attaching a diff now. This is not a patch, it's just a discussion piece.\n\nDid you try with PQExpBuffer? I still think that probably fits better\nin the API design of libpq. Obviously if it's significantly slower\nthan the callback approach in this patch then it's worth considering\nusing the callback approach. Overall it definitely seems reasonable to\nme to have an API that doesn't do an allocation per row.",
"msg_date": "Fri, 3 Mar 2023 17:16:05 +0100",
"msg_from": "Jeroen Vermeulen <jtvjtv@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "Jelte Fennema <postgres@jeltef.nl> writes:\n> Did you try with PQExpBuffer? I still think that probably fits better\n> in the API design of libpq.\n\nIf you mean exposing PQExpBuffer to users of libpq-fe.h, I'd be very\nseriously against that. I realize that libpq exposes it at an ABI\nlevel, but that doesn't mean we want non-Postgres code to use it.\nI also don't see what it'd add for this particular use-case.\n\nOne thing I don't care for at all in the proposed API spec is the bit\nabout how the handler function can scribble on the passed buffer.\nLet's not do that. Declare it const char *, or maybe better const void *.\n\nRather than duplicating most of pqGetCopyData3, I'd suggest revising\nit to take a callback, where the callback is either user-supplied\nor is supplied by PQgetCopyData to emulate the existing behavior.\nThis would both avoid duplicate coding and provide a simple check that\nyou've made a usable callback API (in particular, one that can do\nsomething sane for error cases).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Mar 2023 11:33:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "On Fri, 3 Mar 2023 at 17:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> If you mean exposing PQExpBuffer to users of libpq-fe.h, I'd be very\n> seriously against that. I realize that libpq exposes it at an ABI\n> level, but that doesn't mean we want non-Postgres code to use it.\n> I also don't see what it'd add for this particular use-case.\n>\n\nFair enough. Never even got around to checking whether it was in the API\nalready.\n\n\n\n> One thing I don't care for at all in the proposed API spec is the bit\n> about how the handler function can scribble on the passed buffer.\n> Let's not do that. Declare it const char *, or maybe better const void *.\n>\n\nPersonally I would much prefer \"char\" over \"void\" here:\n* It really is a char buffer, containing text.\n* If there is to be any type punning, best have it explicit.\n* Reduces risk of getting the two pointer arguments the wrong way around.\n\nAs for const, I would definitely have preferred that. But if the caller\nneeds a zero-terminated string, forcing them into a memcpy() would kind of\ndefeat the purpose.\n\nI even tried poking a terminating zero in there from inside the function,\nbut that made the code significantly less efficient. Optimiser\nassumptions, I suppose.\n\n\nRather than duplicating most of pqGetCopyData3, I'd suggest revising\n> it to take a callback, where the callback is either user-supplied\n> or is supplied by PQgetCopyData to emulate the existing behavior.\n> This would both avoid duplicate coding and provide a simple check that\n> you've made a usable callback API (in particular, one that can do\n> something sane for error cases).\n>\n\nCan do that, sure. I'll also try benchmarking a variant that doesn't take\na callback at all, but gives you the buffer pointer in addition to the\nsize/status return. 
I don't generally like callbacks.\n\n\nJeroen\n\nOn Fri, 3 Mar 2023 at 17:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\nIf you mean exposing PQExpBuffer to users of libpq-fe.h, I'd be very\nseriously against that. I realize that libpq exposes it at an ABI\nlevel, but that doesn't mean we want non-Postgres code to use it.\nI also don't see what it'd add for this particular use-case.Fair enough. Never even got around to checking whether it was in the API already. \nOne thing I don't care for at all in the proposed API spec is the bit\nabout how the handler function can scribble on the passed buffer.\nLet's not do that. Declare it const char *, or maybe better const void *. Personally I would much prefer \"char\" over \"void\" here:* It really is a char buffer, containing text.* If there is to be any type punning, best have it explicit.* Reduces risk of getting the two pointer arguments the wrong way around.As for const, I would definitely have preferred that. But if the caller needs a zero-terminated string, forcing them into a memcpy() would kind of defeat the purpose.I even tried poking a terminating zero in there from inside the function, but that made the code significantly less efficient. Optimiser assumptions, I suppose.\nRather than duplicating most of pqGetCopyData3, I'd suggest revising\nit to take a callback, where the callback is either user-supplied\nor is supplied by PQgetCopyData to emulate the existing behavior.\nThis would both avoid duplicate coding and provide a simple check that\nyou've made a usable callback API (in particular, one that can do\nsomething sane for error cases).Can do that, sure. I'll also try benchmarking a variant that doesn't take a callback at all, but gives you the buffer pointer in addition to the size/status return. I don't generally like callbacks.Jeroen",
"msg_date": "Fri, 3 Mar 2023 18:04:22 +0100",
"msg_from": "Jeroen Vermeulen <jtvjtv@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "Jeroen Vermeulen <jtvjtv@gmail.com> writes:\n> On Fri, 3 Mar 2023 at 17:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Let's not do that. Declare it const char *, or maybe better const void *.\n\n> Personally I would much prefer \"char\" over \"void\" here:\n> * It really is a char buffer, containing text.\n\nNot in binary-mode COPY.\n\n> As for const, I would definitely have preferred that. But if the caller\n> needs a zero-terminated string, forcing them into a memcpy() would kind of\n> defeat the purpose.\n\nI'm willing to grant that avoiding malloc-and-free is worth the trouble.\nI'm not willing to allow applications to scribble on libpq buffers to\navoid memcpy. Even your not-a-patch patch fails to make the case that\nthis is essential, because you could have used fwrite() instead of\nprintf() (which would be significantly faster yet btw, printf formatting\nain't cheap).\n\n> Can do that, sure. I'll also try benchmarking a variant that doesn't take\n> a callback at all, but gives you the buffer pointer in addition to the\n> size/status return. I don't generally like callbacks.\n\nUm ... that would require an assumption that libpq neither changes nor\nmoves that buffer before returning to the caller. I don't much like\nthat either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Mar 2023 12:14:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "On Fri, 3 Mar 2023 at 17:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If you mean exposing PQExpBuffer to users of libpq-fe.h, I'd be very\n> seriously against that. I realize that libpq exposes it at an ABI\n> level, but that doesn't mean we want non-Postgres code to use it.\n\nThe code comment in the pqexpbuffer.h header suggests that client\napplications are fine too use the API to:\n\n> * This module is essentially the same as the backend's StringInfo data type,\n> * but it is intended for use in frontend libpq and client applications.\n\nI know both pg_auto_failover and pgcopydb use it quite a lot.\n\n\n",
"msg_date": "Fri, 3 Mar 2023 18:49:22 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "Jelte Fennema <postgres@jeltef.nl> writes:\n> On Fri, 3 Mar 2023 at 17:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> If you mean exposing PQExpBuffer to users of libpq-fe.h, I'd be very\n>> seriously against that. I realize that libpq exposes it at an ABI\n>> level, but that doesn't mean we want non-Postgres code to use it.\n\n> The code comment in the pqexpbuffer.h header suggests that client\n> applications are fine too use the API to:\n\nOur own client apps, sure. But you have to buy into the whole Postgres\ncompilation environment to use PQExpBuffer. (If you don't believe me,\njust try including pqexpbuffer.h by itself.) That's a non-starter\nfor most clients.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Mar 2023 12:55:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "On Fri, 3 Mar 2023 at 18:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jeroen Vermeulen <jtvjtv@gmail.com> writes:\n> > On Fri, 3 Mar 2023 at 17:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Let's not do that. Declare it const char *, or maybe better const void\n> *.\n>\n> > Personally I would much prefer \"char\" over \"void\" here:\n> > * It really is a char buffer, containing text.\n>\n> Not in binary-mode COPY.\n>\n\nTrue. And in that case zero-termination doesn't matter much either. But\noverall libpq's existing choice seems reasonable.\n\n\n> As for const, I would definitely have preferred that. But if the caller\n> > needs a zero-terminated string, forcing them into a memcpy() would kind\n> of\n> > defeat the purpose.\n>\n> I'm willing to grant that avoiding malloc-and-free is worth the trouble.\n> I'm not willing to allow applications to scribble on libpq buffers to\n> avoid memcpy. Even your not-a-patch patch fails to make the case that\n> this is essential, because you could have used fwrite() instead of\n> printf() (which would be significantly faster yet btw, printf formatting\n> ain't cheap).\n>\n\nYour house, your rules. For my own use-case \"const\" is just peachy.\n\nThe printf() is just the simplest example that sprang to mind though.\nThere may be other use-cases out there involving libraries that require\nzero-terminated strings, and I figured an ability to set a sentinel could\nhelp those.\n\n\n> Can do that, sure. I'll also try benchmarking a variant that doesn't take\n> > a callback at all, but gives you the buffer pointer in addition to the\n> > size/status return. I don't generally like callbacks.\n>\n> Um ... that would require an assumption that libpq neither changes nor\n> moves that buffer before returning to the caller. I don't much like\n> that either.\n>\n\nNot an assumption about _before returning to the caller_ I guess, because\nthe function would be on top of that anyway. 
The concern would be libpq\nchanging or moving the buffer _before the caller is done with the line._\nWhich would require some kind of clear rule about what invalidates the\nbuffer. Yes, that is easier with the callback.\n\n\nJeroen\n\nOn Fri, 3 Mar 2023 at 18:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:Jeroen Vermeulen <jtvjtv@gmail.com> writes:\n> On Fri, 3 Mar 2023 at 17:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Let's not do that. Declare it const char *, or maybe better const void *.\n\n> Personally I would much prefer \"char\" over \"void\" here:\n> * It really is a char buffer, containing text.\n\nNot in binary-mode COPY.True. And in that case zero-termination doesn't matter much either. But overall libpq's existing choice seems reasonable.\n> As for const, I would definitely have preferred that. But if the caller\n> needs a zero-terminated string, forcing them into a memcpy() would kind of\n> defeat the purpose.\n\nI'm willing to grant that avoiding malloc-and-free is worth the trouble.\nI'm not willing to allow applications to scribble on libpq buffers to\navoid memcpy. Even your not-a-patch patch fails to make the case that\nthis is essential, because you could have used fwrite() instead of\nprintf() (which would be significantly faster yet btw, printf formatting\nain't cheap). Your house, your rules. For my own use-case \"const\" is just peachy.The printf() is just the simplest example that sprang to mind though. There may be other use-cases out there involving libraries that require zero-terminated strings, and I figured an ability to set a sentinel could help those.\n> Can do that, sure. I'll also try benchmarking a variant that doesn't take\n> a callback at all, but gives you the buffer pointer in addition to the\n> size/status return. I don't generally like callbacks.\n\nUm ... that would require an assumption that libpq neither changes nor\nmoves that buffer before returning to the caller. I don't much like\nthat either. 
Not an assumption about _before returning to the caller_ I guess, because the function would be on top of that anyway. The concern would be libpq changing or moving the buffer _before the caller is done with the line._ Which would require some kind of clear rule about what invalidates the buffer. Yes, that is easier with the callback.Jeroen",
"msg_date": "Fri, 3 Mar 2023 19:25:48 +0100",
"msg_from": "Jeroen Vermeulen <jtvjtv@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "Jeroen Vermeulen <jtvjtv@gmail.com> writes:\n> The printf() is just the simplest example that sprang to mind though.\n> There may be other use-cases out there involving libraries that require\n> zero-terminated strings, and I figured an ability to set a sentinel could\n> help those.\n\nWell, since it won't help for binary COPY, I'm skeptical that this is\nsomething we should cater to. Anybody who's sufficiently worried about\nperformance to be trying to remove malloc/free cycles ought to be\ninterested in binary COPY as well.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Mar 2023 13:53:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
},
{
"msg_contents": "Interested, yes. But there may be reasons not to do that. Last time I\nlooked the binary format wasn't documented.\n\nAnyway, I tried re-implementing pqGetCopyData3() using the callback.\nWasn't hard, but I did have to add a way for the callback to return an\nerror. Kept it pretty dumb for now, hoping a sensible rule will become\nobvious later.\n\nSaw no obvious performance impact, so that's good.\n\n\nJeroen\n\nOn Fri, 3 Mar 2023 at 19:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jeroen Vermeulen <jtvjtv@gmail.com> writes:\n> > The printf() is just the simplest example that sprang to mind though.\n> > There may be other use-cases out there involving libraries that require\n> > zero-terminated strings, and I figured an ability to set a sentinel could\n> > help those.\n>\n> Well, since it won't help for binary COPY, I'm skeptical that this is\n> something we should cater to. Anybody who's sufficiently worried about\n> performance to be trying to remove malloc/free cycles ought to be\n> interested in binary COPY as well.\n>\n> regards, tom lane\n>",
"msg_date": "Fri, 3 Mar 2023 20:30:37 +0100",
"msg_from": "Jeroen Vermeulen <jtvjtv@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: libpq: PQgetCopyData() and allocation overhead"
}
] |
[
{
"msg_contents": "pgstat: Track more detailed relation IO statistics\n\nCommit 28e626bde00 introduced the infrastructure for tracking more detailed IO\nstatistics. This commit adds the actual collection of the new IO statistics\nfor relations and temporary relations. See aforementioned commit for goals and\nhigh-level design.\n\nThe changes in this commit are fairly straight-forward. The bulk of the change\nis to passing sufficient information to the callsites of pgstat_count_io_op().\n\nA somewhat unsightly detail is that it currently is hard to find a better\nplace to count fsyncs than in md.c, whereas the other pgstat_count_io_op()\ncalls are in bufmgr.c/localbuf.c. As the number of fsyncs is tied to md.c\nimplementation details, it's not obvious there is a better answer.\n\nAuthor: Melanie Plageman <melanieplageman@gmail.com>\nReviewed-by: Andres Freund <andres@anarazel.de>\nDiscussion: https://postgr.es/m/20200124195226.lth52iydq2n2uilq@alap3.anarazel.de\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/f30d62c2fc60acfa62d3b83a73dc9bf7f83cfe2f\n\nModified Files\n--------------\nsrc/backend/storage/buffer/bufmgr.c | 110 +++++++++++++++++++++++++++++-----\nsrc/backend/storage/buffer/freelist.c | 58 +++++++++++++-----\nsrc/backend/storage/buffer/localbuf.c | 13 +++-\nsrc/backend/storage/smgr/md.c | 24 ++++++++\nsrc/include/storage/buf_internals.h | 8 ++-\nsrc/include/storage/bufmgr.h | 7 ++-\n6 files changed, 184 insertions(+), 36 deletions(-)",
"msg_date": "Fri, 10 Feb 2023 06:24:44 +0000",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pgsql: pgstat: Track more detailed relation IO statistics"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 1:24 AM Andres Freund <andres@anarazel.de> wrote:\n> pgstat: Track more detailed relation IO statistics\n\nI assume there's more commits coming here, but for the record, the\nprevious patch introduced a reference to pg_stat_io in the\ndocumentation, but just a brief reference and not full details. This\npatch introduced a reference to it in the comments. But it doesn't\nactually exist yet.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 10 Feb 2023 09:23:35 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: pgstat: Track more detailed relation IO statistics"
}
] |
[
{
"msg_contents": "Right now, it says that the default locale_provider is libc; but\nactually it's the same as the template from which the database is\ncreated.\n\nDoc patch attached.\n\nI also adjusted the wording of both CREATE DATABASE and CREATE\nCOLLATION to be more definite that there are two providers.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Fri, 10 Feb 2023 11:05:55 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Doc fix for CREATE DATABASE"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 11:05:55AM -0800, Jeff Davis wrote:\n> Right now, it says that the default locale_provider is libc; but\n> actually it's the same as the template from which the database is\n> created.\n> \n> Doc patch attached.\n> \n> I also adjusted the wording of both CREATE DATABASE and CREATE\n> COLLATION to be more definite that there are two providers.\n\nLooks reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 10 Feb 2023 13:22:43 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc fix for CREATE DATABASE"
}
] |
[
{
"msg_contents": "Hi\n\nJust a small note - I executed VACUUM ANALYZE on one customer's database,\nand I had to cancel it after a few hours, because it had more than 20GB RAM\n(almost all physical RAM). The memory leak is probably not too big. This\ndatabase is a little bit unusual. This one database has more than 1 800\n000 tables. and the same number of indexes.\n\nRegards\n\nPavel\n\nHiJust a small note - I executed VACUUM ANALYZE on one customer's database, and I had to cancel it after a few hours, because it had more than 20GB RAM (almost all physical RAM). The memory leak is probably not too big. This database is a little bit unusual. This one database has more than 1 800 000 tables. and the same number of indexes.RegardsPavel",
"msg_date": "Fri, 10 Feb 2023 21:09:06 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "possible memory leak in VACUUM ANALYZE"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-10 21:09:06 +0100, Pavel Stehule wrote:\n> Just a small note - I executed VACUUM ANALYZE on one customer's database,\n> and I had to cancel it after a few hours, because it had more than 20GB RAM\n> (almost all physical RAM).\n\nJust to make sure: You're certain this was an actual memory leak, not just\nvacuum ending up having referenced all of shared_buffers? Unless you use huge\npages, RSS increases over time, as a process touched more and more pages in\nshared memory. Of course that couldn't explain rising above shared_buffers +\noverhead.\n\n\n> The memory leak is probably not too big. This database is a little bit\n> unusual. This one database has more than 1 800 000 tables. and the same\n> number of indexes.\n\nIf you have 1.8 million tables in a single database, what you saw might just\nhave been the size of the relation and catalog caches.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Feb 2023 12:18:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: possible memory leak in VACUUM ANALYZE"
},
{
"msg_contents": "pá 10. 2. 2023 v 21:18 odesílatel Andres Freund <andres@anarazel.de> napsal:\n\n> Hi,\n>\n> On 2023-02-10 21:09:06 +0100, Pavel Stehule wrote:\n> > Just a small note - I executed VACUUM ANALYZE on one customer's database,\n> > and I had to cancel it after a few hours, because it had more than 20GB\n> RAM\n> > (almost all physical RAM).\n>\n> Just to make sure: You're certain this was an actual memory leak, not just\n> vacuum ending up having referenced all of shared_buffers? Unless you use\n> huge\n> pages, RSS increases over time, as a process touched more and more pages in\n> shared memory. Of course that couldn't explain rising above\n> shared_buffers +\n> overhead.\n>\n>\n> > The memory leak is probably not too big. This database is a little bit\n> > unusual. This one database has more than 1 800 000 tables. and the same\n> > number of indexes.\n>\n> If you have 1.8 million tables in a single database, what you saw might\n> just\n> have been the size of the relation and catalog caches.\n>\n\ncan be\n\nRegards\n\nPavel\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n\npá 10. 2. 2023 v 21:18 odesílatel Andres Freund <andres@anarazel.de> napsal:Hi,\n\nOn 2023-02-10 21:09:06 +0100, Pavel Stehule wrote:\n> Just a small note - I executed VACUUM ANALYZE on one customer's database,\n> and I had to cancel it after a few hours, because it had more than 20GB RAM\n> (almost all physical RAM).\n\nJust to make sure: You're certain this was an actual memory leak, not just\nvacuum ending up having referenced all of shared_buffers? Unless you use huge\npages, RSS increases over time, as a process touched more and more pages in\nshared memory. Of course that couldn't explain rising above shared_buffers +\noverhead.\n\n\n> The memory leak is probably not too big. This database is a little bit\n> unusual. This one database has more than 1 800 000 tables. 
and the same\n> number of indexes.\n\nIf you have 1.8 million tables in a single database, what you saw might just\nhave been the size of the relation and catalog caches.can beRegardsPavel \n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 10 Feb 2023 21:23:11 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: possible memory leak in VACUUM ANALYZE"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 09:23:11PM +0100, Pavel Stehule wrote:\n> p� 10. 2. 2023 v 21:18 odes�latel Andres Freund <andres@anarazel.de> napsal:\n> >\n> > On 2023-02-10 21:09:06 +0100, Pavel Stehule wrote:\n> > > Just a small note - I executed VACUUM ANALYZE on one customer's database,\n> > > and I had to cancel it after a few hours, because it had more than 20GB RAM\n> > > (almost all physical RAM).\n> >\n> > Just to make sure: You're certain this was an actual memory leak, not just\n> > vacuum ending up having referenced all of shared_buffers? Unless you use huge\n> > pages, RSS increases over time, as a process touched more and more pages in\n> > shared memory. Of course that couldn't explain rising above\n> > shared_buffers + overhead.\n> >\n> > > The memory leak is probably not too big. This database is a little bit\n> > > unusual. This one database has more than 1 800 000 tables. and the same\n> > > number of indexes.\n> >\n> > If you have 1.8 million tables in a single database, what you saw might just\n> > have been the size of the relation and catalog caches.\n> \n> can be\n\nWell, how big was shared_buffers on that instance ?\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 10 Feb 2023 16:01:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: possible memory leak in VACUUM ANALYZE"
},
{
"msg_contents": "pá 10. 2. 2023 v 23:01 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Fri, Feb 10, 2023 at 09:23:11PM +0100, Pavel Stehule wrote:\n> > pá 10. 2. 2023 v 21:18 odesílatel Andres Freund <andres@anarazel.de>\n> napsal:\n> > >\n> > > On 2023-02-10 21:09:06 +0100, Pavel Stehule wrote:\n> > > > Just a small note - I executed VACUUM ANALYZE on one customer's\n> database,\n> > > > and I had to cancel it after a few hours, because it had more than\n> 20GB RAM\n> > > > (almost all physical RAM).\n> > >\n> > > Just to make sure: You're certain this was an actual memory leak, not\n> just\n> > > vacuum ending up having referenced all of shared_buffers? Unless you\n> use huge\n> > > pages, RSS increases over time, as a process touched more and more\n> pages in\n> > > shared memory. Of course that couldn't explain rising above\n> > > shared_buffers + overhead.\n> > >\n> > > > The memory leak is probably not too big. This database is a little\n> bit\n> > > > unusual. This one database has more than 1 800 000 tables. and the\n> same\n> > > > number of indexes.\n> > >\n> > > If you have 1.8 million tables in a single database, what you saw\n> might just\n> > > have been the size of the relation and catalog caches.\n> >\n> > can be\n>\n> Well, how big was shared_buffers on that instance ?\n>\n\n20GB RAM\n20GB swap\n2GB shared buffers\n\n\n\n>\n> --\n> Justin\n>\n\npá 10. 2. 2023 v 23:01 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:On Fri, Feb 10, 2023 at 09:23:11PM +0100, Pavel Stehule wrote:\n> pá 10. 2. 
2023 v 21:18 odesílatel Andres Freund <andres@anarazel.de> napsal:\n> >\n> > On 2023-02-10 21:09:06 +0100, Pavel Stehule wrote:\n> > > Just a small note - I executed VACUUM ANALYZE on one customer's database,\n> > > and I had to cancel it after a few hours, because it had more than 20GB RAM\n> > > (almost all physical RAM).\n> >\n> > Just to make sure: You're certain this was an actual memory leak, not just\n> > vacuum ending up having referenced all of shared_buffers? Unless you use huge\n> > pages, RSS increases over time, as a process touched more and more pages in\n> > shared memory. Of course that couldn't explain rising above\n> > shared_buffers + overhead.\n> >\n> > > The memory leak is probably not too big. This database is a little bit\n> > > unusual. This one database has more than 1 800 000 tables. and the same\n> > > number of indexes.\n> >\n> > If you have 1.8 million tables in a single database, what you saw might just\n> > have been the size of the relation and catalog caches.\n> \n> can be\n\nWell, how big was shared_buffers on that instance ?20GB RAM20GB swap2GB shared buffers \n\n-- \nJustin",
"msg_date": "Sat, 11 Feb 2023 07:06:45 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: possible memory leak in VACUUM ANALYZE"
},
{
"msg_contents": "On Sat, Feb 11, 2023 at 07:06:45AM +0100, Pavel Stehule wrote:\n> p� 10. 2. 2023 v 23:01 odes�latel Justin Pryzby <pryzby@telsasoft.com> napsal:\n> > On Fri, Feb 10, 2023 at 09:23:11PM +0100, Pavel Stehule wrote:\n> > > p� 10. 2. 2023 v 21:18 odes�latel Andres Freund <andres@anarazel.de> napsal:\n> > > > On 2023-02-10 21:09:06 +0100, Pavel Stehule wrote:\n> > > > > Just a small note - I executed VACUUM ANALYZE on one customer's database,\n> > > > > and I had to cancel it after a few hours, because it had more than 20GB RAM\n> > > > > (almost all physical RAM).\n> > > >\n> > > > Just to make sure: You're certain this was an actual memory leak, not just\n> > > > vacuum ending up having referenced all of shared_buffers? Unless you use huge\n> > > > pages, RSS increases over time, as a process touched more and more pages in\n> > > > shared memory. Of course that couldn't explain rising above\n> > > > shared_buffers + overhead.\n> > > >\n> > > > > The memory leak is probably not too big. This database is a little bit\n> > > > > unusual. This one database has more than 1 800 000 tables. and the same\n> > > > > number of indexes.\n> > > >\n> > > > If you have 1.8 million tables in a single database, what you saw might just\n> > > > have been the size of the relation and catalog caches.\n> > >\n> > > can be\n> >\n> > Well, how big was shared_buffers on that instance ?\n> \n> 20GB RAM\n> 20GB swap\n> 2GB shared buffers\n\nThanks; so that can't explain using more than 2GB + a bit of overhead.\n\nCan you reproduce the problem and figure out which relation was being\nprocessed, or if the memory use is growing across relations?\npg_stat_progress_analyze/vacuum would be one thing to check.\nDoes VACUUM alone trigger the issue ? What about ANALYZE ?\n\nWas parallel vacuum happening (are there more than one index per table) ?\n\nDo you have any extended stats objects or non-default stats targets ?\nWhat server version is it? What OS? Extensions? 
Non-btree indexes?\n\nBTW I'm interested about this because I have an VM instance running v15\nwhich has been killed more than a couple times in the last 6 months, and\nI haven't been able to diagnose why. But autovacuum/analyze could\nexplain it. On this one particular instance, we don't have many\nrelations, though...\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 11 Feb 2023 00:53:48 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: possible memory leak in VACUUM ANALYZE"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-11 00:53:48 -0600, Justin Pryzby wrote:\n> On Sat, Feb 11, 2023 at 07:06:45AM +0100, Pavel Stehule wrote:\n> > p� 10. 2. 2023 v 23:01 odes�latel Justin Pryzby <pryzby@telsasoft.com> napsal:\n> > > On Fri, Feb 10, 2023 at 09:23:11PM +0100, Pavel Stehule wrote:\n> > > > p� 10. 2. 2023 v 21:18 odes�latel Andres Freund <andres@anarazel.de> napsal:\n> > > > > On 2023-02-10 21:09:06 +0100, Pavel Stehule wrote:\n> > > > > > Just a small note - I executed VACUUM ANALYZE on one customer's database,\n> > > > > > and I had to cancel it after a few hours, because it had more than 20GB RAM\n> > > > > > (almost all physical RAM).\n> > > > >\n> > > > > Just to make sure: You're certain this was an actual memory leak, not just\n> > > > > vacuum ending up having referenced all of shared_buffers? Unless you use huge\n> > > > > pages, RSS increases over time, as a process touched more and more pages in\n> > > > > shared memory. Of course that couldn't explain rising above\n> > > > > shared_buffers + overhead.\n> > > > >\n> > > > > > The memory leak is probably not too big. This database is a little bit\n> > > > > > unusual. This one database has more than 1 800 000 tables. 
and the same\n> > > > > > number of indexes.\n> > > > >\n> > > > > If you have 1.8 million tables in a single database, what you saw might just\n> > > > > have been the size of the relation and catalog caches.\n> > > >\n> > > > can be\n> > >\n> > > Well, how big was shared_buffers on that instance ?\n> > \n> > 20GB RAM\n> > 20GB swap\n> > 2GB shared buffers\n> \n> Thanks; so that can't explain using more than 2GB + a bit of overhead.\n\nI think my theory of 1.8 million relcache / catcache entries is pretty good...\n\nI'd do the vacuum analyze again, interrupt once memory usage is high, and\ncheck\n SELECT * FROM pg_backend_memory_contexts ORDER BY total_bytes DESC\n\n\n> BTW I'm interested about this because I have an VM instance running v15\n> which has been killed more than a couple times in the last 6 months, and\n> I haven't been able to diagnose why. But autovacuum/analyze could\n> explain it. On this one particular instance, we don't have many\n> relations, though...\n\nKilled in what way? OOM?\n\nIf you'd set up strict overcommit you'd get a nice memory dump in the log...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Feb 2023 23:18:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: possible memory leak in VACUUM ANALYZE"
},
{
"msg_contents": "so 11. 2. 2023 v 7:53 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Sat, Feb 11, 2023 at 07:06:45AM +0100, Pavel Stehule wrote:\n> > pá 10. 2. 2023 v 23:01 odesílatel Justin Pryzby <pryzby@telsasoft.com>\n> napsal:\n> > > On Fri, Feb 10, 2023 at 09:23:11PM +0100, Pavel Stehule wrote:\n> > > > pá 10. 2. 2023 v 21:18 odesílatel Andres Freund <andres@anarazel.de>\n> napsal:\n> > > > > On 2023-02-10 21:09:06 +0100, Pavel Stehule wrote:\n> > > > > > Just a small note - I executed VACUUM ANALYZE on one customer's\n> database,\n> > > > > > and I had to cancel it after a few hours, because it had more\n> than 20GB RAM\n> > > > > > (almost all physical RAM).\n> > > > >\n> > > > > Just to make sure: You're certain this was an actual memory leak,\n> not just\n> > > > > vacuum ending up having referenced all of shared_buffers? Unless\n> you use huge\n> > > > > pages, RSS increases over time, as a process touched more and more\n> pages in\n> > > > > shared memory. Of course that couldn't explain rising above\n> > > > > shared_buffers + overhead.\n> > > > >\n> > > > > > The memory leak is probably not too big. This database is a\n> little bit\n> > > > > > unusual. This one database has more than 1 800 000 tables. and\n> the same\n> > > > > > number of indexes.\n> > > > >\n> > > > > If you have 1.8 million tables in a single database, what you saw\n> might just\n> > > > > have been the size of the relation and catalog caches.\n> > > >\n> > > > can be\n> > >\n> > > Well, how big was shared_buffers on that instance ?\n> >\n> > 20GB RAM\n> > 20GB swap\n> > 2GB shared buffers\n>\n> Thanks; so that can't explain using more than 2GB + a bit of overhead.\n>\n> Can you reproduce the problem and figure out which relation was being\n> processed, or if the memory use is growing across relations?\n> pg_stat_progress_analyze/vacuum would be one thing to check.\n> Does VACUUM alone trigger the issue ? 
What about ANALYZE ?\n>\n\nI executed VACUUM and ANALYZE separately, and memory grew in both cases\nat a similar speed.\n\nAlmost all tables have fewer than 120 pages and fewer than a thousand tuples,\n\nand almost all tables have twenty fields + one geometry field + a GiST index.\n\nThe size of pg_attribute is about 6GB and pg_class is about 2GB.\n\nUnfortunately, that is the customer's production server, and I don't have full\naccess there, so I cannot do a deeper investigation.\n\n\n>\n> Was parallel vacuum happening (are there more than one index per table) ?\n>\n\nprobably not - almost all tables are small\n\n\n>\n> Do you have any extended stats objects or non-default stats targets ?\n> What server version is it? What OS? Extensions? Non-btree indexes?\n>\n\nPostgreSQL 14 on Linux with PostGIS installed\n\n\n>\n> BTW I'm interested about this because I have an VM instance running v15\n> which has been killed more than a couple times in the last 6 months, and\n> I haven't been able to diagnose why. But autovacuum/analyze could\n> explain it. On this one particular instance, we don't have many\n> relations, though...\n>\n> --\n> Justin\n>",
"msg_date": "Sat, 11 Feb 2023 08:20:37 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: possible memory leak in VACUUM ANALYZE"
}
] |
[
{
"msg_contents": "Hi,\n\nOn 2023-02-10 16:50:32 +0100, Drouvot, Bertrand wrote:\n> On 2/10/23 3:32 AM, Kyotaro Horiguchi wrote:\n> > The summarization is needed only by\n> > few callers but now that cost is imposed to the all callers along with\n> > additional palloc()/pfree() calls. That doesn't seem reasonable.\n> > \n> \n> I agree that's not the best approach.....\n\nI think it's completely fine to do unnecessary reconciliation for the _xact_\nfunctions. They're not that commonly used, and very rarely is there a huge\nnumber of relations with lots of pending data across lots of subtransactions.\n\n\n> Let me come back with another proposal (thinking to increment reconciled\n> counters in pgstat_count_heap_insert(), pgstat_count_heap_delete() and\n> pgstat_count_heap_update()).\n\nThose are the performance-crucial functions; we shouldn't do any additional\nwork there if we can avoid it. Shifting cost from the \"looking at\ntransactional stats\" side to the collecting-stats side is the opposite of what\nwe should do.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Feb 2023 12:15:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Reconcile stats in find_tabstat_entry() and get rid of\n PgStat_BackendFunctionEntry"
}
] |
[
{
"msg_contents": "Outer-join removal does this:\n\n /*\n * Mark the rel as \"dead\" to show it is no longer part of the join tree.\n * (Removing it from the baserel array altogether seems too risky.)\n */\n rel->reloptkind = RELOPT_DEADREL;\n\nwhich apparently I thought was a good idea in 2010 (cf b78f6264e),\nbut looking at it now it just seems like an invitation to fail to\ndetect bugs. We've had a couple of recent reports of indxpath.c\nfailing like this:\n\nProgram terminated with signal SIGABRT, Aborted.\n#0 __GI_raise (sig=sig(at)entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007f0b57bc6859 in __GI_abort () at abort.c:79\n#2 0x0000555ec56d3ff3 in ExceptionalCondition (conditionName=0x555ec5887ff0\n\"outer_rel->rows > 0\", fileName=0x555ec5887f2c \"indxpath.c\",\nlineNumber=1909) at assert.c:66\n#3 0x0000555ec538a67a in get_loop_count (root=0x555ec5f72680, cur_relid=3,\nouter_relids=0x555ec5f93960) at indxpath.c:1909\n#4 0x0000555ec5388b5e in build_index_paths (root=0x555ec5f72680,\nrel=0x555ec5f8f648, index=0x555ec5f8ca90, clauses=0x7fffeea57480,\nuseful_predicate=false, scantype=ST_BITMAPSCAN, skip_nonnative_saop=0x0,\nskip_lower_saop=0x0) at indxpath.c:957\n\nThis is pretty impenetrable at first glance, but what it boils\ndown to is something accessing a \"dead\" rel and taking its contents\nat face value. Fortunately the contents triggered an assertion,\nbut that hardly seems like something to count on to detect bugs.\n\nI think it's time to clean this up by removing the rel from the\nplanner data structures altogether. The attached passes check-world,\nand if it does trigger any problems I would say that's a clear\nsign of bugs elsewhere.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 10 Feb 2023 15:50:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Killing off removed rels properly"
},
{
"msg_contents": "On Sat, Feb 11, 2023 at 4:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I think it's time to clean this up by removing the rel from the\n> planner data structures altogether. The attached passes check-world,\n> and if it does trigger any problems I would say that's a clear\n> sign of bugs elsewhere.\n\n\n+1. The patch looks good to me. One minor comment is that we should\nalso remove the comments about RELOPT_DEADREL in pathnodes.h.\n\n * Lastly, there is a RelOptKind for \"dead\" relations, which are base rels\n * that we have proven we don't need to join after all.\n\nThanks\nRichard",
"msg_date": "Mon, 13 Feb 2023 17:57:49 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Killing off removed rels properly"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Sat, Feb 11, 2023 at 4:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think it's time to clean this up by removing the rel from the\n>> planner data structures altogether. The attached passes check-world,\n>> and if it does trigger any problems I would say that's a clear\n>> sign of bugs elsewhere.\n\n> +1. The patch looks good to me.\n\nPushed, thanks for looking at it!\n\n> One minor comment is that we should\n> also remove the comments about RELOPT_DEADREL in pathnodes.h.\n\nYeah, I noticed that shortly after posting the patch :-(. Searching\nthe optimizer code for other references to \"dead\" rels turned up another\nplace where the comments need fixed, namely match_foreign_keys_to_quals\n... which is someplace I should have thought to check before, given the\nreference to it in remove_rel_from_query. Its code is fine as-is though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Feb 2023 13:39:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Killing off removed rels properly"
},
{
"msg_contents": "Hello Tom,\n\n13.02.2023 21:39, Tom Lane wrote:\n> Richard Guo <guofenglinux@gmail.com> writes:\n>> On Sat, Feb 11, 2023 at 4:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I think it's time to clean this up by removing the rel from the\n>>> planner data structures altogether. The attached passes check-world,\n>>> and if it does trigger any problems I would say that's a clear\n>>> sign of bugs elsewhere.\nAfter this change the following query triggers an assert:\n\nCREATE TABLE tt (tid integer PRIMARY KEY) PARTITION BY LIST (tid);\nCREATE TABLE ttp PARTITION OF tt DEFAULT;\nCREATE TABLE st (sid integer);\n\nMERGE INTO tt USING st ON tt.tid = st.sid WHEN NOT MATCHED THEN INSERT \nVALUES (st.sid);\n...\n#5 0x0000556fe84647eb in ExceptionalCondition \n(conditionName=0x556fe8619a46 \"operation != CMD_MERGE\",\n fileName=0x556fe8618b73 \"createplan.c\", lineNumber=7121) at assert.c:66\n#6 0x0000556fe8126502 in make_modifytable (root=0x556fe945be40, \nsubplan=0x556fe9474420, operation=CMD_MERGE,\n canSetTag=true, nominalRelation=1, rootRelation=1, \npartColsUpdated=false, resultRelations=0x556fe9475bb0,\n updateColnosLists=0x0, withCheckOptionLists=0x0, \nreturningLists=0x0, rowMarks=0x0, onconflict=0x0,\n mergeActionLists=0x556fe9475c00, epqParam=0) at createplan.c:7121\n#7 0x0000556fe811d479 in create_modifytable_plan (root=0x556fe945be40, \nbest_path=0x556fe9474a40)\n at createplan.c:2820\n#8 0x0000556fe811912a in create_plan_recurse (root=0x556fe945be40, \nbest_path=0x556fe9474a40, flags=1)\n at createplan.c:530\n#9 0x0000556fe8118ca8 in create_plan (root=0x556fe945be40, \nbest_path=0x556fe9474a40) at createplan.c:347\n#10 0x0000556fe812d4fd in standard_planner (parse=0x556fe937c2d0,\n query_string=0x556fe937b178 \"MERGE INTO tt USING st ON tt.tid = \nst.sid WHEN NOT MATCHED THEN INSERT VALUES (st.sid);\", \ncursorOptions=2048, boundParams=0x0) at planner.c:418\n...\n\nIt seems that before e9a20e451 the other branch of the following \ncondition in 
make_modifytable() was executed:\n /*\n * If possible, we want to get the FdwRoutine from our \nRelOptInfo for\n * the table. But sometimes we don't have a RelOptInfo and \nmust get\n * it the hard way. (In INSERT, the target relation is not \nscanned,\n * so it's not a baserel; and there are also corner cases for\n * updatable views where the target rel isn't a baserel.)\n */\n if (rti < root->simple_rel_array_size &&\n root->simple_rel_array[rti] != NULL)\n {\n...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 20 Feb 2023 14:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Killing off removed rels properly"
},
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> After this change the following query triggers an assert:\n\n> CREATE TABLE tt (tid integer PRIMARY KEY) PARTITION BY LIST (tid);\n> CREATE TABLE ttp PARTITION OF tt DEFAULT;\n> CREATE TABLE st (sid integer);\n\n> MERGE INTO tt USING st ON tt.tid = st.sid WHEN NOT MATCHED THEN INSERT \n> VALUES (st.sid);\n\nHmph. Yeah, I think that's just wrong: the cases of found-a-baserel\nand didn't-find-a-baserel should be treating MERGE-rejection identically.\nThis is probably broken even before e9a20e451.\n\nThanks for the report!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Feb 2023 11:33:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Killing off removed rels properly"
},
{
"msg_contents": "Hmm, there's something else going on here. After getting rid of the\nassertion failure, I see that the plan looks like\n\n# explain MERGE INTO tt USING st ON tt.tid = st.sid WHEN NOT MATCHED THEN INSERT \nVALUES (st.sid);\n QUERY PLAN \n-------------------------------------------------------------\n Merge on tt (cost=0.00..35.50 rows=0 width=0)\n -> Seq Scan on st (cost=0.00..35.50 rows=2550 width=10)\n(2 rows)\n\nwhich is fairly nonsensical and doesn't match v15's plan:\n\n Merge on tt (cost=0.15..544.88 rows=0 width=0)\n Merge on ttp tt_1\n -> Nested Loop Left Join (cost=0.15..544.88 rows=32512 width=14)\n -> Seq Scan on st (cost=0.00..35.50 rows=2550 width=4)\n -> Index Scan using ttp_pkey on ttp tt_1 (cost=0.15..0.19 rows=1 widt\nh=14)\n Index Cond: (tid = st.sid)\n\nIt looks like we're somehow triggering the elide-a-left-join code\nwhen we shouldn't? That explains why the target table's RelOptInfo\nhas gone missing and broken make_modifytable's expectations.\nThat code is still unnecessarily fragile so I intend to rearrange it,\nbut there's more to do here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Feb 2023 11:53:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Killing off removed rels properly"
},
{
"msg_contents": "I wrote:\n> Hmm, there's something else going on here. After getting rid of the\n> assertion failure, I see that the plan looks like\n\n> # explain MERGE INTO tt USING st ON tt.tid = st.sid WHEN NOT MATCHED THEN INSERT \n> VALUES (st.sid);\n> QUERY PLAN \n> -------------------------------------------------------------\n> Merge on tt (cost=0.00..35.50 rows=0 width=0)\n> -> Seq Scan on st (cost=0.00..35.50 rows=2550 width=10)\n> (2 rows)\n\n> which is fairly nonsensical and doesn't match v15's plan:\n\n> Merge on tt (cost=0.15..544.88 rows=0 width=0)\n> Merge on ttp tt_1\n> -> Nested Loop Left Join (cost=0.15..544.88 rows=32512 width=14)\n> -> Seq Scan on st (cost=0.00..35.50 rows=2550 width=4)\n> -> Index Scan using ttp_pkey on ttp tt_1 (cost=0.15..0.19 rows=1 widt\n> h=14)\n> Index Cond: (tid = st.sid)\n\n> It looks like we're somehow triggering the elide-a-left-join code\n> when we shouldn't?\n\nA quick bisect session shows that this broke at\n\n3c569049b7b502bb4952483d19ce622ff0af5fd6 is the first bad commit\ncommit 3c569049b7b502bb4952483d19ce622ff0af5fd6\nAuthor: David Rowley <drowley@postgresql.org>\nDate: Mon Jan 9 17:15:08 2023 +1300\n\n Allow left join removals and unique joins on partitioned tables\n \nbut I suspect that that's merely exposed a pre-existing deficiency\nin MERGE planning. ttp should not have been a candidate for join\nremoval, because the plan should require fetching (at least) its\nctid. I suspect that somebody cowboy-coded the MERGE support in\nsuch a way that the required row identity vars don't get added to\nrelation targetlists, or at least not till too late to stop join\nremoval. I've not run it to earth though.\n\nBut while I'm looking at this, 3c569049b seems kind of broken on\nits own terms. Why is it populating so little of the IndexOptInfo\nfor a partitioned index? 
I realize that we're not going to directly\nplan anything using such an index, but not populating fields like\nsortopfamily seems like it's at least leaving stuff on the table,\nand at worst making faulty decisions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Feb 2023 13:11:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Killing off removed rels properly"
},
{
"msg_contents": "I wrote:\n>> It looks like we're somehow triggering the elide-a-left-join code\n>> when we shouldn't?\n\nSo the reason why we see this with a partitioned target table and not\nwith a regular target table reduces to this bit in preprocess_targetlist:\n\n /*\n * For non-inherited UPDATE/DELETE/MERGE, register any junk column(s)\n * needed to allow the executor to identify the rows to be updated or\n * deleted. In the inheritance case, we do nothing now, leaving this to\n * be dealt with when expand_inherited_rtentry() makes the leaf target\n * relations. (But there might not be any leaf target relations, in which\n * case we must do this in distribute_row_identity_vars().)\n */\n if ((command_type == CMD_UPDATE || command_type == CMD_DELETE ||\n command_type == CMD_MERGE) &&\n !target_rte->inh)\n {\n /* row-identity logic expects to add stuff to processed_tlist */\n root->processed_tlist = tlist;\n add_row_identity_columns(root, result_relation,\n target_rte, target_relation);\n tlist = root->processed_tlist;\n }\n\nThis happens before join removal, so that we see use of the row identity\ncolumns of a regular table as a reason not to do join removal. But\nexpand_inherited_rtentry will happen after that, too late to stop join\nremoval.\n\nI think the best fix is just to teach join removal that it mustn't\nremove the result relation, as attached (needs a regression test).\n\nI'm a little inclined to back-patch this, even though I think it's\nprobably unreachable in v15. It's a cheap enough safety measure,\nand at the very least it will save a few cycles deciding that we\ncan't remove the target table of a MERGE.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 20 Feb 2023 14:13:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Killing off removed rels properly"
},
{
"msg_contents": "I wrote:\n> But while I'm looking at this, 3c569049b seems kind of broken on\n> its own terms. Why is it populating so little of the IndexOptInfo\n> for a partitioned index? I realize that we're not going to directly\n> plan anything using such an index, but not populating fields like\n> sortopfamily seems like it's at least leaving stuff on the table,\n> and at worst making faulty decisions.\n\nI fixed the other issues discussed in this thread, but along the\nway I grew even more concerned about 3c569049b, because I discovered\nthat it's changed plans in more ways than what its commit message\nsuggests. For example, given the setup\n\nCREATE TABLE pa_target (tid integer PRIMARY KEY) PARTITION BY LIST (tid);\nCREATE TABLE pa_targetp PARTITION OF pa_target DEFAULT;\nCREATE TABLE pa_source (sid integer);\n\nthen I get this as of commit 3c569049b7^:\n\n# explain select * from pa_source s left join pa_target t on s.sid = t.tid;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.15..544.88 rows=32512 width=8)\n -> Seq Scan on pa_source s (cost=0.00..35.50 rows=2550 width=4)\n -> Index Only Scan using pa_targetp_pkey on pa_targetp t (cost=0.15..0.19 rows=1 width=4)\n Index Cond: (tid = s.sid)\n(4 rows)\n\nand this as of 3c569049b7 and later:\n\n# explain select * from pa_source s left join pa_target t on s.sid = t.tid;\n QUERY PLAN \n----------------------------------------------------------------------------\n Hash Left Join (cost=67.38..109.58 rows=2550 width=8)\n Hash Cond: (s.sid = t.tid)\n -> Seq Scan on pa_source s (cost=0.00..35.50 rows=2550 width=4)\n -> Hash (cost=35.50..35.50 rows=2550 width=4)\n -> Seq Scan on pa_targetp t (cost=0.00..35.50 rows=2550 width=4)\n(5 rows)\n\nNow, I'm not unhappy about that change: it's clearly a win that we now\nrealize we'll get at most one matching t row for each s row. 
What\nI'm unhappy about is that this means a partially-populated IndexOptInfo\nis being used for rowcount estimation and perhaps other things.\nThat seems like sheer folly. Even if it manages to not dump core\nfrom trying to access a missing field, there's a significant risk of\nwrong answers, now or in the future. Why was it done like that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Feb 2023 15:48:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Killing off removed rels properly"
}
] |
[
{
"msg_contents": "Hi,\n\nThis is a reply to:\nhttps://www.postgresql.org/message-id/CAA4eK1%2BDB66cYRRVyGcaMm7%2BtQ_u%3Dq%3D%2BHWGjpu9X0pqMFWbsZQ%40mail.gmail.com\nsplit off, so patches to address some of my concerns don't confuse cfbot.\n\n\nOn 2023-02-09 11:21:41 +0530, Amit Kapila wrote:\n> On Thu, Feb 9, 2023 at 1:33 AM Andres Freund <andres@anarazel.de> wrote:\n\n> > Attached is a current, quite rough, prototype. It addresses some of the points\n> > raised, but far from all. There's also several XXXs/FIXMEs in it. I changed\n> > the file-ending to .txt to avoid hijacking the CF entry.\n> >\n> \n> I have started a separate thread to avoid such confusion. I hope that\n> is fine with you.\n\nIn abstract, yes - unfortunately just changing the subject isn't going to\nsuffice, I'm afraid. The In-Reply-To header was still referencing the old\nthread. The mail archive did see the threads as one, and I think that's what\ncfbot uses as the source.\n\n\nOn 2023-02-09 11:21:41 +0530, Amit Kapila wrote:\n> On Thu, Feb 9, 2023 at 1:33 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hacking on a rough prototype how I think this should rather look, I had a few\n> > questions / remarks:\n> >\n> > - We probably need to call UpdateProgress from a bunch of places in decode.c\n> > as well? Indicating that we're lagging by a lot, just because all\n> > transactions were in another database seems decidedly suboptimal.\n> >\n> \n> We can do that but I think in all those cases we will reach quickly\n> enough back to walsender logic (WalSndLoop - that will send keepalive\n> if required) that we don't need to worry. After processing each\n> record, the logic will return back to the main loop that will send\n> keepalive if required.\n\nFor keepalive processing yes, for syncrep and accurate lag tracking, I don't\nthink that suffices? 
We could do that in WalSndLoop() instead I guess, but\nwe'd have more information about when that's useful in decode.c.\n\n\n> Also, while reading WAL if we need to block, it will call WalSndWaitForWal()\n> which will send keepalive if required.\n\nThe fast-path prevents WalSndWaitForWal() from doing that in a lot of cases.\n\n\t/*\n\t * Fast path to avoid acquiring the spinlock in case we already know we\n\t * have enough WAL available. This is particularly interesting if we're\n\t * far behind.\n\t */\n\tif (RecentFlushPtr != InvalidXLogRecPtr &&\n\t\tloc <= RecentFlushPtr)\n\t\treturn RecentFlushPtr;\n\n\n> The patch calls update_progress in change_cb_wrapper and other\n> wrappers which will miss the case of DDLs that generates a lot of data\n> that is not processed by the plugin. I think for that we either need\n> to call update_progress from reorderbuffer.c similar to what the patch\n> has removed or we need some other way to address it. Do you have any\n> better idea?\n\nI don't mind calling something like update_progress() in the specific cases\nthat's needed, but I think those are just the\n if (!RelationIsLogicallyLogged(relation))\n if (relation->rd_rel->relrewrite && !rb->output_rewrites))\n\nTo me it makes a lot more sense to call update_progress() for those, rather\nthan generally.\n\n\nI think, independent of the update_progress calls, it'd be worth investing a\nbit of time into optimizing those cases, so that we don't put the changes into\nthe reorderbuffer in the first place. I think we find space for two flag bits\nto identify the cases in the WAL, rather than needing to access the catalog to\nfigure it out. If we don't find space, we could add an annotation the WAL\nrecord (making it bigger) for the two cases, because they're not the path most\nimportant to optimize.\n\n\n\n> > - Why should lag tracking only be updated at commit like points? 
That seems\n> > like it adds odd discontinuinities?\n> >\n> \n> We have previously experimented to call it from non-commit locations\n> but that turned out to give inaccurate information about Lag. See\n> email [1].\n\nThat seems like an issue with WALSND_LOGICAL_LAG_TRACK_INTERVAL_MS, not with\nreporting something more frequently. ISTM that\nWALSND_LOGICAL_LAG_TRACK_INTERVAL_MS just isn't a good proxy for when to\nupdate lag reporting for records that don't strictly need it. I think that\ndecision should be made based on the LSN, and be deterministic.\n\n\n> > - Aren't the wal_sender_timeout / 2 checks in WalSndUpdateProgress(),\n> > WalSndWriteData() missing wal_sender_timeout <= 0 checks?\n> >\n> \n> It seems we are checking that via\n> ProcessPendingWrites()->WalSndKeepaliveIfNecessary(). Do you think we\n> need to check it before as well?\n\nEither we don't need the precheck at all, or we should do it reliably. Right\nnow we'll have a higher overhead / some behavioural changes, if\nwal_sender_timeout is disabled. That doesn't make sense.\n\n\n> > - I don't really understand why f95d53edged55 added !end_xact to the if\n> > condition for ProcessPendingWrites(). Is the theory that we'll end up in an\n> > outer loop soon?\n> >\n> \n> Yes. For non-empty xacts, we will anyway send a commit message. For\n> empty (skipped) xacts, we will send for synchronous replication case\n> to avoid any delay.\n\nThat seems way too dependent on the behaviour of a specific output plugin,\nthere's plenty use cases where you'd not need a separate message emitted at\ncommit time. With what I proposed we would know whether we just wrote\nsomething, or not.\n\n\n> > > > I don't think the syncrep logic in WalSndUpdateProgress really works as-is -\n> > > > consider what happens if e.g. the origin filter filters out entire\n> > > > transactions. We'll afaics never get to WalSndUpdateProgress(). 
In some cases\n> > > > we'll be lucky because we'll return quickly to XLogSendLogical(), but not\n> > > > reliably.\n> > >\n> \n> Which case are you worried about? As mentioned in one of the previous\n> points I thought the timeout/keepalive handling in the callers should\n> be enough.\n\nWell, you added syncrep specific logic to WalSndUpdateProgress(). The same\nlogic isn't present in the higher level loops. If we do need that logic, we\nalso need to trigger it if the origin filter filters out the entire\ntransaction. If we don't need it, then we shouldn't have it in\nWalSndUpdateProgress() either.\n\n\n> How about renaming ProcessPendingWrites to WaitToSendPendingWrites or\n> WalSndWaitToSendPendingWrites?\n\nI don't like those much:\n\nWe're not really waiting for the data to be sent or such, we just want to give\nit to the kernel to be sent out. Contrast that to WalSndWaitForWal, where we\nactually are waiting for something to complete.\n\nI don't think 'write' is a great description either, although our existing\nterminology is somewhat muddled. We're waiting calling pq_flush() until\n!pq_is_send_pending().\n\nWalSndSendPending() or WalSndFlushPending()?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Feb 2023 13:04:23 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "Hi,\n\nReplying on the new thread. Original message at\nhttps://www.postgresql.org/message-id/CAA4eK1%2BH2m95HhzfpRkwv2-GtFwtbcVp7837X49%2Bvs0RXX3dBA%40mail.gmail.com\n\n\nOn 2023-02-09 15:54:19 +0530, Amit Kapila wrote:\n> One thing to note about the changes we are discussing here is that\n> some of the plugins like wal2json already call\n> OutputPluginUpdateProgress in their commit callback. They may need to\n> update it accordingly.\n\nIt was a fundamental mistake to add OutputPluginUpdateProgress(). I don't like\ncausing unnecessary breakage, but this seems necessary.\n\n\n> One difference I see with the patch is that I think we will end up\n> sending keepalive for empty prepared transactions even though we don't\n> skip sending begin/prepare messages for those.\n\nWith the proposed approach we reliably know whether a callback wrote\nsomething, so we can tune the behaviour here fairly easily.\n\nLikely WalSndUpdateProgress() should not do anything if\n did_write && !finished_xact.\n\n\n> The reason why we don't skip sending prepare for empty 2PC xacts is that if\n> the WALSender restarts after the PREPARE of a transaction and before the\n> COMMIT PREPARED of the same transaction then we won't be able to figure out\n> if we have skipped sending BEGIN/PREPARE of a transaction.\n\nIt's probably not a good idea to skip sending 2PC state changes anyway, at\nleast when used for replication, rather than CDC type use cases.\n\nBut I again think that that's not something the core system can assume.\n\nI'm sad that we went so far down a pretty obviously bad rabbit hole. 
Adding\nincrementally more of the progress calls to pgoutput, and knowing that\nwal2json also added some, should have run some pretty large alarm bells.\n\n\n> To skip sending prepare for empty xacts, we previously thought of some ideas\n> like (a) At commit-prepare time have a check on the subscriber-side to know\n> whether there is a corresponding prepare for it before actually doing\n> commit-prepare but that sounded costly. (b) somehow persist the information\n> whether the PREPARE for a xact is already sent and then use that information\n> for commit prepared but again that also didn't sound like a good idea.\n\nI don't think it's worth optimizing this. However, the explanation for why\nwe're not skipping empty prepared xacts needs to be added to\npgoutput_prepare_txn() etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Feb 2023 13:34:29 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Sat, Feb 11, 2023 at 3:04 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> > One difference I see with the patch is that I think we will end up\n> > sending keepalive for empty prepared transactions even though we don't\n> > skip sending begin/prepare messages for those.\n>\n> With the proposed approach we reliably know whether a callback wrote\n> something, so we can tune the behaviour here fairly easily.\n>\n\nI would like to clarify a few things about the proposed approach. In\ncommit_cb_wrapper()/prepare_cb_wrapper(), the patch first did\nctx->did_write = false;, then call the commit/prepare callback (which\nwill call pgoutput_commit_txn()/pgoutput_prepare_txn()) and then call\nupdate_progress() which will make decisions based on ctx->did_write\nflag. Now, for this to work pgoutput_commit_txn/pgoutput_prepare_txn\nshould know that the transaction has performed some writes before that\ncall which is currently working because pgoutput is tracking the same\nvia sent_begin_txn. Is the intention here that we still track whether\nBEGIN () has been sent via pgoutput?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 13 Feb 2023 08:22:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Sat, Feb 11, 2023 at 2:34 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-02-09 11:21:41 +0530, Amit Kapila wrote:\n> > On Thu, Feb 9, 2023 at 1:33 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hacking on a rough prototype how I think this should rather look, I had a few\n> > > questions / remarks:\n> > >\n> > > - We probably need to call UpdateProgress from a bunch of places in decode.c\n> > > as well? Indicating that we're lagging by a lot, just because all\n> > > transactions were in another database seems decidedly suboptimal.\n> > >\n> >\n> > We can do that but I think in all those cases we will reach quickly\n> > enough back to walsender logic (WalSndLoop - that will send keepalive\n> > if required) that we don't need to worry. After processing each\n> > record, the logic will return back to the main loop that will send\n> > keepalive if required.\n>\n> For keepalive processing yes, for syncrep and accurate lag tracking, I don't\n> think that suffices? We could do that in WalSndLoop() instead I guess, but\n> we'd have more information about when that's useful in decode.c.\n>\n\nYeah, I think one possibility to address that is to call\nupdate_progress() in DecodeCommit() and friends when we need to skip\nthe xact. We decide that in DecodeTXNNeedSkip. In the checks in that\nfunction, I am not sure whether we need to call it for the case where\nwe skip the xact because we decide that it was previously decoded.\n\n>\n> > The patch calls update_progress in change_cb_wrapper and other\n> > wrappers which will miss the case of DDLs that generates a lot of data\n> > that is not processed by the plugin. I think for that we either need\n> > to call update_progress from reorderbuffer.c similar to what the patch\n> > has removed or we need some other way to address it. 
Do you have any\n> > better idea?\n>\n> I don't mind calling something like update_progress() in the specific cases\n> that's needed, but I think those are just the\n> if (!RelationIsLogicallyLogged(relation))\n> if (relation->rd_rel->relrewrite && !rb->output_rewrites))\n>\n> To me it makes a lot more sense to call update_progress() for those, rather\n> than generally.\n>\n\nWon't it be better to call it wherever we don't invoke any wrapper\nfunction like for cases REORDER_BUFFER_CHANGE_INVALIDATION, sequence\nchanges, etc.? I was thinking that wherever we don't call the wrapper\nfunction which means we don't have a chance to invoke\nupdate_progress(), the timeout can happen if there are a lot of such\nmessages.\n\n>\n> I think, independent of the update_progress calls, it'd be worth investing a\n> bit of time into optimizing those cases, so that we don't put the changes into\n> the reorderbuffer in the first place. I think we find space for two flag bits\n> to identify the cases in the WAL, rather than needing to access the catalog to\n> figure it out. If we don't find space, we could add an annotation the WAL\n> record (making it bigger) for the two cases, because they're not the path most\n> important to optimize.\n>\n>\n>\n> > > - Why should lag tracking only be updated at commit like points? That seems\n> > > like it adds odd discontinuinities?\n> > >\n> >\n> > We have previously experimented to call it from non-commit locations\n> > but that turned out to give inaccurate information about Lag. See\n> > email [1].\n>\n> That seems like an issue with WALSND_LOGICAL_LAG_TRACK_INTERVAL_MS, not with\n> reporting something more frequently. ISTM that\n> WALSND_LOGICAL_LAG_TRACK_INTERVAL_MS just isn't a good proxy for when to\n> update lag reporting for records that don't strictly need it. 
I think that\n> decision should be made based on the LSN, and be deterministic.\n>\n>\n> > > - Aren't the wal_sender_timeout / 2 checks in WalSndUpdateProgress(),\n> > > WalSndWriteData() missing wal_sender_timeout <= 0 checks?\n> > >\n> >\n> > It seems we are checking that via\n> > ProcessPendingWrites()->WalSndKeepaliveIfNecessary(). Do you think we\n> > need to check it before as well?\n>\n> Either we don't need the precheck at all, or we should do it reliably. Right\n> now we'll have a higher overhead / some behavioural changes, if\n> wal_sender_timeout is disabled. That doesn't make sense.\n>\n\nFair enough, we can probably do it earlier.\n\n>\n> > How about renaming ProcessPendingWrites to WaitToSendPendingWrites or\n> > WalSndWaitToSendPendingWrites?\n>\n> I don't like those much:\n>\n> We're not really waiting for the data to be sent or such, we just want to give\n> it to the kernel to be sent out. Contrast that to WalSndWaitForWal, where we\n> actually are waiting for something to complete.\n>\n> I don't think 'write' is a great description either, although our existing\n> terminology is somewhat muddled. We're waiting calling pq_flush() until\n> !pq_is_send_pending().\n>\n> WalSndSendPending() or WalSndFlushPending()?\n>\n\nEither of those sounds fine.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 13 Feb 2023 14:06:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-13 08:22:34 +0530, Amit Kapila wrote:\n> On Sat, Feb 11, 2023 at 3:04 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > > One difference I see with the patch is that I think we will end up\n> > > sending keepalive for empty prepared transactions even though we don't\n> > > skip sending begin/prepare messages for those.\n> >\n> > With the proposed approach we reliably know whether a callback wrote\n> > something, so we can tune the behaviour here fairly easily.\n> >\n> \n> I would like to clarify a few things about the proposed approach. In\n> commit_cb_wrapper()/prepare_cb_wrapper(), the patch first did\n> ctx->did_write = false;, then call the commit/prepare callback (which\n> will call pgoutput_commit_txn()/pgoutput_prepare_txn()) and then call\n> update_progress() which will make decisions based on ctx->did_write\n> flag. Now, for this to work pgoutput_commit_txn/pgoutput_prepare_txn\n> should know that the transaction has performed some writes before that\n> call which is currently working because pgoutput is tracking the same\n> via sent_begin_txn.\n\nI don't really see these as being related. What pgoutput does internally to\noptimize for some usecases shouldn't matter to the larger infrastructure.\n\n\n> Is the intention here that we still track whether BEGIN () has been sent via\n> pgoutput?\n\nYes. If somebody later wants to propose tracking this alongside a txn and\npassing that to the output plugin callbacks, we can do that. But that's\nindependent of fixing the broken architecture of the progress infrastructure.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 10:00:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-13 14:06:57 +0530, Amit Kapila wrote:\n> > > The patch calls update_progress in change_cb_wrapper and other\n> > > wrappers which will miss the case of DDLs that generates a lot of data\n> > > that is not processed by the plugin. I think for that we either need\n> > > to call update_progress from reorderbuffer.c similar to what the patch\n> > > has removed or we need some other way to address it. Do you have any\n> > > better idea?\n> >\n> > I don't mind calling something like update_progress() in the specific cases\n> > that's needed, but I think those are just the\n> > if (!RelationIsLogicallyLogged(relation))\n> > if (relation->rd_rel->relrewrite && !rb->output_rewrites))\n> >\n> > To me it makes a lot more sense to call update_progress() for those, rather\n> > than generally.\n> >\n> \n> Won't it be better to call it wherever we don't invoke any wrapper\n> function like for cases REORDER_BUFFER_CHANGE_INVALIDATION, sequence\n> changes, etc.? I was thinking that wherever we don't call the wrapper\n> function which means we don't have a chance to invoke\n> update_progress(), the timeout can happen if there are a lot of such\n> messages.\n\nISTM that the likelihood of causing harm due to increased overhead is higher\nthan the gain.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 10:03:02 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Thur, Feb 14, 2023 at 2:03 AM Andres Freund <andres@anarazel.de> wrote:\r\n> On 2023-02-13 14:06:57 +0530, Amit Kapila wrote:\r\n> > > > The patch calls update_progress in change_cb_wrapper and other\r\n> > > > wrappers which will miss the case of DDLs that generates a lot of data\r\n> > > > that is not processed by the plugin. I think for that we either need\r\n> > > > to call update_progress from reorderbuffer.c similar to what the patch\r\n> > > > has removed or we need some other way to address it. Do you have any\r\n> > > > better idea?\r\n> > >\r\n> > > I don't mind calling something like update_progress() in the specific cases\r\n> > > that's needed, but I think those are just the\r\n> > > if (!RelationIsLogicallyLogged(relation))\r\n> > > if (relation->rd_rel->relrewrite && !rb->output_rewrites))\r\n> > >\r\n> > > To me it makes a lot more sense to call update_progress() for those, rather\r\n> > > than generally.\r\n> > >\r\n> >\r\n> > Won't it be better to call it wherever we don't invoke any wrapper\r\n> > function like for cases REORDER_BUFFER_CHANGE_INVALIDATION, sequence\r\n> > changes, etc.? I was thinking that wherever we don't call the wrapper\r\n> > function which means we don't have a chance to invoke\r\n> > update_progress(), the timeout can happen if there are a lot of such\r\n> > messages.\r\n> \r\n> ISTM that the likelihood of causing harm due to increased overhead is higher\r\n> than the gain.\r\n\r\nI would like to do something for this thread. So, I am planning to update the\r\npatch as per discussion in the email chain unless someone is already working on\r\nit.\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Sun, 19 Feb 2023 13:06:02 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Sun, Feb 19, 2023 at 21:06 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> On Thur, Feb 14, 2023 at 2:03 AM Andres Freund <andres@anarazel.de> wrote:\r\n> > On 2023-02-13 14:06:57 +0530, Amit Kapila wrote:\r\n> > > > > The patch calls update_progress in change_cb_wrapper and other\r\n> > > > > wrappers which will miss the case of DDLs that generates a lot of data\r\n> > > > > that is not processed by the plugin. I think for that we either need\r\n> > > > > to call update_progress from reorderbuffer.c similar to what the patch\r\n> > > > > has removed or we need some other way to address it. Do you have any\r\n> > > > > better idea?\r\n> > > >\r\n> > > > I don't mind calling something like update_progress() in the specific cases\r\n> > > > that's needed, but I think those are just the\r\n> > > > if (!RelationIsLogicallyLogged(relation))\r\n> > > > if (relation->rd_rel->relrewrite && !rb->output_rewrites))\r\n> > > >\r\n> > > > To me it makes a lot more sense to call update_progress() for those, rather\r\n> > > > than generally.\r\n> > > >\r\n> > >\r\n> > > Won't it be better to call it wherever we don't invoke any wrapper\r\n> > > function like for cases REORDER_BUFFER_CHANGE_INVALIDATION, sequence\r\n> > > changes, etc.? I was thinking that wherever we don't call the wrapper\r\n> > > function which means we don't have a chance to invoke\r\n> > > update_progress(), the timeout can happen if there are a lot of such\r\n> > > messages.\r\n> >\r\n> > ISTM that the likelihood of causing harm due to increased overhead is higher\r\n> > than the gain.\r\n> \r\n> I would like to do something for this thread. So, I am planning to update the\r\n> patch as per discussion in the email chain unless someone is already working on\r\n> it.\r\n\r\nThanks to Andres and Amit for the discussion.\r\n\r\nBased on the discussion and Andres' WIP(in [1]), I made the following\r\nmodifications:\r\n1. Some function renaming stuffs.\r\n2. 
Added the threshold-related logic in the function\r\nupdate_progress_and_keepalive.\r\n3. Added the timeout-related processing of temporary data and\r\nunlogged/foreign/system tables in the function ReorderBufferProcessTXN.\r\n4. Improved error messages in the function OutputPluginPrepareWrite.\r\n5. Invoked function update_progress_and_keepalive to fix sync-related problems\r\ncaused by filters such as origin in functions DecodeCommit(), DecodePrepare()\r\nand ReorderBufferAbort();\r\n6. Removed the invocation of function update_progress_and_keepalive in the\r\nfunction begin_prepare_cb_wrapper().\r\n7. Invoked the function update_progress_and_keepalive() in the function\r\nstream_truncate_cb_wrapper(), just like we do in the function\r\ntruncate_cb_wrapper().\r\n8. Removed the check of SyncRepRequested() in the syncrep logic in the function\r\nWalSndUpdateProgressAndKeepAlive();\r\n9. Added the check for wal_sender_timeout before using it in functions\r\nWalSndUpdateProgressAndKeepAlive() and WalSndWriteData();\r\n\r\nAttach the new patch.\r\n\r\n[1] - https://www.postgresql.org/message-id/20230208200235.esfoggsmuvf4pugt%40awork3.anarazel.de\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Wed, 22 Feb 2023 12:12:19 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "Dear Wang,\n\nThank you for making the patch. IIUC your patch basically can achieve that output plugin\ndoes not have to call UpdateProgress.\n\nI think the basic approach is as follows, is it right?\n\n1. In *_cb_wrapper, set ctx->did_write to false\n2. In OutputPluginWrite() set ctx->did_write to true.\n This means that changes are really written, not skipped.\n3. At the end of the transaction, call update_progress_and_keepalive().\n Even if we are not at the end, check skipped count and call the function if needed.\n The counter will be reset if ctx->did_write is true or we exceed the threshold.\n\nFollowings are my comments. I apologize if I missed some previous discussions.\n\n01. logical.c\n\n```\n+static void update_progress_and_keepalive(struct LogicalDecodingContext *ctx,\n+ bool finished_xact);\n+\n+static bool is_skip_threshold_change(struct LogicalDecodingContext *ctx);\n```\n\n\"struct\" may be not needed.\n\n02. UpdateDecodingProgressAndKeepalive\n\nI think the name should be UpdateDecodingProgressAndSendKeepalive(), keepalive is not verb.\n(But it's ok to ignore if you prefer the shorter name)\nSame thing can be said for the name of datatype and callback.\n\n03. UpdateDecodingProgressAndKeepalive\n\n```\n+ /* set output state */\n+ ctx->accept_writes = false;\n+ ctx->write_xid = xid;\n+ ctx->write_location = lsn;\n+ ctx->did_write = false;\n```\n\nDo we have to modify accept_writes, write_xid, and write_location here?\nThese value is not used in WalSndUpdateProgressAndKeepalive().\n\n04. stream_abort_cb_wrapper\n\n```\n+ update_progress_and_keepalive(ctx, true)\n```\n\nI'm not sure, but is it correct that call update_progress_and_keepalive() with\nfinished_xact = true? Isn't there a possibility that streamed sub-transaciton is aborted?\n\n\n05. is_skip_threshold_change\n\nAt the end of the transaction, update_progress_and_keepalive() is called directly.\nDon't we have to reset change_count here?\n\n06. 
ReorderBufferAbort\n\nAssuming that the top transaction is aborted. At that time update_progress_and_keepalive()\nis called in stream_abort_cb_wrapper(), an then WalSndUpdateProgressAndKeepalive()\nis called at the end of ReorderBufferAbort(). Do we have to do in both?\nI think stream_abort_cb_wrapper() may be not needed.\n\n07. WalSndUpdateProgress\n\nYou renamed ProcessPendingWrites() to WalSndSendPending(), but it may be still strange\nbecause it will be called even if there are no pending writes.\n\nIsn't it sufficient to call ProcessRepliesIfAny(), WalSndCheckTimeOut() and\n(at least) WalSndKeepaliveIfNecessary()in the case? Or better name may be needed.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Thu, 23 Feb 2023 10:40:34 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Thur, Feb 23, 2023 at 18:41 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Dear Wang,\r\n> \r\n> Thank you for making the patch. IIUC your patch basically can achieve that\r\n> output plugin\r\n> does not have to call UpdateProgress.\r\n\r\nThanks for your review and comments.\r\n\r\n> I think the basic approach is as follows, is it right?\r\n> \r\n> 1. In *_cb_wrapper, set ctx->did_write to false\r\n> 2. In OutputPluginWrite() set ctx->did_write to true.\r\n> This means that changes are really written, not skipped.\r\n> 3. At the end of the transaction, call update_progress_and_keepalive().\r\n> Even if we are not at the end, check skipped count and call the function if\r\n> needed.\r\n> The counter will be reset if ctx->did_write is true or we exceed the threshold.\r\n\r\nYes, you are right.\r\nFor the reset of the counter, please also refer to the reply to #05.\r\n\r\n> Followings are my comments. I apologize if I missed some previous discussions.\r\n> \r\n> 01. logical.c\r\n> \r\n> ```\r\n> +static void update_progress_and_keepalive(struct LogicalDecodingContext *ctx,\r\n> + bool finished_xact);\r\n> +\r\n> +static bool is_skip_threshold_change(struct LogicalDecodingContext *ctx);\r\n> ```\r\n> \r\n> \"struct\" may be not needed.\r\n\r\nRemoved.\r\n\r\n> 02. UpdateDecodingProgressAndKeepalive\r\n> \r\n> I think the name should be UpdateDecodingProgressAndSendKeepalive(),\r\n> keepalive is not verb.\r\n> (But it's ok to ignore if you prefer the shorter name)\r\n> Same thing can be said for the name of datatype and callback.\r\n\r\nYes, I prefer the shorter one. Otherwise, I think some names would be longer.\r\n\r\n> 03. 
UpdateDecodingProgressAndKeepalive\r\n> \r\n> ```\r\n> + /* set output state */\r\n> + ctx->accept_writes = false;\r\n> + ctx->write_xid = xid;\r\n> + ctx->write_location = lsn;\r\n> + ctx->did_write = false;\r\n> ```\r\n> \r\n> Do we have to modify accept_writes, write_xid, and write_location here?\r\n> These value is not used in WalSndUpdateProgressAndKeepalive().\r\n\r\nI think it might be better to set these three flags.\r\nSince LogicalOutputPluginWriterUpdateProgressAndKeepalive is an open callback, I\r\nthink setting write_xid and write_location is not just for the function\r\nWalSndUpdateProgressAndKeepalive. And I think setting accept_writes could\r\nprevent some wrong usage.\r\n\r\n> 04. stream_abort_cb_wrapper\r\n> \r\n> ```\r\n> + update_progress_and_keepalive(ctx, true)\r\n> ```\r\n> \r\n> I'm not sure, but is it correct that call update_progress_and_keepalive() with\r\n> finished_xact = true? Isn't there a possibility that streamed sub-transaciton is\r\n> aborted?\r\n\r\nFixed.\r\n\r\n> 05. is_skip_threshold_change\r\n> \r\n> At the end of the transaction, update_progress_and_keepalive() is called directly.\r\n> Don't we have to reset change_count here?\r\n\r\nI think this might complicate the function is_skip_threshold_change, so I didn't\r\nreset the counter in this case.\r\nI think the worst case of not resetting the counter is to delay sending the\r\nkeepalive message for the next transaction. But since the threshold we're using\r\nis safe enough, it seems fine to me not to reset the counter in this case.\r\nAdded these related comments in the function is_skip_threshold_change.\r\n\r\n> 06. ReorderBufferAbort\r\n> \r\n> Assuming that the top transaction is aborted. At that time\r\n> update_progress_and_keepalive()\r\n> is called in stream_abort_cb_wrapper(), an then\r\n> WalSndUpdateProgressAndKeepalive()\r\n> is called at the end of ReorderBufferAbort(). 
Do we have to do in both?\r\n> I think stream_abort_cb_wrapper() may be not needed.\r\n\r\nYes, I think we only need one call for this case.\r\nTo make the behavior in *_cb_wrapper look consistent, I avoided the second call\r\nfor this case in the function ReorderBufferAbort.\r\n\r\n> 07. WalSndUpdateProgress\r\n> \r\n> You renamed ProcessPendingWrites() to WalSndSendPending(), but it may be\r\n> still strange\r\n> because it will be called even if there are no pending writes.\r\n> \r\n> Isn't it sufficient to call ProcessRepliesIfAny(), WalSndCheckTimeOut() and\r\n> (at least) WalSndKeepaliveIfNecessary()in the case? Or better name may be\r\n> needed.\r\n\r\nI think after sending the keepalive message (in WalSndKeepaliveIfNecessary), we\r\nneed to make sure the pending data is flushed through the loop.\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Mon, 27 Feb 2023 09:30:14 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "Here are some comments for the v2-0001 patch.\n\n(I haven't looked at the v3 that was posted overnight; maybe some of\nmy comments have already been addressed.)\n\n======\nGeneral\n\n1. (Info from the commit message)\nSince we can know whether the change is an end of transaction change in the\ncommon code, we removed the LogicalDecodingContext->end_xact introduced in\ncommit f95d53e.\n\n~\n\nTBH, it was not clear to me that this change was an improvement. IIUC,\nit removes the \"unnecessary\" member, but only does that by replacing\nit everywhere with a boolean parameter passed to\nupdate_progress_and_keepalive(). So the end result seems no less code,\nbut it is less readable code now because you need to know what the\ntrue/false parameter means. I wonder if it would have been better just\nto leave this how it was.\n\n======\nsrc/backend/replication/logical/logical.c\n\n2. General - blank lines\n\nThere are multiple places in this file where the patch removed some\nstatements but left blank lines. The result is 2 blank lines remaining\ninstead of one.\n\nsee change_cb_wrapper.\nsee truncate_cb_wrapper.\nsee stream_start_cb_wrapper.\nsee stream_stop_cb_wrapper.\nsee stream_change_cb_wrapper.\n\ne.g.\n\nBEFORE\nctx->write_location = last_lsn;\n\nctx->end_xact = false;\n\n/* in streaming mode, stream_stop_cb is required */\n\nAFTER (now there are 2 blank lines)\nctx->write_location = last_lsn;\n\n\n/* in streaming mode, stream_stop_cb is required */\n\n~~~\n\n3. 
General - calls to is_skip_threshold_change()\n\n+ if (is_skip_threshold_change(ctx))\n+ update_progress_and_keepalive(ctx, false);\n\nThere are multiple calls like this, which are guarding the\nupdate_progress_and_keepalive() with the is_skip_threshold_change()\n- See truncate_cb_wrapper\n- See message_cb_wrapper\n- See stream_change_cb_wrapper\n- See stream_message_cb_wrapper\n- See stream_truncate_cb_wrapper\n- See UpdateDecodingProgressAndKeepalive\n\nIIUC, then I was thinking all those conditions maybe can be pushed\ndown *into* the wrapper, thereby making every calling code simpler.\n\ne.g. make the wrapper function code look similar to the current\nUpdateDecodingProgressAndKeepalive:\n\nBEFORE (update_progress_and_keepalive)\n{\nif (!ctx->update_progress_and_keepalive)\nreturn;\n\nctx->update_progress_and_keepalive(ctx, ctx->write_location,\n ctx->write_xid, ctx->did_write,\n finished_xact);\n}\nAFTER\n{\nif (!ctx->update_progress_and_keepalive)\nreturn;\n\nif (finished_xact || is_skip_threshold_change(ctx))\n{\nctx->update_progress_and_keepalive(ctx, ctx->write_location,\n ctx->write_xid, ctx->did_write,\n finished_xact);\n}\n}\n\n\n~~~\n\n4. StartupDecodingContext\n\n@@ -334,7 +329,7 @@ CreateInitDecodingContext(const char *plugin,\n XLogReaderRoutine *xl_routine,\n LogicalOutputPluginWriterPrepareWrite prepare_write,\n LogicalOutputPluginWriterWrite do_write,\n- LogicalOutputPluginWriterUpdateProgress update_progress)\n+ LogicalOutputPluginWriterUpdateProgressAndKeepalive\nupdate_progress_and_keepalive)\n\nTBH, I find it confusing that the new parameter name\n('update_progress_and_keepalive') is identical to the static function\nname in the same C source file. It introduces a kind of unnecessary\nshadowing and makes it harder to search/read the code.\n\nI suggest just calling this param something unique and local to the\nfunction like 'do_update_keepalive'.\n\n~~~\n\n5. 
@@ -334,7 +329,7 @@ CreateInitDecodingContext(const char *plugin,\n XLogReaderRoutine *xl_routine,\n LogicalOutputPluginWriterPrepareWrite prepare_write,\n LogicalOutputPluginWriterWrite do_write,\n- LogicalOutputPluginWriterUpdateProgress update_progress)\n+ LogicalOutputPluginWriterUpdateProgressAndKeepalive\nupdate_progress_and_keepalive)\n\n(Ditto previous comment #4)\n\nTBH, I find it confusing that the new parameter name\n('update_progress_and_keepalive') is identical to the static function\nname in the same C source file. It introduces a kind of unnecessary\nshadowing and makes it harder to search/read the code.\n\nI suggest just calling this param something unique and local to the\nfunction like 'do_update_keepalive'.\n\n~~~\n\n6. CreateDecodingContext\n\n@@ -493,7 +488,7 @@ CreateDecodingContext(XLogRecPtr start_lsn,\n XLogReaderRoutine *xl_routine,\n LogicalOutputPluginWriterPrepareWrite prepare_write,\n LogicalOutputPluginWriterWrite do_write,\n- LogicalOutputPluginWriterUpdateProgress update_progress)\n+ LogicalOutputPluginWriterUpdateProgressAndKeepalive\nupdate_progress_and_keepalive)\n\n(Ditto previous comment #4)\n\nTBH, I find it confusing that the new parameter name\n('update_progress_and_keepalive') is identical to the static function\nname in the same C source file. It introduces a kind of unnecessary\nshadowing and makes it harder to search/read the code.\n\nI suggest just calling this param something unique and local to the\nfunction like 'do_update_keepalive'.\n\n~~~\n\n7. OutputPluginPrepareWrite\n\n@@ -662,7 +657,7 @@ void\n OutputPluginPrepareWrite(struct LogicalDecodingContext *ctx, bool last_write)\n {\n if (!ctx->accept_writes)\n- elog(ERROR, \"writes are only accepted in commit, begin and change callbacks\");\n+ elog(ERROR, \"writes are only accepted in callbacks in the\nOutputPluginCallbacks structure (except startup, shutdown,\nfilter_by_origin and filter_prepare callbacks)\");\n\nIt seems a confusing error message. 
Can it be worded better? Also, I\nnoticed this flag is never used except in this one place where it\nthrows an error, so would an \"Assert\" would be more appropriate here?\n\n~~~\n\n8. rollback_prepared_cb_wrapper\n\n /*\n * If the plugin support two-phase commits then rollback prepared callback\n * is mandatory\n+ *\n+ * FIXME: This should have been caught much earlier.\n */\n if (ctx->callbacks.rollback_prepared_cb == NULL)\n~\nIs this FIXME related to the current patch, or should this be an\nentirely different topic?\n\n~~~\n\n\n9. is_skip_threshold_change\n\nThe current usage for this function is like:\n\nif (is_skip_threshold_change(ctx))\n+ update_progress_and_keepalive(ctx, false);\n\n~\n\nIMO a better name for this function might be like\n'is_change_threshold_exceeded()' (or\n'is_keepalive_threshold_exceeded()' etc) because seems more readable\nto say\n\nif (is_change_threshold_exceeded())\ndo_something();\n\n~~~\n\n10. is_skip_threshold_change\n\nstatic bool\nis_skip_threshold_change(struct LogicalDecodingContext *ctx)\n{\nstatic int changes_count = 0; /* used to accumulate the number of\n* changes */\n\n/* If the change was published, reset the counter and return false */\nif (ctx->did_write)\n{\nchanges_count = 0;\nreturn false;\n}\n\n/*\n* It is possible that the data is not sent to downstream for a long time\n* either because the output plugin filtered it or there is a DDL that\n* generates a lot of data that is not processed by the plugin. So, in\n* such cases, the downstream can timeout. To avoid that we try to send a\n* keepalive message if required. 
Trying to send a keepalive message\n* after every change has some overhead, but testing showed there is no\n* noticeable overhead if we do it after every ~100 changes.\n*/\n#define CHANGES_THRESHOLD 100\nif (!ctx->did_write && ++changes_count >= CHANGES_THRESHOLD)\n{\nchanges_count = 0;\nreturn true;\n}\n\nreturn false;\n}\n\n~\n\nThat 2nd condition checking if (!ctx->did_write && ++changes_count >=\nCHANGES_THRESHOLD) does not seem right. There is no need to check the\nctx->did_write; it must be false because it was checked earlier in the\nfunction:\n\nBEFORE\nif (!ctx->did_write && ++changes_count >= CHANGES_THRESHOLD)\n\nSUGGESTION1\nAssert(!ctx->did_write);\nif (++changes_count >= CHANGES_THRESHOLD)\n\nSUGGESTION2\nif (++changes_count >= CHANGES_THRESHOLD)\n\n~~~\n\n11. update_progress_and_keepalive\n\n/*\n * Update progress tracking and send keep alive (if required).\n */\nstatic void\nupdate_progress_and_keepalive(struct LogicalDecodingContext *ctx,\n bool finished_xact)\n{\nif (!ctx->update_progress_and_keepalive)\nreturn;\n\nctx->update_progress_and_keepalive(ctx, ctx->write_location,\n ctx->write_xid, ctx->did_write,\n finished_xact);\n}\n\n~\n\nMaybe it's simpler to code this without the return.\n\ne.g.\n\nif (ctx->update_progress_and_keepalive)\n{\nctx->update_progress_and_keepalive(ctx, ctx->write_location,\n ctx->write_xid, ctx->did_write,\n finished_xact);\n}\n\n(it is just generic suggested code for example -- I made some other\nreview comments overlapping this)\n\n\n======\n.../replication/logical/reorderbuffer.c\n\n12. ReorderBufferAbort\n\n+ UpdateDecodingProgressAndKeepalive((LogicalDecodingContext *)rb->private_data,\n+ xid, lsn, !TransactionIdIsValid(txn->toplevel_xid));\n+\n\nI didn't really recognise how the\n\"!TransactionIdIsValid(txn->toplevel_xid)\" maps to the boolean\n'finished_xact' param. Can this call have an explanatory comment about\nhow it works?\n\n======\nsrc/backend/replication/walsender.c\n\n~~~\n\n13. 
WalSndUpdateProgressAndKeepalive\n\n- if (pending_writes || (!end_xact &&\n+ if (pending_writes || (!finished_xact && wal_sender_timeout > 0 &&\n now >= TimestampTzPlusMilliseconds(last_reply_timestamp,\n wal_sender_timeout / 2)))\n- ProcessPendingWrites();\n+ WalSndSendPending();\n\nIs this new function name OK to be WalSndSendPending? From this code,\nwe can see it can also be called in other scenarios even when there is\nnothing \"pending\" at all.\n\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 28 Feb 2023 12:12:03 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "Hi,\n\n\nOn Monday, February 27, 2023 6:30 PM wangw.fnst@fujitsu.com <wangw.fnst@fujitsu.com> wrote:\n> Attach the new patch.\nThanks for sharing v3. Minor review comments and question.\n\n\n(1) UpdateDecodingProgressAndKeepalive header comment\n\nThe comment should be updated to explain maybe why we reset some other flags as discussed in [1] and the functionality to update and keepalive of the function simply.\n\n(2) OutputPluginPrepareWrite\n\nProbably the changed error string is too wide.\n\n@@ -662,7 +657,7 @@ void\n OutputPluginPrepareWrite(struct LogicalDecodingContext *ctx, bool last_write)\n {\n if (!ctx->accept_writes)\n- elog(ERROR, \"writes are only accepted in commit, begin and change callbacks\");\n+ elog(ERROR, \"writes are only accepted in callbacks in the OutputPluginCallbacks structure (except startup, shutdown, filter_by_origin and filter_prepare callbacks)\");\n\nI thought you can break the error message into two string lines. Or, you can rephrase it to different expression.\n\n(3) Minor question\n\nThe patch introduced the goto statements into the cb_wrapper functions. Is the purpose to call the update_progress_and_keepalive after pop the error stack, even if the corresponding callback is missing ? I thought we can just have \"else\" clause for the check of the existence of callback, but did you choose the current goto style for readability ?\n\n(4) Name of is_skip_threshold_change\n\nI also feel the name of is_skip_threshold_change can be changed to \"exceeded_keepalive_threshold\" or something. Other candidates are proposed by Peter-san in [2].\n\n\n\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275374EBE7C8CABBE6730099EAF9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n[2] - https://www.postgresql.org/message-id/CAHut%2BPt3ZEMo-KTF%3D5KJSU%2BHdWJD19GPGGCKOmBeM47484Ychw%40mail.gmail.com\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Tue, 28 Feb 2023 03:31:06 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Tues, Feb 28, 2023 at 9:12 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some comments for the v2-0001 patch.\r\n> \r\n> (I haven't looked at the v3 that was posted overnight; maybe some of\r\n> my comments have already been addressed.)\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> General\r\n> \r\n> 1. (Info from the commit message)\r\n> Since we can know whether the change is an end of transaction change in the\r\n> common code, we removed the LogicalDecodingContext->end_xact introduced\r\n> in\r\n> commit f95d53e.\r\n> \r\n> ~\r\n> \r\n> TBH, it was not clear to me that this change was an improvement. IIUC,\r\n> it removes the \"unnecessary\" member, but only does that by replacing\r\n> it everywhere with a boolean parameter passed to\r\n> update_progress_and_keepalive(). So the end result seems no less code,\r\n> but it is less readable code now because you need to know what the\r\n> true/false parameter means. I wonder if it would have been better just\r\n> to leave this how it was.\r\n\r\nSince I think we can know the meaning of the input based on the parameter name\r\nof the function, I think both approaches are fine. But the approach in the\r\ncurrent patch can reduce a member of the structure, so I think this modification\r\nlooks good to me.\r\n\r\n> ======\r\n> src/backend/replication/logical/logical.c\r\n> \r\n> 2. General - blank lines\r\n> \r\n> There are multiple places in this file where the patch removed some\r\n> statements but left blank lines. 
The result is 2 blank lines remaining\r\n> instead of one.\r\n> \r\n> see change_cb_wrapper.\r\n> see truncate_cb_wrapper.\r\n> see stream_start_cb_wrapper.\r\n> see stream_stop_cb_wrapper.\r\n> see stream_change_cb_wrapper.\r\n> \r\n> e.g.\r\n> \r\n> BEFORE\r\n> ctx->write_location = last_lsn;\r\n> \r\n> ctx->end_xact = false;\r\n> \r\n> /* in streaming mode, stream_stop_cb is required */\r\n> \r\n> AFTER (now there are 2 blank lines)\r\n> ctx->write_location = last_lsn;\r\n> \r\n> \r\n> /* in streaming mode, stream_stop_cb is required */\r\n\r\nRemoved.\r\n\r\n> ~~~\r\n> \r\n> 3. General - calls to is_skip_threshold_change()\r\n> \r\n> + if (is_skip_threshold_change(ctx))\r\n> + update_progress_and_keepalive(ctx, false);\r\n> \r\n> There are multiple calls like this, which are guarding the\r\n> update_progress_and_keepalive() with the is_skip_threshold_change()\r\n> - See truncate_cb_wrapper\r\n> - See message_cb_wrapper\r\n> - See stream_change_cb_wrapper\r\n> - See stream_message_cb_wrapper\r\n> - See stream_truncate_cb_wrapper\r\n> - See UpdateDecodingProgressAndKeepalive\r\n> \r\n> IIUC, then I was thinking all those conditions maybe can be pushed\r\n> down *into* the wrapper, thereby making every calling code simpler.\r\n> \r\n> e.g. 
make the wrapper function code look similar to the current\r\n> UpdateDecodingProgressAndKeepalive:\r\n> \r\n> BEFORE (update_progress_and_keepalive)\r\n> {\r\n> if (!ctx->update_progress_and_keepalive)\r\n> return;\r\n> \r\n> ctx->update_progress_and_keepalive(ctx, ctx->write_location,\r\n> ctx->write_xid, ctx->did_write,\r\n> finished_xact);\r\n> }\r\n> AFTER\r\n> {\r\n> if (!ctx->update_progress_and_keepalive)\r\n> return;\r\n> \r\n> if (finished_xact || is_skip_threshold_change(ctx))\r\n> {\r\n> ctx->update_progress_and_keepalive(ctx, ctx->write_location,\r\n> ctx->write_xid, ctx->did_write,\r\n> finished_xact);\r\n> }\r\n> }\r\n\r\nSince I want to keep the function update_progress_and_keepalive simple, I didn't\r\nchange it.\r\n\r\n> ~~~\r\n> \r\n> 4. StartupDecodingContext\r\n> \r\n> @@ -334,7 +329,7 @@ CreateInitDecodingContext(const char *plugin,\r\n> XLogReaderRoutine *xl_routine,\r\n> LogicalOutputPluginWriterPrepareWrite prepare_write,\r\n> LogicalOutputPluginWriterWrite do_write,\r\n> - LogicalOutputPluginWriterUpdateProgress update_progress)\r\n> + LogicalOutputPluginWriterUpdateProgressAndKeepalive\r\n> update_progress_and_keepalive)\r\n> \r\n> TBH, I find it confusing that the new parameter name\r\n> ('update_progress_and_keepalive') is identical to the static function\r\n> name in the same C source file. It introduces a kind of unnecessary\r\n> shadowing and makes it harder to search/read the code.\r\n> \r\n> I suggest just calling this param something unique and local to the\r\n> function like 'do_update_keepalive'.\r\n> \r\n> ~~~\r\n> 5. 
@@ -334,7 +329,7 @@ CreateInitDecodingContext(const char *plugin,\r\n> XLogReaderRoutine *xl_routine,\r\n> LogicalOutputPluginWriterPrepareWrite prepare_write,\r\n> LogicalOutputPluginWriterWrite do_write,\r\n> - LogicalOutputPluginWriterUpdateProgress update_progress)\r\n> + LogicalOutputPluginWriterUpdateProgressAndKeepalive\r\n> update_progress_and_keepalive)\r\n> \r\n> (Ditto previous comment #4)\r\n> \r\n> TBH, I find it confusing that the new parameter name\r\n> ('update_progress_and_keepalive') is identical to the static function\r\n> name in the same C source file. It introduces a kind of unnecessary\r\n> shadowing and makes it harder to search/read the code.\r\n> \r\n> I suggest just calling this param something unique and local to the\r\n> function like 'do_update_keepalive'.\r\n> ~~~\r\n> \r\n> 6. CreateDecodingContext\r\n> \r\n> @@ -493,7 +488,7 @@ CreateDecodingContext(XLogRecPtr start_lsn,\r\n> XLogReaderRoutine *xl_routine,\r\n> LogicalOutputPluginWriterPrepareWrite prepare_write,\r\n> LogicalOutputPluginWriterWrite do_write,\r\n> - LogicalOutputPluginWriterUpdateProgress update_progress)\r\n> + LogicalOutputPluginWriterUpdateProgressAndKeepalive\r\n> update_progress_and_keepalive)\r\n> \r\n> (Ditto previous comment #4)\r\n> \r\n> TBH, I find it confusing that the new parameter name\r\n> ('update_progress_and_keepalive') is identical to the static function\r\n> name in the same C source file. It introduces a kind of unnecessary\r\n> shadowing and makes it harder to search/read the code.\r\n> \r\n> I suggest just calling this param something unique and local to the\r\n> function like 'do_update_keepalive'.\r\n\r\nI'm not sure if 'do_update_keepalive' is accurate. So, to distinguish this\r\nfunction from the parameter, I renamed the function to\r\n'UpdateProgressAndKeepalive'.\r\n\r\n> ~~~\r\n> \r\n> 7. 
OutputPluginPrepareWrite\r\n> \r\n> @@ -662,7 +657,7 @@ void\r\n> OutputPluginPrepareWrite(struct LogicalDecodingContext *ctx, bool last_write)\r\n> {\r\n> if (!ctx->accept_writes)\r\n> - elog(ERROR, \"writes are only accepted in commit, begin and change callbacks\");\r\n> + elog(ERROR, \"writes are only accepted in callbacks in the\r\n> OutputPluginCallbacks structure (except startup, shutdown,\r\n> filter_by_origin and filter_prepare callbacks)\");\r\n> \r\n> It seems a confusing error message. Can it be worded better?\r\n\r\nI tried to improve this message in the new patch. Do you have any suggestions to\r\nimprove it?\r\n\r\n> Also, I\r\n> noticed this flag is never used except in this one place where it\r\n> throws an error, so would an \"Assert\" would be more appropriate here?\r\n\r\nI'm not sure if we should change errors to assertions here.\r\n\r\n> ~~~\r\n> \r\n> 8. rollback_prepared_cb_wrapper\r\n> \r\n> /*\r\n> * If the plugin support two-phase commits then rollback prepared callback\r\n> * is mandatory\r\n> + *\r\n> + * FIXME: This should have been caught much earlier.\r\n> */\r\n> if (ctx->callbacks.rollback_prepared_cb == NULL)\r\n> ~\r\n> Is this FIXME related to the current patch, or should this be an\r\n> entirely different topic?\r\n\r\nI think this FIXME seems to be another topic and I will delete this FIXME later.\r\n\r\n> ~~~\r\n> \r\n> \r\n> 9. is_skip_threshold_change\r\n> \r\n> The current usage for this function is like:\r\n> \r\n> if (is_skip_threshold_change(ctx))\r\n> + update_progress_and_keepalive(ctx, false);\r\n> \r\n> ~\r\n> \r\n> IMO a better name for this function might be like\r\n> 'is_change_threshold_exceeded()' (or\r\n> 'is_keepalive_threshold_exceeded()' etc) because seems more readable\r\n> to say\r\n> \r\n> if (is_change_threshold_exceeded())\r\n> do_something();\r\n\r\nRenamed this function to is_keepalive_threshold_exceeded.\r\n\r\n> ~~~\r\n> \r\n> 10. 
is_skip_threshold_change\r\n> \r\n> static bool\r\n> is_skip_threshold_change(struct LogicalDecodingContext *ctx)\r\n> {\r\n> static int changes_count = 0; /* used to accumulate the number of\r\n> * changes */\r\n> \r\n> /* If the change was published, reset the counter and return false */\r\n> if (ctx->did_write)\r\n> {\r\n> changes_count = 0;\r\n> return false;\r\n> }\r\n> \r\n> /*\r\n> * It is possible that the data is not sent to downstream for a long time\r\n> * either because the output plugin filtered it or there is a DDL that\r\n> * generates a lot of data that is not processed by the plugin. So, in\r\n> * such cases, the downstream can timeout. To avoid that we try to send a\r\n> * keepalive message if required. Trying to send a keepalive message\r\n> * after every change has some overhead, but testing showed there is no\r\n> * noticeable overhead if we do it after every ~100 changes.\r\n> */\r\n> #define CHANGES_THRESHOLD 100\r\n> if (!ctx->did_write && ++changes_count >= CHANGES_THRESHOLD)\r\n> {\r\n> changes_count = 0;\r\n> return true;\r\n> }\r\n> \r\n> return false;\r\n> }\r\n> \r\n> ~\r\n> \r\n> That 2nd condition checking if (!ctx->did_write && ++changes_count >=\r\n> CHANGES_THRESHOLD) does not seem right. There is no need to check the\r\n> ctx->did_write; it must be false because it was checked earlier in the\r\n> function:\r\n> \r\n> BEFORE\r\n> if (!ctx->did_write && ++changes_count >= CHANGES_THRESHOLD)\r\n> \r\n> SUGGESTION1\r\n> Assert(!ctx->did_write);\r\n> if (++changes_count >= CHANGES_THRESHOLD)\r\n> \r\n> SUGGESTION2\r\n> if (++changes_count >= CHANGES_THRESHOLD)\r\n\r\nFixed.\r\nI think the second suggestion looks better to me.\r\n\r\n> ~~~\r\n> \r\n> 11. 
update_progress_and_keepalive\r\n> \r\n> /*\r\n> * Update progress tracking and send keep alive (if required).\r\n> */\r\n> static void\r\n> update_progress_and_keepalive(struct LogicalDecodingContext *ctx,\r\n> bool finished_xact)\r\n> {\r\n> if (!ctx->update_progress_and_keepalive)\r\n> return;\r\n> \r\n> ctx->update_progress_and_keepalive(ctx, ctx->write_location,\r\n> ctx->write_xid, ctx->did_write,\r\n> finished_xact);\r\n> }\r\n> \r\n> ~\r\n> \r\n> Maybe it's simpler to code this without the return.\r\n> \r\n> e.g.\r\n> \r\n> if (ctx->update_progress_and_keepalive)\r\n> {\r\n> ctx->update_progress_and_keepalive(ctx, ctx->write_location,\r\n> ctx->write_xid, ctx->did_write,\r\n> finished_xact);\r\n> }\r\n> \r\n> (it is just generic suggested code for example -- I made some other\r\n> review comments overlapping this)\r\n\r\nI think these two approaches are fine. But because I think the approach in the\r\ncurrent patch is consistent with the style of other functions, I didn't change\r\nit.\r\n\r\n> ======\r\n> .../replication/logical/reorderbuffer.c\r\n> \r\n> 12. ReorderBufferAbort\r\n> \r\n> + UpdateDecodingProgressAndKeepalive((LogicalDecodingContext *)rb-\r\n> >private_data,\r\n> + xid, lsn, !TransactionIdIsValid(txn->toplevel_xid));\r\n> +\r\n> \r\n> I didn't really recognise how the\r\n> \"!TransactionIdIsValid(txn->toplevel_xid)\" maps to the boolean\r\n> 'finished_xact' param. Can this call have an explanatory comment about\r\n> how it works?\r\n\r\nIt seems confusing to use txn->toplevel_xid to check whether it is top\r\ntransaction. Because the comment of txn->toptxn shows the meaning of value, I\r\nupdated the patch to use txn->toptxn to check this.\r\n\r\n> ======\r\n> src/backend/replication/walsender.c\r\n> ~~~\r\n> \r\n> 13. 
WalSndUpdateProgressAndKeepalive\r\n> \r\n> - if (pending_writes || (!end_xact &&\r\n> + if (pending_writes || (!finished_xact && wal_sender_timeout > 0 &&\r\n> now >= TimestampTzPlusMilliseconds(last_reply_timestamp,\r\n> wal_sender_timeout / 2)))\r\n> - ProcessPendingWrites();\r\n> + WalSndSendPending();\r\n> \r\n> Is this new function name OK to be WalSndSendPending? From this code,\r\n> we can see it can also be called in other scenarios even when there is\r\n> nothing \"pending\" at all.\r\n\r\nI think this function is used to flush pending data or send keepalive message.\r\nBut I'm not sure if we should add keepalive related string to the function\r\nname, which seems to make this function name too long.\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Wed, 1 Mar 2023 10:16:40 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Tues, Feb 28, 2023 at 11:31 AM Osumi, Takamichi/大墨 昂道 <osumi.takamichi@fujitsu.com> wrote:\r\n> Hi,\r\n> \r\n> \r\n> On Monday, February 27, 2023 6:30 PM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> > Attach the new patch.\r\n> Thanks for sharing v3. Minor review comments and question.\r\n\r\nThanks for your comments.\r\n\r\n> (1) UpdateDecodingProgressAndKeepalive header comment\r\n> \r\n> The comment should be updated to explain maybe why we reset some other\r\n> flags as discussed in [1] and the functionality to update and keepalive of the\r\n> function simply.\r\n\r\nAdded the comments atop the function UpdateDecodingProgressAndKeepalive about\r\nwhen to call this function.\r\n\r\n> (2) OutputPluginPrepareWrite\r\n> \r\n> Probably the changed error string is too wide.\r\n> \r\n> @@ -662,7 +657,7 @@ void\r\n> OutputPluginPrepareWrite(struct LogicalDecodingContext *ctx, bool last_write)\r\n> {\r\n> if (!ctx->accept_writes)\r\n> - elog(ERROR, \"writes are only accepted in commit, begin and change\r\n> callbacks\");\r\n> + elog(ERROR, \"writes are only accepted in callbacks in the\r\n> OutputPluginCallbacks structure (except startup, shutdown, filter_by_origin and\r\n> filter_prepare callbacks)\");\r\n> \r\n> I thought you can break the error message into two string lines. Or, you can\r\n> rephrase it to different expression.\r\n\r\nI tried to improve this message and broke it into two lines in the new patch.\r\n\r\n> (3) Minor question\r\n> \r\n> The patch introduced the goto statements into the cb_wrapper functions. Is the\r\n> purpose to call the update_progress_and_keepalive after pop the error stack,\r\n> even if the corresponding callback is missing ? I thought we can just have \"else\"\r\n> clause for the check of the existence of callback, but did you choose the current\r\n> goto style for readability ?\r\n\r\nI think both styles look fine to me.\r\nI haven't modified this for this version. 
I'll reconsider if anyone else has\r\nsimilar thoughts later.\r\n\r\n> (4) Name of is_skip_threshold_change\r\n> \r\n> I also feel the name of is_skip_threshold_change can be changed to\r\n> \"exceeded_keepalive_threshold\" or something. Other candidates are proposed\r\n> by Peter-san in [2].\r\n\r\nRenamed this function to is_keepalive_threshold_exceeded.\r\n\r\nPlease see the new patch in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275C6CA72222C0C23730A319EAD9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Wed, 1 Mar 2023 10:18:54 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 9:16 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Tues, Feb 28, 2023 at 9:12 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > Here are some comments for the v2-0001 patch.\n> >\n> > (I haven't looked at the v3 that was posted overnight; maybe some of\n> > my comments have already been addressed.)\n>\n> Thanks for your comments.\n>\n> > ======\n> > General\n> >\n> > 1. (Info from the commit message)\n> > Since we can know whether the change is an end of transaction change in the\n> > common code, we removed the LogicalDecodingContext->end_xact introduced\n> > in\n> > commit f95d53e.\n> >\n> > ~\n> >\n> > TBH, it was not clear to me that this change was an improvement. IIUC,\n> > it removes the \"unnecessary\" member, but only does that by replacing\n> > it everywhere with a boolean parameter passed to\n> > update_progress_and_keepalive(). So the end result seems no less code,\n> > but it is less readable code now because you need to know what the\n> > true/false parameter means. I wonder if it would have been better just\n> > to leave this how it was.\n>\n> Since I think we can know the meaning of the input based on the parameter name\n> of the function, I think both approaches are fine. But the approach in the\n> current patch can reduce a member of the structure, so I think this modification\n> looks good to me.\n>\n\nHmm, I am not so sure:\n\n- Why is reducing members of LogicalDecodingContext even a goal? I\nthought the LogicalDecodingContext is intended to be the one-stop\nplace to hold *all* things related to the \"Context\" (including that\nmember that was deleted).\n\n- How is reducing one member better than introducing one new parameter\nin multiple calls?\n\nAnyway, I think this exposes another problem. 
If you still want the\npatch to pass the 'finshed_xact' parameter separately then AFAICT the\nfirst parameter (ctx) now becomes unused/redundant in the\nWalSndUpdateProgressAndKeepalive function, so it ought to be removed.\n\n> > ======\n> > src/backend/replication/logical/logical.c\n> >\n> > 3. General - calls to is_skip_threshold_change()\n> >\n> > + if (is_skip_threshold_change(ctx))\n> > + update_progress_and_keepalive(ctx, false);\n> >\n> > There are multiple calls like this, which are guarding the\n> > update_progress_and_keepalive() with the is_skip_threshold_change()\n> > - See truncate_cb_wrapper\n> > - See message_cb_wrapper\n> > - See stream_change_cb_wrapper\n> > - See stream_message_cb_wrapper\n> > - See stream_truncate_cb_wrapper\n> > - See UpdateDecodingProgressAndKeepalive\n> >\n> > IIUC, then I was thinking all those conditions maybe can be pushed\n> > down *into* the wrapper, thereby making every calling code simpler.\n> >\n> > e.g. make the wrapper function code look similar to the current\n> > UpdateDecodingProgressAndKeepalive:\n> >\n> > BEFORE (update_progress_and_keepalive)\n> > {\n> > if (!ctx->update_progress_and_keepalive)\n> > return;\n> >\n> > ctx->update_progress_and_keepalive(ctx, ctx->write_location,\n> > ctx->write_xid, ctx->did_write,\n> > finished_xact);\n> > }\n> > AFTER\n> > {\n> > if (!ctx->update_progress_and_keepalive)\n> > return;\n> >\n> > if (finished_xact || is_skip_threshold_change(ctx))\n> > {\n> > ctx->update_progress_and_keepalive(ctx, ctx->write_location,\n> > ctx->write_xid, ctx->did_write,\n> > finished_xact);\n> > }\n> > }\n>\n> Since I want to keep the function update_progress_and_keepalive simple, I didn't\n> change it.\n\nHmm, the reason given seems like a false economy to me. You are able\nto keep this 1 function simpler only by adding more complexity to the\ncalls in 6 other places. 
Let's see if other people have opinions about\nthis.\n\n~~~\n\n1.\n+\n+static void UpdateProgressAndKeepalive(LogicalDecodingContext *ctx,\n+ bool finished_xact);\n+\n+static bool is_keepalive_threshold_exceeded(LogicalDecodingContext *ctx);\n\n1a.\nThere is an unnecessary extra blank line above the UpdateProgressAndKeepalive.\n\n~\n\n1b.\nI did not recognize a reason for the different naming conventions.\nHere are two new functions but one is CamelCase and one is snake_case.\nWhat are the rules to decide the naming?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 3 Mar 2023 11:18:04 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-03 11:18:04 +1100, Peter Smith wrote:\n> - Why is reducing members of LogicalDecodingContext even a goal? I\n> thought the LogicalDecodingContext is intended to be the one-stop\n> place to hold *all* things related to the \"Context\" (including that\n> member that was deleted).\n\nThere's not really a reason to keep it in LogicalDecodingContext after\nthis change. It was only needed there because of the broken\narchitectural model of calling UpdateProgress from within output\nplugins. Why set a field in each wrapper that we don't need?\n\n> - How is reducing one member better than introducing one new parameter\n> in multiple calls?\n\nReducing the member isn't important, needing to set it before each\ncallback however makes sense.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 Mar 2023 16:37:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Friday, March 3, 2023 8:18 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> On Wed, Mar 1, 2023 at 9:16 PM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Tues, Feb 28, 2023 at 9:12 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> > > Here are some comments for the v2-0001 patch.\r\n> > >\r\n> > > (I haven't looked at the v3 that was posted overnight; maybe some of\r\n> > > my comments have already been addressed.)\r\n> >\r\n> > Thanks for your comments.\r\n> >\r\n> > > ======\r\n> > > General\r\n> > >\r\n> > > 1. (Info from the commit message)\r\n> > > Since we can know whether the change is an end of transaction change\r\n> > > in the common code, we removed the\r\n> LogicalDecodingContext->end_xact\r\n> > > introduced in commit f95d53e.\r\n> > >\r\n> > > ~\r\n> > >\r\n> > > TBH, it was not clear to me that this change was an improvement.\r\n> > > IIUC, it removes the \"unnecessary\" member, but only does that by\r\n> > > replacing it everywhere with a boolean parameter passed to\r\n> > > update_progress_and_keepalive(). So the end result seems no less\r\n> > > code, but it is less readable code now because you need to know what\r\n> > > the true/false parameter means. I wonder if it would have been\r\n> > > better just to leave this how it was.\r\n> >\r\n> > Since I think we can know the meaning of the input based on the\r\n> > parameter name of the function, I think both approaches are fine. But\r\n> > the approach in the current patch can reduce a member of the\r\n> > structure, so I think this modification looks good to me.\r\n> >\r\n> \r\n...\r\n> \r\n> Anyway, I think this exposes another problem. If you still want the patch to pass\r\n> the 'finshed_xact' parameter separately then AFAICT the first parameter (ctx)\r\n> now becomes unused/redundant in the WalSndUpdateProgressAndKeepalive\r\n> function, so it ought to be removed.\r\n> \r\n\r\nI am not sure about this. 
The first parameter (ctx) has been present since\r\nthe Lag tracking feature. I think this is to make it consistent with the other\r\nLogicalOutputPluginWriter callbacks. In addition, this is a public callback\r\nfunction and users can implement their own logic in this callback based on the\r\ninterface, so removing this existing parameter doesn't look great to me. This\r\npatch does remove the existing skipped_xact, but that's because we decided\r\nto use another parameter, did_write, which can play a similar role.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Fri, 3 Mar 2023 02:27:49 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Fri, Mar 3, 2023 at 1:27 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, March 3, 2023 8:18 AM Peter Smith <smithpb2250@gmail.com> wrote:\n...\n> > Anyway, I think this exposes another problem. If you still want the patch to pass\n> > the 'finshed_xact' parameter separately then AFAICT the first parameter (ctx)\n> > now becomes unused/redundant in the WalSndUpdateProgressAndKeepalive\n> > function, so it ought to be removed.\n> >\n>\n> I am not sure about this. The first parameter (ctx) has been introduced since\n> the Lag tracking feature. I think this is to make it consistent with other\n> LogicalOutputPluginWriter callbacks. In addition, this is a public callback\n> function and user can implement their own logic in this callbacks based on\n> interface, removing this existing parameter doesn't look great to me. Although\n> this patch also removes the existing skipped_xact, but it's because we decide\n> to use another parameter did_write which can play a similar role.\n>\n\nOh right, that makes sense. Thanks.\n\nPerhaps it just wants some comment to mention that although the\nbuilt-in implementation does not use the 'ctx' users might implement\ntheir own logic which does use it.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 3 Mar 2023 13:43:06 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Fri, Mar 3, 2023 8:18 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n\r\nThanks for your comments.\r\n\r\n> 1.\r\n> +\r\n> +static void UpdateProgressAndKeepalive(LogicalDecodingContext *ctx,\r\n> + bool finished_xact);\r\n> +\r\n> +static bool is_keepalive_threshold_exceeded(LogicalDecodingContext *ctx);\r\n> \r\n> 1a.\r\n> There is an unnecessary extra blank line above the UpdateProgressAndKeepalive.\r\n\r\nRemoved.\r\n\r\n> ~\r\n> \r\n> 1b.\r\n> I did not recognize a reason for the different naming conventions.\r\n> Here are two new functions but one is CamelCase and one is snake_case.\r\n> What are the rules to decide the naming?\r\n\r\nI used the snake_case style for the function UpdateProgressAndKeepalive in the\r\nprevious version, but it was confusing because it shared the same parameter name\r\nwith the functions StartupDecodingContext, CreateInitDecodingContext and\r\nCreateDecodingContext. To avoid this confusion, and since both naming styles\r\nexist in this file, I changed it to CamelCase style.\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Mon, 6 Mar 2023 10:18:18 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "Dear Wang,\r\n\r\nThank you for updating the patch! Followings are my comments.\r\n\r\n---\r\n01. missing comments\r\nYou might miss the comment from Peter[1]. Or could you pin the related one?\r\n\r\n---\r\n02. LogicalDecodingProcessRecord()\r\n\r\nDon't we have to call UpdateDecodingProgressAndKeepalive() when there is no\r\ndecoding function? Assuming that the timeout parameter does not have enough time\r\nperiod and there are so many sequential operations in the transaction. At that time\r\nthere may be a possibility that timeout is occurred while calling ReorderBufferProcessXid()\r\nseveral times. It may be a bad example, but I meant to say that we may have to\r\nconsider the case that decoding function has not implemented yet.\r\n\r\n---\r\n03. stream_*_cb_wrapper\r\n\r\nOnly stream_*_cb_wrapper have comments \"don't call update progress, we didn't really make any\", but\r\nthere are more functions that does not send updates. Do you have any reasons why only they have?\r\n\r\n[1]: https://www.postgresql.org/message-id/CAHut%2BPsksiQHuv4A54R4w79TAvCu__PcuffKYY0V96e2z_sEvA%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 7 Mar 2023 07:55:04 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Tue, Mar 7, 2023 15:55 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Dear Wang,\r\n> \r\n> Thank you for updating the patch! Followings are my comments.\r\n\r\nThanks for your comments.\r\n\r\n> ---\r\n> 01. missing comments\r\n> You might miss the comment from Peter[1]. Or could you pin the related one?\r\n\r\nSince I think the functions WalSndPrepareWrite and WalSndWriteData have similar\r\nparameters and the HEAD has no related comments, I'm not sure whether we should\r\nadd them in this patch, or in a separate patch to comment atop these callback\r\nfunctions or where they are called.\r\n\r\n> ---\r\n> 02. LogicalDecodingProcessRecord()\r\n> \r\n> Don't we have to call UpdateDecodingProgressAndKeepalive() when there is no\r\n> decoding function? Assuming that the timeout parameter does not have enough\r\n> time\r\n> period and there are so many sequential operations in the transaction. At that\r\n> time\r\n> there may be a possibility that timeout is occurred while calling\r\n> ReorderBufferProcessXid()\r\n> several times. It may be a bad example, but I meant to say that we may have to\r\n> consider the case that decoding function has not implemented yet.\r\n\r\nI think it's ok in this function. If the decoding function has not been\r\nimplemented for a record, I think we quickly return to the loop in the function\r\nWalSndLoop, where it will try to send the keepalive message.\r\n\r\nBTW, in the previous discussion [1], we decided to ignore some paths, because\r\nthe gain from modifying them may not be so great.\r\n\r\n> ---\r\n> 03. stream_*_cb_wrapper\r\n> \r\n> Only stream_*_cb_wrapper have comments \"don't call update progress, we\r\n> didn't really make any\", but\r\n> there are more functions that does not send updates. 
Do you have any reasons\r\n> why only they have?\r\n\r\nAdded this comment to more functions.\r\nI think the following six functions don't call the function\r\nUpdateProgressAndKeepalive in v5 patch:\r\n- begin_cb_wrapper\r\n- begin_prepare_cb_wrapper\r\n- startup_cb_wrapper\r\n- shutdown_cb_wrapper\r\n- filter_prepare_cb_wrapper\r\n- filter_by_origin_cb_wrapper\r\n\r\nI think the comment you mentioned means that no new progress needs to be updated\r\nin this *_cb_wrapper. Also, I think we don't need to update the progress at the\r\nbeginning of a transaction, just like in HEAD. So, I added the same comment only\r\nin the 4 functions below:\r\n- startup_cb_wrapper\r\n- shutdown_cb_wrapper\r\n- filter_prepare_cb_wrapper\r\n- filter_by_origin_cb_wrapper\r\n\r\nAttach the new patch.\r\n\r\n[1] - https://www.postgresql.org/message-id/20230213180302.u5sqosteflr3zkiz%40awork3.anarazel.de\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Wed, 8 Mar 2023 02:54:26 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "Dear Wang,\r\n\r\nThank you for updating the patch! I have briefly tested your patch\r\nand it worked well in following case.\r\n\r\n* WalSndUpdateProgressAndKeepalive is called when many inserts have come\r\n but the publisher does not publish the insertion. PSA the script for this.\r\n* WalSndUpdateProgressAndKeepalive is called when the commit record is not\r\n related with the specified database\r\n* WalSndUpdateProgressAndKeepalive is called when many inserts for unlogged\r\n tables are done.\r\n\r\n> > ---\r\n> > 01. missing comments\r\n> > You might miss the comment from Peter[1]. Or could you pin the related one?\r\n> \r\n> Since I think the functions WalSndPrepareWrite and WalSndWriteData have\r\n> similar\r\n> parameters and the HEAD has no related comments, I'm not sure whether we\r\n> should\r\n> add them in this patch, or in a separate patch to comment atop these callback\r\n> functions or where they are called.\r\n\r\nMake sense, OK.\r\n\r\n> > ---\r\n> > 02. LogicalDecodingProcessRecord()\r\n> >\r\n> > Don't we have to call UpdateDecodingProgressAndKeepalive() when there is no\r\n> > decoding function? Assuming that the timeout parameter does not have enough\r\n> > time\r\n> > period and there are so many sequential operations in the transaction. At that\r\n> > time\r\n> > there may be a possibility that timeout is occurred while calling\r\n> > ReorderBufferProcessXid()\r\n> > several times. It may be a bad example, but I meant to say that we may have to\r\n> > consider the case that decoding function has not implemented yet.\r\n> \r\n> I think it's ok in this function. 
If the decoding function has not been\r\n> implemented for a record, I think we quickly return to the loop in the function\r\n> WalSndLoop, where it will try to send the keepalive message.\r\n\r\nI confirmed that and yes, we will go back to WalSndLoop().\r\n\r\n> BTW, in the previous discussion [1], we decided to ignore some paths, because\r\n> the gain from modifying them may not be so great.\r\n\r\nI missed the discussion, thanks. Based on that, the code seems right.\r\n\r\nFollowing are my comments.\r\n\r\n---\r\n```\r\n+/*\r\n+ * Update progress tracking and send keep alive (if required).\r\n+ */\r\n+static void\r\n+UpdateProgressAndKeepalive(LogicalDecodingContext *ctx, bool finished_xact)\r\n```\r\n\r\nCan we add a note atop UpdateProgressAndKeepalive()? Currently the developers who\r\ncreate output plugins must call OutputPluginUpdateProgress(), but from now on the\r\nfunction is not only renamed but is also no longer necessary to call from the plugin\r\n(of course we do not prohibit calling it). I think this must be clarified for them.\r\n\r\n---\r\nReorderBufferUpdateProgressTxnCB must be removed from typedefs.list.\r\n\r\n---\r\nDo we have to document the breakage somewhere? I think we do not have\r\nto add an appendix-obsolete-* file because we did not have any links for that, but\r\nwe can add a warning in the \"Functions for Producing Output\" subsection if needed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 8 Mar 2023 11:05:52 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "Hi,\r\n\r\n\r\nOn Wednesday, March 8, 2023 11:54 AM From: wangw.fnst@fujitsu.com <wangw.fnst@fujitsu.com> wrote:\r\n> Attach the new patch.\r\nThanks for sharing v6 ! Few minor comments for the same.\r\n\r\n(1) commit message\r\n\r\nThe old function name 'is_skip_threshold_change' is referred currently. We need to update it to 'is_keepalive_threshold_exceeded' I think.\r\n\r\n(2) OutputPluginPrepareWrite\r\n\r\n@@ -662,7 +656,8 @@ void\r\n OutputPluginPrepareWrite(struct LogicalDecodingContext *ctx, bool last_write)\r\n {\r\n if (!ctx->accept_writes)\r\n- elog(ERROR, \"writes are only accepted in commit, begin and change callbacks\");\r\n+ elog(ERROR, \"writes are only accepted in output plugin callbacks, \"\r\n+ \"except startup, shutdown, filter_by_origin, and filter_prepare.\");\r\n\r\nWe can remove the period at the end of error string.\r\n\r\n(3) is_keepalive_threshold_exceeded's comments\r\n\r\n+/*\r\n+ * Helper function to check whether a large number of changes have been skipped\r\n+ * continuously.\r\n+ */\r\n+static bool\r\n+is_keepalive_threshold_exceeded(LogicalDecodingContext *ctx)\r\n\r\nI suggest to update the comment slightly something like below.\r\nFrom:\r\n...whether a large number of changes have been skipped continuously\r\nTo:\r\n...whether a large number of changes have been skipped without being sent to the output plugin continuously\r\n\r\n(4) term for 'keepalive'\r\n\r\n+/*\r\n+ * Update progress tracking and send keep alive (if required).\r\n+ */\r\n\r\nThe 'keep alive' might be better to be replaced with 'keepalive', which looks commonest in other source codes. In the current patch, there are 3 different ways to express it (the other one is 'keep-alive') and it would be better to unify the term, at least within the same patch ?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 8 Mar 2023 15:54:36 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "Here are some review comments for v6-0001\n\n======\nGeneral.\n\n1.\nThere are lots of new comments saying:\n/* don't call update progress, we didn't really make any */\n\nbut is the wording \"call update progress\" meaningful?\n\nShould that be written something more like:\n/* No progress has been made so there is no need to call\nUpdateProgressAndKeepalive. */\n\n======\n\n2. rollback_prepared_cb_wrapper\n\n /*\n * If the plugin support two-phase commits then rollback prepared callback\n * is mandatory\n+ *\n+ * FIXME: This should have been caught much earlier.\n */\n if (ctx->callbacks.rollback_prepared_cb == NULL)\n ereport(ERROR,\n\n~\n\nWhy is this seemingly unrelated FIXME still in the patch? I thought it\nwas posted a while ago (See [1] comment #8) that this would be\ndeleted.\n\n~~~\n\n4.\n\n@@ -1370,6 +1377,8 @@ stream_abort_cb_wrapper(ReorderBuffer *cache,\nReorderBufferTXN *txn,\n\n /* Pop the error context stack */\n error_context_stack = errcallback.previous;\n+\n+ UpdateProgressAndKeepalive(ctx, (txn->toptxn == NULL));\n }\n\n~\n\nAre the double parentheses necessary?\n\n~~~\n\n5. UpdateProgressAndKeepalive\n\nI had previously suggested (See [2] comment #3) that the code might be\nsimplified if the \"is_keepalive_threshold_exceeded(ctx)\" check was\npushed down into this function, but it seems like nobody else gave any\nopinion for/against that idea yet... so the question still stands.\n\n======\nsrc/backend/replication/walsender.c\n\n6. WalSndUpdateProgressAndKeepalive\n\nSince the 'ctx' is unused here, it might be nicer to annotate that to\nmake it clear it is deliberate and suppress any possible warnings\nabout unused params.\n\ne.g. something like:\n\nWalSndUpdateProgressAndKeepalive(\npg_attribute_unused() LogicalDecodingContext *ctx,\nXLogRecPtr lsn,\nTransactionId xid,\nbool did_write,\nbool finished_xact)\n\n------\n[1] https://www.postgresql.org/message-id/OS3PR01MB6275C6CA72222C0C23730A319EAD9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n[2] https://www.postgresql.org/message-id/CAHut%2BPt3ZEMo-KTF%3D5KJSU%2BHdWJD19GPGGCKOmBeM47484Ychw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Thu, 9 Mar 2023 16:26:04 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 8:24 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> Attach the new patch.\n>\n\nI think this combines multiple improvements in one patch. We can\nconsider all of them together or maybe it would be better to split\nsome of those. Do we think it makes sense to split some of the\nimprovements? I could think of below:\n\n1. Remove SyncRepRequested() check from WalSndUpdateProgress().\n2. Add check of wal_sender_timeout > 0 in WalSndUpdateProgress() and\nany other similar place.\n3. Change the name of ProcessPendingWrites() to WalSndSendPending().\n4. Change WalSndUpdateProgress() to WalSndUpdateProgressAndKeepalive().\n5. The remaining patch.\n\nNow, for (1), we can consider backpatching but I am not sure if it is\nworth it because in the worst case, we will miss sending a keepalive.\nFor (4), it is not clear to me that we have a complete agreement on\nthe new name. Andres, do you have an opinion on the new name used in\nthe patch?\n\nIf we agree that we don't need to backpatch for (1) and the new name\nfor (4) is reasonable then we can commit 1-4 as one patch and then\nlook at the remaining patch.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 10 Mar 2023 09:25:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 10:56 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 2. rollback_prepared_cb_wrapper\n>\n> /*\n> * If the plugin support two-phase commits then rollback prepared callback\n> * is mandatory\n> + *\n> + * FIXME: This should have been caught much earlier.\n> */\n> if (ctx->callbacks.rollback_prepared_cb == NULL)\n> ereport(ERROR,\n>\n> ~\n>\n> Why is this seemingly unrelated FIXME still in the patch?\n>\n\nAfter reading this Fixme comment and the error message (\"logical\nreplication at prepare time requires a %s callback\nrollback_prepared_cb\"), I think we can move this and a similar check\nin function commit_prepared_cb_wrapper() to prepare_cb_wrapper()\nfunction. This is because there is no use of letting prepare pass when\nwe can't do a rollback or commit prepared. What do you think?\n\n>\n> 4.\n>\n> @@ -1370,6 +1377,8 @@ stream_abort_cb_wrapper(ReorderBuffer *cache,\n> ReorderBufferTXN *txn,\n>\n> /* Pop the error context stack */\n> error_context_stack = errcallback.previous;\n> +\n> + UpdateProgressAndKeepalive(ctx, (txn->toptxn == NULL));\n> }\n>\n> ~\n>\n> Are the double parentheses necessary?\n>\n\nPersonally, I find this style easier to follow.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 10 Mar 2023 10:02:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 3:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 9, 2023 at 10:56 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > 2. rollback_prepared_cb_wrapper\n> >\n> > /*\n> > * If the plugin support two-phase commits then rollback prepared callback\n> > * is mandatory\n> > + *\n> > + * FIXME: This should have been caught much earlier.\n> > */\n> > if (ctx->callbacks.rollback_prepared_cb == NULL)\n> > ereport(ERROR,\n> >\n> > ~\n> >\n> > Why is this seemingly unrelated FIXME still in the patch?\n> >\n>\n> After reading this Fixme comment and the error message (\"logical\n> replication at prepare time requires a %s callback\n> rollback_prepared_cb\"), I think we can move this and a similar check\n> in function commit_prepared_cb_wrapper() to prepare_cb_wrapper()\n> function. This is because there is no use of letting prepare pass when\n> we can't do a rollback or commit prepared. What do you think?\n>\n\nMy first impression was it sounds like a good idea to catch the\nmissing callbacks early as you said.\n\nBut if you decide to check for missing commit/rollback callbacks early\nin prepare_cb_wrapper(), then won't you also want to have equivalent\nchecking done earlier for stream_prepare_cb_wrapper()?\n\nAnd then it quickly becomes a slippery slope to question many other things:\n- Why allow startup_cb if shutdown_cb is missing?\n- Why allow change_cb if commit_cb or rollback_cb is missing?\n- Why allow filter_prepare_cb if prepare_cb is missing?\n- etc.\n\n~\n\nSo I am wondering if the HEAD code lazy-check of the callback only at\nthe point where it is needed was actually a deliberate design choice\njust to be simpler - e.g. we don't need to be so concerned about any\nother callback dependencies.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 10 Mar 2023 16:47:08 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 11:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Mar 10, 2023 at 3:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Mar 9, 2023 at 10:56 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > 2. rollback_prepared_cb_wrapper\n> > >\n> > > /*\n> > > * If the plugin support two-phase commits then rollback prepared callback\n> > > * is mandatory\n> > > + *\n> > > + * FIXME: This should have been caught much earlier.\n> > > */\n> > > if (ctx->callbacks.rollback_prepared_cb == NULL)\n> > > ereport(ERROR,\n> > >\n> > > ~\n> > >\n> > > Why is this seemingly unrelated FIXME still in the patch?\n> > >\n> >\n> > After reading this Fixme comment and the error message (\"logical\n> > replication at prepare time requires a %s callback\n> > rollback_prepared_cb\"), I think we can move this and a similar check\n> > in function commit_prepared_cb_wrapper() to prepare_cb_wrapper()\n> > function. This is because there is no use of letting prepare pass when\n> > we can't do a rollback or commit prepared. What do you think?\n> >\n>\n> My first impression was it sounds like a good idea to catch the\n> missing callbacks early as you said.\n>\n> But if you decide to check for missing commit/rollback callbacks early\n> in prepare_cb_wrapper(), then won't you also want to have equivalent\n> checking done earlier for stream_prepare_cb_wrapper()?\n>\n\nYeah, probably or we can leave the lazy checking as it is. In the\nideal case, we could check for the presence of all the callbacks in\nStartupDecodingContext() but we delay it to find the missing methods\nlater. One possibility is that we check for any missing method in\nStartupDecodingContext() if any one of prepare/streaming calls are\npresent but not sure if that is any better than the current\narrangement.\n\n> And then it quickly becomes a slippery slope to question many other things:\n> - Why allow startup_cb if shutdown_cb is missing?\n>\n\nI am not sure if there is a hard dependency between these two but\ntheir callers do check for Null before invoking those.\n\n> - Why allow change_cb if commit_cb or rollback_cb is missing?\n\nWe have a check for change_cb and commit_cb in LoadOutputPlugin. Do we\nhave rollback_cb() defined at all?\n\n> - Why allow filter_prepare_cb if prepare_cb is missing?\n>\n\nI am not so sure about this but if prepare gets filtered, we don't\nneed to invoke prepare_cb.\n\n> - etc.\n>\n> ~\n>\n> So I am wondering if the HEAD code lazy-check of the callback only at\n> the point where it is needed was actually a deliberate design choice\n> just to be simpler - e.g. we don't need to be so concerned about any\n> other callback dependencies.\n>\n\nYeah, changing that probably needs some more thought. I have mentioned\none of the possibilities above.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 10 Mar 2023 12:05:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Mon, Mar 10, 2023 11:56 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Mar 8, 2023 at 8:24 AM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach the new patch.\r\n> >\r\n> \r\n> I think this combines multiple improvements in one patch. We can\r\n> consider all of them together or maybe it would be better to split\r\n> some of those. Do we think it makes sense to split some of the\r\n> improvements? I could think of below:\r\n> \r\n> 1. Remove SyncRepRequested() check from WalSndUpdateProgress().\r\n> 2. Add check of wal_sender_timeout > 0 in WalSndUpdateProgress() and\r\n> any other similar place.\r\n> 3. Change the name of ProcessPendingWrites() to WalSndSendPending().\r\n> 4. Change WalSndUpdateProgress() to WalSndUpdateProgressAndKeepalive().\r\n> 5. The remaining patch.\r\n\r\nI think it would help to review different improvements separately, so I split\r\nthe patch as suggested.\r\n\r\nAlso addressed the comments by Kuroda-san, Osumi-san and Peter.\r\nAttach the new patch set.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Fri, 10 Mar 2023 09:32:28 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Wed, Mar 8, 2023 19:06 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Dear Wang,\r\n\r\nThanks for your testing and comments.\r\n\r\n> ---\r\n> ```\r\n> +/*\r\n> + * Update progress tracking and send keep alive (if required).\r\n> + */\r\n> +static void\r\n> +UpdateProgressAndKeepalive(LogicalDecodingContext *ctx, bool finished_xact)\r\n> ```\r\n> \r\n> Can we add atop the UpdateProgressAndKeepalive()? Currently the developers\r\n> who\r\n> create output plugins must call OutputPluginUpdateProgress(), but from now the\r\n> function is not only renamed but does not have nessesary to call from plugin\r\n> (of cource we do not restrict to call it). I think it must be clarified for them.\r\n\r\nMake sense.\r\nAdded some comments atop this function.\r\n\r\n> ---\r\n> ReorderBufferUpdateProgressTxnCB must be removed from typedefs.list.\r\n\r\nRemoved.\r\n\r\n> ---\r\n> Do we have to write a document for the breakage somewhere? I think we do not\r\n> have\r\n> to add appendix-obsolete-* file because we did not have any links for that, but\r\n> we can add a warning in \"Functions for Producing Output\" subsection if needed.\r\n\r\nSince we've moved the feature (update progress and send keepalive) from the\r\noutput plugin into the infrastructure, the output plugin is no longer\r\nresponsible for maintaining this feature anymore. Also, I think output plugin\r\ndevelopers only need to remove the call to the old function\r\nOutputPluginUpdateProgress if they get compile errors related to this\r\nmodification. So, it seems to me that we don't need to add relevant\r\nmodifications in pg-doc.\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Fri, 10 Mar 2023 09:33:42 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Wed, Mar 8, 2023 23:55 PM Osumi, Takamichi/大墨 昂道 <osumi.takamichi@fujitsu.com> wrote:\r\n> Hi,\r\n> \r\n> \r\n> On Wednesday, March 8, 2023 11:54 AM From: wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> > Attach the new patch.\r\n> Thanks for sharing v6 ! Few minor comments for the same.\r\n\r\nThanks for your comments.\r\n\r\n> (1) commit message\r\n> \r\n> The old function name 'is_skip_threshold_change' is referred currently. We need\r\n> to update it to 'is_keepalive_threshold_exceeded' I think.\r\n\r\nFixed.\r\n\r\n> (2) OutputPluginPrepareWrite\r\n> \r\n> @@ -662,7 +656,8 @@ void\r\n> OutputPluginPrepareWrite(struct LogicalDecodingContext *ctx, bool last_write)\r\n> {\r\n> if (!ctx->accept_writes)\r\n> - elog(ERROR, \"writes are only accepted in commit, begin and change\r\n> callbacks\");\r\n> + elog(ERROR, \"writes are only accepted in output plugin callbacks, \"\r\n> + \"except startup, shutdown, filter_by_origin, and filter_prepare.\");\r\n> \r\n> We can remove the period at the end of error string.\r\n\r\nRemoved.\r\n\r\n> (3) is_keepalive_threshold_exceeded's comments\r\n> \r\n> +/*\r\n> + * Helper function to check whether a large number of changes have been\r\n> skipped\r\n> + * continuously.\r\n> + */\r\n> +static bool\r\n> +is_keepalive_threshold_exceeded(LogicalDecodingContext *ctx)\r\n> \r\n> I suggest to update the comment slightly something like below.\r\n> From:\r\n> ...whether a large number of changes have been skipped continuously\r\n> To:\r\n> ...whether a large number of changes have been skipped without being sent to\r\n> the output plugin continuously\r\n\r\nMake sense.\r\nAlso, I slightly corrected the original function comment with a grammar check\r\ntool. So, the modified comment looks like this:\r\n```\r\nHelper function to check for continuous skipping of many changes without sending\r\nthem to the output plugin.\r\n```\r\n\r\n> (4) term for 'keepalive'\r\n> \r\n> +/*\r\n> + * Update progress tracking and send keep alive (if required).\r\n> + */\r\n> \r\n> The 'keep alive' might be better to be replaced with 'keepalive', which looks\r\n> commonest in other source codes. In the current patch, there are 3 different\r\n> ways to express it (the other one is 'keep-alive') and it would be better to unify\r\n> the term, at least within the same patch ?\r\n\r\nYes, agree.\r\nUnified the comment you mentioned here ('keep alive') and the comment in the\r\ncommit message ('keep-alive') as 'keepalive'.\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Fri, 10 Mar 2023 09:34:29 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Thur, Mar 9, 2023 13:26 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for v6-0001\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> General.\r\n> \r\n> 1.\r\n> There are lots of new comments saying:\r\n> /* don't call update progress, we didn't really make any */\r\n> \r\n> but is the wording \"call update progress\" meaningful?\r\n> \r\n> Should that be written something more like:\r\n> /* No progress has been made so there is no need to call\r\n> UpdateProgressAndKeepalive. */\r\n\r\nChanged.\r\nShortened your suggested comment using a grammar tool. So, the modified comment\r\nlooks like this:\r\n```\r\nNo progress has been made, so don't call UpdateProgressAndKeepalive\r\n```\r\n\r\n> ~~~\r\n> \r\n> 4.\r\n> \r\n> @@ -1370,6 +1377,8 @@ stream_abort_cb_wrapper(ReorderBuffer *cache,\r\n> ReorderBufferTXN *txn,\r\n> \r\n> /* Pop the error context stack */\r\n> error_context_stack = errcallback.previous;\r\n> +\r\n> + UpdateProgressAndKeepalive(ctx, (txn->toptxn == NULL));\r\n> }\r\n> \r\n> ~\r\n> \r\n> Are the double parentheses necessary?\r\n\r\nI think the code looks clearer this way.\r\n\r\n> ======\r\n> src/backend/replication/walsender.c\r\n> \r\n> 6. WalSndUpdateProgressAndKeepalive\r\n> \r\n> Since the 'ctx' is unused here, it might be nicer to annotate that to\r\n> make it clear it is deliberate and suppress any possible warnings\r\n> about unused params.\r\n> \r\n> e.g. something like:\r\n> \r\n> WalSndUpdateProgressAndKeepalive(\r\n> pg_attribute_unused() LogicalDecodingContext *ctx,\r\n> XLogRecPtr lsn,\r\n> TransactionId xid,\r\n> bool did_write,\r\n> bool finished_xact)\r\n\r\nBecause many functions don't use this approach, I’m not sure what the rules are\r\nfor using it in PG. And I think that we should discuss this on a separate thread\r\nto check which similar functions need this kind of modification in PG source\r\ncode.\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Fri, 10 Mar 2023 09:36:04 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Mon, Mar 10, 2023 14:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Mar 10, 2023 at 11:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> >\r\n> > On Fri, Mar 10, 2023 at 3:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > >\r\n> > > On Thu, Mar 9, 2023 at 10:56 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> > > >\r\n> > > > 2. rollback_prepared_cb_wrapper\r\n> > > >\r\n> > > > /*\r\n> > > > * If the plugin support two-phase commits then rollback prepared callback\r\n> > > > * is mandatory\r\n> > > > + *\r\n> > > > + * FIXME: This should have been caught much earlier.\r\n> > > > */\r\n> > > > if (ctx->callbacks.rollback_prepared_cb == NULL)\r\n> > > > ereport(ERROR,\r\n> > > >\r\n> > > > ~\r\n> > > >\r\n> > > > Why is this seemingly unrelated FIXME still in the patch?\r\n> > > >\r\n> > >\r\n> > > After reading this Fixme comment and the error message (\"logical\r\n> > > replication at prepare time requires a %s callback\r\n> > > rollback_prepared_cb\"), I think we can move this and a similar check\r\n> > > in function commit_prepared_cb_wrapper() to prepare_cb_wrapper()\r\n> > > function. This is because there is no use of letting prepare pass when\r\n> > > we can't do a rollback or commit prepared. What do you think?\r\n> > >\r\n> >\r\n> > My first impression was it sounds like a good idea to catch the\r\n> > missing callbacks early as you said.\r\n> >\r\n> > But if you decide to check for missing commit/rollback callbacks early\r\n> > in prepare_cb_wrapper(), then won't you also want to have equivalent\r\n> > checking done earlier for stream_prepare_cb_wrapper()?\r\n> >\r\n> \r\n> Yeah, probably or we can leave the lazy checking as it is. In the\r\n> ideal case, we could check for the presence of all the callbacks in\r\n> StartupDecodingContext() but we delay it to find the missing methods\r\n> later. One possibility is that we check for any missing method in\r\n> StartupDecodingContext() if any one of prepare/streaming calls are\r\n> present but not sure if that is any better than the current\r\n> arrangement.\r\n> \r\n> > And then it quickly becomes a slippery slope to question many other things:\r\n> > - Why allow startup_cb if shutdown_cb is missing?\r\n> >\r\n> \r\n> I am not sure if there is a hard dependency between these two but\r\n> their callers do check for Null before invoking those.\r\n> \r\n> > - Why allow change_cb if commit_cb or rollback_cb is missing?\r\n> \r\n> We have a check for change_cb and commit_cb in LoadOutputPlugin. Do we\r\n> have rollback_cb() defined at all?\r\n> \r\n> > - Why allow filter_prepare_cb if prepare_cb is missing?\r\n> >\r\n> \r\n> I am not so sure about this but If prepare gets filtered, we don't\r\n> need to invoke prepare_cb.\r\n> \r\n> > - etc.\r\n> >\r\n> > ~\r\n> >\r\n> > So I am wondering if the HEAD code lazy-check of the callback only at\r\n> > the point where it is needed was actually a deliberate design choice\r\n> > just to be simpler - e.g. we don't need to be so concerned about any\r\n> > other callback dependencies.\r\n> >\r\n> \r\n> Yeah, changing that probably needs some more thought. I have mentioned\r\n> one of the possibilities above.\r\n\r\nI think this approach looks fine to me. So, I wrote a separate patch (0006) for\r\ndiscussing and reviewing this approach.\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Fri, 10 Mar 2023 09:36:35 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "Hi,\r\n\r\n\r\nOn Friday, March 10, 2023 6:32 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com> wrote:\r\n> Attach the new patch set.\r\nThanks for updating the patch ! One review comment on v7-0005.\r\n\r\nstream_start_cb_wrapper and stream_stop_cb_wrapper don't call the pair of threshold check and UpdateProgressAndKeepalive unlike other write wrapper functions like below. But, both of them write some data to the output plugin, set the flag of did_write and thus it updates the subscriber's last_recv_timestamp used for timeout check in LogicalRepApplyLoop. So, it looks adding the pair to both functions can be more accurate, in order to reset the counter in changes_count on the publisher ?\r\n\r\n@@ -1280,6 +1282,8 @@ stream_start_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,\r\n\r\n /* Pop the error context stack */\r\n error_context_stack = errcallback.previous;\r\n+\r\n+ /* No progress has been made, so don't call UpdateProgressAndKeepalive */\r\n }\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 10 Mar 2023 12:17:16 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Fri, Mar 10, 2023 20:17 PM Osumi, Takamichi/大墨 昂道 <osumi.takamichi@fujitsu.com> wrote:\r\n> Hi,\r\n> \r\n> \r\n> On Friday, March 10, 2023 6:32 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com>\r\n> wrote:\r\n> > Attach the new patch set.\r\n> Thanks for updating the patch ! One review comment on v7-0005.\r\n\r\nThanks for your comment.\r\n\r\n> stream_start_cb_wrapper and stream_stop_cb_wrapper don't call the pair of\r\n> threshold check and UpdateProgressAndKeepalive unlike other write wrapper\r\n> functions like below. But, both of them write some data to the output plugin, set\r\n> the flag of did_write and thus it updates the subscriber's last_recv_timestamp\r\n> used for timeout check in LogicalRepApplyLoop. So, it looks adding the pair to\r\n> both functions can be more accurate, in order to reset the counter in\r\n> changes_count on the publisher ?\r\n> \r\n> @@ -1280,6 +1282,8 @@ stream_start_cb_wrapper(ReorderBuffer *cache,\r\n> ReorderBufferTXN *txn,\r\n> \r\n> /* Pop the error context stack */\r\n> error_context_stack = errcallback.previous;\r\n> +\r\n> + /* No progress has been made, so don't call UpdateProgressAndKeepalive */\r\n> }\r\n\r\nSince I think stream_start/stop_cb are different from change_cb, they don't\r\nrepresent records in the WAL, so I think the LSNs corresponding to these two\r\nmessages are the LSNs of other records. So, we don't call the function\r\nUpdateProgressAndKeepalive here. Also, for the reasons described in [1].#05, I\r\ndidn't reset the counter here.\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275374EBE7C8CABBE6730099EAF9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Mon, 13 Mar 2023 02:47:21 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Rework LogicalOutputPluginWriterUpdateProgress"
},
{
"msg_contents": "On Mon, 13 Mar 2023 at 08:17, wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Fri, Mar 10, 2023 20:17 PM Osumi, Takamichi/大墨 昂道 <osumi.takamichi@fujitsu.com> wrote:\n> > Hi,\n> >\n> >\n> > On Friday, March 10, 2023 6:32 PM Wang, Wei/王 威 <wangw.fnst@fujitsu.com>\n> > wrote:\n> > > Attach the new patch set.\n> > Thanks for updating the patch ! One review comment on v7-0005.\n>\n> Thanks for your comment.\n>\n> > stream_start_cb_wrapper and stream_stop_cb_wrapper don't call the pair of\n> > threshold check and UpdateProgressAndKeepalive unlike other write wrapper\n> > functions like below. But, both of them write some data to the output plugin, set\n> > the flag of did_write and thus it updates the subscriber's last_recv_timestamp\n> > used for timeout check in LogicalRepApplyLoop. So, it looks adding the pair to\n> > both functions can be more accurate, in order to reset the counter in\n> > changes_count on the publisher ?\n> >\n> > @@ -1280,6 +1282,8 @@ stream_start_cb_wrapper(ReorderBuffer *cache,\n> > ReorderBufferTXN *txn,\n> >\n> > /* Pop the error context stack */\n> > error_context_stack = errcallback.previous;\n> > +\n> > + /* No progress has been made, so don't call UpdateProgressAndKeepalive */\n> > }\n>\n> Since I think stream_start/stop_cp are different from change_cb, they don't\n> represent records in wal, so I think the LSNs corresponding to these two\n> messages are the LSNs of other records. So, we don't call the function\n> UpdateProgressAndKeepalive here. Also, for the reasons described in [1].#05, I\n> didn't reset the counter here.\n\nAs there has been no activity in this thread and it seems there is not\nmuch interest on this from the last 9 months, I have changed the\nstatus of the patch to \"Returned with Feedback\".\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 15 Jan 2024 21:50:50 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress"
}
]
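The mechanism this thread keeps returning to — counting changes that are skipped without being sent to the output plugin, and forcing a keepalive once that count crosses a threshold — can be sketched as a tiny standalone C illustration. The function name and the constant below mirror the names used in the discussion, but they are assumptions for illustration only, not the patch's actual code:

```c
#include <stdbool.h>

/*
 * Hypothetical sketch of the "keepalive threshold" idea: count changes
 * that are skipped without being sent to the output plugin, and ask for
 * a keepalive once the count crosses a threshold. Names and the constant
 * are illustrative assumptions, not the patch's actual code.
 */
#define CHANGES_THRESHOLD 100

static int changes_count = 0;

static bool
is_keepalive_threshold_exceeded(void)
{
	/* Called once per skipped change. */
	if (++changes_count < CHANGES_THRESHOLD)
		return false;

	/* Reset the counter once we decide to send a keepalive. */
	changes_count = 0;
	return true;
}
```

The reset on the `true` branch yields one keepalive per threshold's worth of skipped changes, so a long transaction whose changes are all filtered out still refreshes the subscriber's timeout.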
[
{
"msg_contents": "Hello pgsql-hackers!\n\nAs you may know there's a new version of UUID being standardized [0].\nThese new algorithms of UUID generation are very promising for\ndatabase performance. It keeps data locality for time-ordered values.\nFrom my POV v7 is especially needed for users. Current standard status\nis \"draft\". And I'm not sure it will be accepted before our feature\nfreeze for PG16. Maybe we could provide a draft implementation in 16\nand adjust it to the accepted version if the standard is changed? PFA\npatch with implementation.\n\nWhat do you think?\n\ncc Brad Peabody and Kyzer R. Davis, authors of the standard\ncc Kirk Wolak and Nik Samokhvalov who requested the feature\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format-04",
"msg_date": "Fri, 10 Feb 2023 15:57:50 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "UUID v7"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-10 15:57:50 -0800, Andrey Borodin wrote:\n> As you may know there's a new version of UUID being standardized [0].\n> These new algorithms of UUID generation are very promising for\n> database performance.\n\nI agree it's very useful to have.\n\n\n> [0] https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format-04\n\nThat looks to not be the current version anymore, it's superseded by:\nhttps://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis\n\n\n> It keeps data locality for time-ordered values.\n> From my POV v7 is especially needed for users. Current standard status\n> is \"draft\". And I'm not sure it will be accepted before our feature\n> freeze for PG16. Maybe we could provide a draft implementation in 16\n> and adjust it to the accepted version if the standard is changed? PFA\n> patch with implementation.\n\nHm. It seems somewhat worrisome to claim something is a v7 UUID when it might\nturn out to not be one.\n\n\nPerhaps we should name the function something like\ngen_time_ordered_random_uuid() instead? That gives us a bit more flexibility\nabout what uuid version we generate. And it might be easier for users, anyway.\n\nStill not sure what version we'd best use for now. Perhaps v8?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Feb 2023 17:14:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-10 15:57:50 -0800, Andrey Borodin wrote:\n>> From my POV v7 is especially needed for users. Current standard status\n>> is \"draft\". And I'm not sure it will be accepted before our feature\n>> freeze for PG16. Maybe we could provide a draft implementation in 16\n>> and adjust it to the accepted version if the standard is changed? PFA\n>> patch with implementation.\n\n> Hm. It seems somewhat worrisome to claim something is a v7 UUID when it might\n> turn out to not be one.\n\nI think there is no need to rush this into v16. Let's wait for the\nstandardization process to play out.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Feb 2023 20:18:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 5:14 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Perhaps we should name the function something like\n> gen_time_ordered_random_uuid() instead? That gives us a bit more flexibility\n> about what uuid version we generate. And it might be easier for users, anyway.\nI think users would be happy with any name.\n\n> Still not sure what version we'd best use for now. Perhaps v8?\nV8 is just a \"custom data\" format. Like \"place whatever you want\".\nThough I agree that its sample implementation looks to be better.\n\n\n\nOn Fri, Feb 10, 2023 at 5:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > Hm. It seems somewhat worrisome to claim something is a v7 UUID when it might\n> > turn out to not be one.\n>\n> I think there is no need to rush this into v16. Let's wait for the\n> standardization process to play out.\n>\n\nStandardization per se does not bring value to users. However, I agree\nthat eager users can just have it today as an extension and be happy\nwith it [0].\nMaybe it's fine to wait a year for others...\n\n\nBest regards, Andrey Borodin.\n\n\n[0] https://github.com/x4m/pg_uuid_next\n\n\n",
"msg_date": "Fri, 10 Feb 2023 18:53:25 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On 11.02.23 02:14, Andres Freund wrote:\n> On 2023-02-10 15:57:50 -0800, Andrey Borodin wrote:\n>> As you may know there's a new version of UUID being standardized [0].\n>> These new algorithms of UUID generation are very promising for\n>> database performance.\n> \n> I agree it's very useful to have.\n> \n> \n>> [0] https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format-04\n> \n> That looks to not be the current version anymore, it's superseded by:\n> https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis\n\nYes, this means that the draft that an individual had uploaded has now \nbeen taken on by a working group for formal review. If there is a \nprototype implementation, this is a good time to provide feedback. But \nit's too early to ship a production version.\n\n\n\n",
"msg_date": "Sat, 11 Feb 2023 16:50:39 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hello Group,\n\nI am happy to see others interested in the improvements provided by UUIDv7!\n\nI caught up on the thread and you all are correct.\n\nWork has moved on GitHub from uuid6/uuid6-ietf-draft to \nietf-wg-uuidrev/rfc4122bis\n- Draft 00 merged RFC4122 with Draft 04 and fixed as many problems as possible \nwith RFC4122.\n- Draft 01 continued to iterate on RFC4122 problems: \nhttps://author-tools.ietf.org/iddiff?url2=draft-ietf-uuidrev-rfc4122bis-01\n- Draft 02 items being changed are summarized in the latest PR for review in \nthe upcoming interim meeting (Feb 16th): \nhttps://github.com/ietf-wg-uuidrev/rfc4122bis/pull/60\nNote: Draft 02 should be published by the end of the week and long term we \nhave one more meeting at IETF 116 to iron out the replacement of RFC4122, \nperform last call and submit to the IESG for official review and consideration \nfor replacement of RFC4122 (actual timeline for that varies based on what IESG \nwants me to fix.)\n\nThat all being said:\nThe point is 99% of the work since adoption by the IETF has been ironing out \nRFC4122's problems and nothing major related to UUIDv6/7/8 which are all in a \nvery good state.\n\nIf anybody has any feedback found during draft reviewing or prototyping; \nplease either email uuidrev@ietf.org or drop an issue on the tracker:\nhttps://github.com/ietf-wg-uuidrev/rfc4122bis/issues\n\nLastly, I have added the C/SQL implementation to the prototypes page below:\nhttps://github.com/uuid6/prototypes\n\nThanks!\n\n-----Original Message-----\nFrom: Peter Eisentraut <peter.eisentraut@enterprisedb.com>\nSent: Saturday, February 11, 2023 10:51 AM\nTo: Andres Freund <andres@anarazel.de>; Andrey Borodin <amborodin86@gmail.com>\nCc: pgsql-hackers <pgsql-hackers@postgresql.org>; brad@peabody.io; \nwolakk@gmail.com; Kyzer Davis (kydavis) <kydavis@cisco.com>; Nikolay \nSamokhvalov <samokhvalov@gmail.com>\nSubject: Re: UUID v7\n\nOn 11.02.23 02:14, Andres Freund wrote:\n> On 2023-02-10 
15:57:50 -0800, Andrey Borodin wrote:\n>> As you may know there's a new version of UUID being standardized [0].\n>> These new algorithms of UUID generation are very promising for\n>> database performance.\n>\n> I agree it's very useful to have.\n>\n>\n>> [0]\n>> https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid\n>> -format-04\n>\n> That looks to not be the current version anymore, it's superseded by:\n> https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis\n\nYes, this means that the draft that an individual had uploaded has now been \ntaken on by a working group for formal review. If there is a prototype \nimplementation, this is a good time to provide feedback. But it's too early \nto ship a production version.",
"msg_date": "Tue, 14 Feb 2023 14:13:43 +0000",
"msg_from": "\"Kyzer Davis (kydavis)\" <kydavis@cisco.com>",
"msg_from_op": false,
"msg_subject": "RE: UUID v7"
},
{
"msg_contents": "On Tue, Feb 14, 2023 at 6:13 AM Kyzer Davis (kydavis) <kydavis@cisco.com> wrote:\n> I am happy to see others interested in the improvements provided by UUIDv7!\n\nThank you for providing the details!\n\nSome small updates as I see them:\n- there is revision 7 now in https://github.com/ietf-wg-uuidrev/rfc4122bis\n- noticing that there is no commitfest record, I created one:\nhttps://commitfest.postgresql.org/43/4388/\n- recent post by Ants Aasma, Cybertec about the downsides of\ntraditional UUID raised a big discussion today on HN:\nhttps://news.ycombinator.com/item?id=36429986\n\n\n",
"msg_date": "Thu, 22 Jun 2023 11:30:20 -0700",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 22 Jun 2023, at 20:30, Nikolay Samokhvalov <samokhvalov@gmail.com> wrote:\n\n> Some small updates as I see them:\n> - there is revision 7 now in https://github.com/ietf-wg-uuidrev/rfc4122bis\n> - noticing that there is no commitfest record, I created one:\n\nI will actually go ahead and close this entry in the current CF, not because we\ndon't want the feature but because it's unlikely that it will go in now given\nthat standardization is still underway. Committing anything right now seems\npremature, we might as well wait for standardization given that we have lots of\ntime before the v17 freeze.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 14:24:11 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Thu, 6 Jul 2023 at 14:24, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 22 Jun 2023, at 20:30, Nikolay Samokhvalov <samokhvalov@gmail.com> wrote:\n>\n> > Some small updates as I see them:\n> > - there is revision 7 now in https://github.com/ietf-wg-uuidrev/rfc4122bis\n> > - noticing that there is no commitfest record, I created one:\n>\n> I will actually go ahead and close this entry in the current CF, not because we\n> don't want the feature but because it's unlikely that it will go in now given\n> that standardization is still underway. Comitting anything right now seems\n> premature, we might as well wait for standardization given that we have lots of\n> time before the v17 freeze.\n\nI'd like to note that this draft has recently had its last call\nperiod, and has been proposed for publishing early last month. I don't\nknow how long this publishing process usually takes, but it seems like\nthe WG considers the text final, so unless this would take months I\nwouldn't mind keeping this patch around as \"waiting for external\nprocess to complete\". Sure, it's earlier than the actual release of\nthe standard, but that wasn't a blocker for SQL features that were\nconsidered finalized either.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 6 Jul 2023 15:29:50 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 6 Jul 2023, at 15:29, Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n> \n> On Thu, 6 Jul 2023 at 14:24, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 22 Jun 2023, at 20:30, Nikolay Samokhvalov <samokhvalov@gmail.com> wrote:\n>> \n>>> Some small updates as I see them:\n>>> - there is revision 7 now in https://github.com/ietf-wg-uuidrev/rfc4122bis\n>>> - noticing that there is no commitfest record, I created one:\n>> \n>> I will actually go ahead and close this entry in the current CF, not because we\n>> don't want the feature but because it's unlikely that it will go in now given\n>> that standardization is still underway. Comitting anything right now seems\n>> premature, we might as well wait for standardization given that we have lots of\n>> time before the v17 freeze.\n> \n> I'd like to note that this draft has recently had its last call\n> period, and has been proposed for publishing early last month.\n\nSure, but this document is in AD Evaluation and there are many stages left in\nthe IESG process, it may still take a fair bit of time before this is done.\n\n> Sure, it's earlier than the actual release of\n> the standard, but that wasn't a blocker for SQL features that were\n> considered finalized either.\n\nI can't speak for any SQL standard features we've committed before being\nstandardized, it's for sure not the norm for the project. I'm only commenting\non this particular Internet standard which we have plenty of time to commit\nbefore v17 without rushing to beat a standards committee.\n\nAlso, if you look you can see that I moved it to the next CF in a vague hope\nthat standardization will be swift (which it admittedly never is).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 15:43:06 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 6 Jul 2023, at 15:29, Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n>> Sure, it's earlier than the actual release of\n>> the standard, but that wasn't a blocker for SQL features that were\n>> considered finalized either.\n\n> I can't speak for any SQL standard features we've committed before being\n> standardized, it's for sure not the norm for the project.\n\nWe have done a couple of things that way recently. An important\nreason why we felt we could get away with that is that nowadays\nwe have people who actually sit on the SQL committee and have\nreliable information on what's likely to make it into the final text\nof the next version. I don't think we have equivalent visibility or\nshould have equivalent confidence about how UUID v7 standardization\nwill play out.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Jul 2023 10:02:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On 06.07.23 16:02, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 6 Jul 2023, at 15:29, Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n>>> Sure, it's earlier than the actual release of\n>>> the standard, but that wasn't a blocker for SQL features that were\n>>> considered finalized either.\n> \n>> I can't speak for any SQL standard features we've committed before being\n>> standardized, it's for sure not the norm for the project.\n> \n> We have done a couple of things that way recently. An important\n> reason why we felt we could get away with that is that nowadays\n> we have people who actually sit on the SQL committee and have\n> reliable information on what's likely to make it into the final text\n> of the next version. I don't think we have equivalent visibility or\n> should have equivalent confidence about how UUID v7 standardization\n> will play out.\n\n(I have been attending some meetings and I'm on the mailing list.)\n\nAnyway, I think it would be reasonable to review this patch now. We \nmight leave it hanging in \"Ready for Committer\" for a while when we get \nthere. But surely review can start now.\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 18:38:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 6 Jul 2023, at 21:38, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> I think it would be reasonable to review this patch now.\n+1.\n\nAlso, I think we should discuss UUID v8. UUID version 8 provides an RFC-compatible format for experimental or vendor-specific use cases. Revision 1 of the IETF draft contained interesting code for v8: very similar to v7, but with fields for \"node ID\" and \"rolling sequence number\".\nI think this is a reasonable approach, thus I attach an implementation of UUID v8 per [0]. But from my point of view this implementation has some flaws.\nThese two new fields \"node ID\" and \"sequence\" are there not for uniqueness, but rather for data locality.\nBut they are placed at the end, in bytes 14 and 15, after randomly generated numbers.\n\nI think that \"sequence\" is there to help generate locally ascending identifiers when the real-time clock does not provide enough resolution. So the \"sequence\" field must be placed after the 6 bytes of the time-generated identifier.\n\nOn the contrary, the \"node ID\" must differentiate identifiers generated on different nodes. So it makes sense to place the \"node ID\" before the timing. So identifiers generated on different nodes will tend to be in different ranges.\nAlthough, section \"6.4. Distributed UUID Generation\" states that the \"node ID\" is there to decrease the likelihood of a collision. So my intuition might be wrong here.\n\n\nDo we want to provide this \"vendor-specific\" UUID with tweaks for databases? Or should we limit the scope to the well-defined UUID v7?\n\n\nBest regards, Andrey Borodin.\n\n\n[0] https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis-01",
"msg_date": "Fri, 7 Jul 2023 17:06:19 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Great discussions, group,\n\n> I think it would be reasonable to review this patch now.\nI am happy to review the format and logic for any proposed v7 and/or v8\nUUID. Just point me to a PR or some code review location.\n\n> Distributed UUID Generation\" states that \"node ID\" is there to decrease\n> the likelihood of a collision.\nCorrect, node identifiers help provide some bit space that ensures no\ncollision in the event the stars align where two nodes create the exact\nsame UUID.\n\n From what I have seen, UUIDv7 should meet the requirements outlined thus far\nin this thread.\n\nAlso to add, there are two UUID prototypes for postgres from my checks\n(although they are outdated relative to the latest draft sent up for official\npublication, so review them from an academic perspective):\n- https://github.com/uuid6/prototypes\n- pg_uuid_next (see this thread which nicely summarizes some UUIDv7\n\"checkboxes\" https://github.com/x4m/pg_uuid_next/issues/1)\n- UUID_v7_for_Postgres.sql\n\nDon't forget, if we have UUIDv1 already implemented in the postgres code, you\nmay want to examine UUIDv6.\nUUIDv6 is simply a fork of that code and a swap of the timestamp bits.\nIn terms of effort, UUIDv6 is easy to implement and gives you a time-ordered\nUUID re-using 99% of the code you may already have.\n\nLastly, my advice on v8 is that I would examine/implement v6 or v7 first\nbefore jumping to v8,\nbecause whatever you do for implementing v6 or v7 will help you implement a\nbetter v8.\nThere are also a number of v8 prototype implementations (at the previous\nlink) if somebody wants to give them a scroll.\n\nHappy to answer any other questions where I can provide input.\n\nThanks,\n\n-----Original Message-----\nFrom: Andrey M. 
Borodin <x4mmm@yandex-team.ru> \nSent: Friday, July 7, 2023 8:06 AM\nTo: Peter Eisentraut <peter.eisentraut@enterprisedb.com>\nCc: Tom Lane <tgl@sss.pgh.pa.us>; Daniel Gustafsson <daniel@yesql.se>;\nMatthias van de Meent <boekewurm+postgres@gmail.com>; Nikolay Samokhvalov\n<samokhvalov@gmail.com>; Kyzer Davis (kydavis) <kydavis@cisco.com>; Andres\nFreund <andres@anarazel.de>; Andrey Borodin <amborodin86@gmail.com>;\nPostgreSQL Hackers <pgsql-hackers@postgresql.org>; brad@peabody.io;\nwolakk@gmail.com\nSubject: Re: UUID v7\n\n\n\n> On 6 Jul 2023, at 21:38, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> \n> I think it would be reasonable to review this patch now.\n+1.\n\nAlso, I think we should discuss UUID v8. UUID version 8 provides an\nRFC-compatible format for experimental or vendor-specific use cases.\nRevision 1 of IETF draft contained interesting code for v8: almost similar\nto v7, but with fields for \"node ID\" and \"rolling sequence number\".\nI think this is reasonable approach, thus I attach implementation of UUID v8\nper [0]. But from my point of view this implementation has some flaws.\nThese two new fields \"node ID\" and \"sequence\" are there not for uniqueness,\nbut rather for data locality.\nBut they are placed at the end, in bytes 14 and 15, after randomly generated\nnumbers.\n\nI think that \"sequence\" is there to help generate local ascending\nidentifiers when the real time clock do not provide enough resolution. So\n\"sequence\" field must be placed after 6 bytes of time-generated identifier.\n\nOn a contrary \"node ID\" must differentiate identifiers generated on\ndifferent nodes. So it makes sense to place \"node ID\" before timing. So\nidentifiers generated on different nodes will tend to be in different\nranges.\nAlthough, section \"6.4. Distributed UUID Generation\" states that \"node ID\"\nis there to decrease the likelihood of a collision. 
So my intuition might be\nwrong here.\n\n\nDo we want to provide this \"vendor-specific\" UUID with tweaks for databases?\nOr should we limit the scope with well defined UUID v7?\n\n\nBest regards, Andrey Borodin.\n\n\n[0] https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis-01",
"msg_date": "Fri, 7 Jul 2023 13:31:07 +0000",
"msg_from": "\"Kyzer Davis (kydavis)\" <kydavis@cisco.com>",
"msg_from_op": false,
"msg_subject": "RE: UUID v7"
},
{
"msg_contents": "On 07.07.23 14:06, Andrey M. Borodin wrote:\n> Also, I think we should discuss UUID v8. UUID version 8 provides an RFC-compatible format for experimental or vendor-specific use cases. Revision 1 of IETF draft contained interesting code for v8: almost similar to v7, but with fields for \"node ID\" and \"rolling sequence number\".\n> I think this is reasonable approach, thus I attach implementation of UUID v8 per [0].\n\nI suggest we keep this thread to v7, which has pretty straightforward \nsemantics for PostgreSQL. v8 by definition has many possible \nimplementations, so you're going to have to make pretty strong arguments \nthat yours is the best and only one, if you are going to claim the \ngen_uuid_v8 function name.\n\n\n\n",
"msg_date": "Mon, 10 Jul 2023 18:50:38 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 10 Jul 2023, at 21:50, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> I suggest we keep this thread to v7, which has pretty straightforward semantics for PostgreSQL. v8 by definition has many possible implementations, so you're going to have to make pretty strong arguments that yours is the best and only one, if you are going to claim the gen_uuid_v8 function name.\n\nThanks Peter, I'll follow this course of action.\n\nAfter discussion on GitHub with Sergey Prokhorenko [0] I understood that the counter is an optional but useful part of UUID v7. It actually promotes sortability of data generated at high speed.\nThe standard does not specify how big the counter should be. PFA a patch with a 16-bit counter. Maybe it's worth doing an 18-bit counter - it would save us one byte of PRNG data. Currently we only take 2 bits out of the whole random byte.\n\n\nBest regards, Andrey Borodin.\n\n\n[0] https://github.com/x4m/pg_uuid_next/issues/1#issuecomment-1657074776",
"msg_date": "Sun, 30 Jul 2023 15:08:41 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 30 Jul 2023, at 13:08, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> \n> After discussion on GitHub with Sergey Prokhorenko [0] I understood that counter is optional, but useful part of UUID v7. It actually promotes sortability of data generated at high speed.\n> The standard does not specify how big counter should be. PFA patch with 16 bit counter. Maybe it worth doing 18bit counter - it will save us one byte of PRNG data. Currently we only take 2 bits out of the whole random byte.\n> \n\nHere's a new patch version. Now the counter is initialised with strong random data on every time change (each ms). However, the first bit of the counter is kept at zero. This is done to extend counter capacity (I left comments with references to the RFC with explanations).\n\nThanks!\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 20 Aug 2023 23:56:34 +0300",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 20 Aug 2023, at 23:56, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> <v4-0001-Implement-UUID-v7-as-per-IETF-draft.patch>\n\nI've observed that pre-generating and buffering random numbers makes UUID generation 10 times faster.\n\nWithout buffering\npostgres=# with x as (select gen_uuid_v7() from generate_series(1,1e6)) select count(*) from x;\nTime: 5286.572 ms (00:05.287)\n\nWith buffering\npostgres=# with x as (select gen_uuid_v7() from generate_series(1,1e6)) select count(*) from x;\nTime: 390.091 ms\n\nThis can speed up gen_random_uuid() on the same scale too. PFA an implementation of this technique.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Mon, 21 Aug 2023 11:42:20 +0300",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 21 Aug 2023, at 13:42, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> <v5-0001-Implement-UUID-v7-as-per-IETF-draft.patch><v5-0002-Buffer-random-numbers.patch><v5-0003-Use-cached-random-numbers-in-gen_random_uuid-too.patch>\n\nPFA the next version.\nChanges:\n- implemented protection from time leap backwards when series is generated on the same backend\n- counter overflow is now translated into ms step forward\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Thu, 31 Aug 2023 00:04:46 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Andrey,\n\nThanks for all your work on this. I think this will be really useful.\n\n From a user perspective, it would be great to add 2 things:\n- A function to extract the timestamp from a V7 UUID (very useful for\ndefining constraints if partitioning by the uuid-embedded timestamps, for\ninstance).\n- Can we add an optional timestamptz argument to gen_uuid_v7 so that you\ncan explicitly specify a time instead of always generating for the current\ntime? If the argument is NULL, then use current time. This could be useful\nfor backfilling and other applications.\n\nThanks,\nMatvey Arye\nTimescale software developer.\n\n\nOn Wed, Aug 30, 2023 at 3:05 PM Andrey M. Borodin <x4mmm@yandex-team.ru>\nwrote:\n\n>\n>\n> > On 21 Aug 2023, at 13:42, Andrey M. Borodin <x4mmm@yandex-team.ru>\n> wrote:\n> >\n> >\n> <v5-0001-Implement-UUID-v7-as-per-IETF-draft.patch><v5-0002-Buffer-random-numbers.patch><v5-0003-Use-cached-random-numbers-in-gen_random_uuid-too.patch>\n>\n> FPA attached next version.\n> Changes:\n> - implemented protection from time leap backwards when series is generated\n> on the same backend\n> - counter overflow is now translated into ms step forward\n>\n>\n> Best regards, Andrey Borodin.\n>\n",
"msg_date": "Thu, 31 Aug 2023 11:32:36 -0400",
"msg_from": "Mat Arye <mat@timescaledb.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Thanks for interesting ideas, Mat!\n\n> On 31 Aug 2023, at 20:32, Mat Arye <mat@timescaledb.com> wrote:\n> \n> From a user perspective, it would be great to add 2 things:\n> - A function to extract the timestamp from a V7 UUID (very useful for defining constraints if partitioning by the uuid-embedded timestamps, for instance).\n\nWell, as far as I know, RFC discourages extracting timestamps from UUIDs. But we still can have such functions...maybe as an extension?\n\n> - Can we add an optional timestamptz argument to gen_uuid_v7 so that you can explicitly specify a time instead of always generating for the current time? If the argument is NULL, then use current time. This could be useful for backfilling and other applications.\n\nI think this makes sense. We could also have a counter as an argument. I'll try to implement that.\nHowever, so far I haven't figured out how to implement optional arguments for catalog functions. I'd appreciate any pointers here.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 31 Aug 2023 23:51:35 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "So I am in the process of reviewing the patch and hopefully can provide something there soon.\r\n\r\nHowever I want to address in the meantime the question of timestamp functions. I know that is outside the scope of this patch, but I would be in favor of adding them generally, not just as an extension but eventually into core. I understand (and generally agree with) the logic of not generally extracting timestamps from UUIDs or other such fields, but there are cases where it is really, really helpful to be able to do so. In particular, when you are troubleshooting misbehavior, all information you can get is helpful. And so extracting all of the subfields can be helpful.\r\n\r\nThe problem with putting this in an extension is that this is mostly useful when debugging systems (particularly larger distributed systems), and so the chances of it hitting a critical mass enough to be supported by all major cloud vendors is effectively zero.\r\n\r\nSo I am not asking for this to be included in this patch, but I am saying I would love to see these sorts of things contributed at some point to core.",
"msg_date": "Mon, 09 Oct 2023 10:15:45 +0000",
"msg_from": "Chris Travers <chris.travers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Thu, 31 Aug 2023 at 23:10, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> Well, as far as I know, RFC discourages extracting timestamps from UUIDs. But we still can have such functions...maybe as an extension?\n\nDo you know of any reason for that?\n\nI'd argue that the time argument shouldn't be optional. Asking the\nuser to supply time would force them to think whether they want to go\nwith `now()` or `clock_timestamp()` or something else.\n\n> However, so far I haven't figured out how to implement optional arguments for catalog functions. I'd appreciate any pointers here.\n\nI'd argue that the time argument shouldn't be optional. Asking the\nuser to supply time would force them to think whether they want to go\nwith `now()` or `clock_timestamp()` or something else.\n\nAlso, a shameless plug: my extension for UUID v1, which implements\nextract and create functions (and an opclass):\nhttps://github.com/pgnickb/uuid_v1_ops\n\n\n",
"msg_date": "Mon, 9 Oct 2023 18:46:17 +0200",
"msg_from": "Nick Babadzhanian <pgnickb@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Mon, 9 Oct 2023 at 18:46, Nick Babadzhanian <pgnickb@gmail.com> wrote:\n>\n> On Thu, 31 Aug 2023 at 23:10, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> > Well, as far as I know, RFC discourages extracting timestamps from UUIDs. But we still can have such functions...maybe as an extension?\n>\n> Do you know of any reason for that?\n\nNo reasons are given but the RFC states this:\n\n> UUIDs SHOULD be treated as opaque values and implementations SHOULD NOT examine the bits in a UUID to whatever extent is possible. However, where necessary, inspectors should refer to Section 4 for more information on determining UUID version and variant.\n\n> > However, so far I haven't figured out how to implement optional arguments for catalog functions. I'd appreciate any pointers here.\n>\n> I'd argue that the time argument shouldn't be optional. Asking the\n> user to supply time would force them to think whether they want to go\n> with `now()` or `clock_timestamp()` or something else.\n\nI think using `now()` is quite prone to sequence rollover. With the\ncurrent patch inserting more than 2^18~=0.26M rows into a table with\n`gen_uuid_v7()` as the default in a single transaction would already\ncause sequence rollover. I think using a monotonic clock source is the\nonly reasonable thing to do. From the RFC:\n\n> Implementations SHOULD use the current timestamp from a reliable source to provide values that are time-ordered and continually increasing. Care SHOULD be taken to ensure that timestamp changes from the environment or operating system are handled in a way that is consistent with implementation requirements. For example, if it is possible for the system clock to move backward due to either manual adjustment or corrections from a time synchronization protocol, implementations must decide how to handle such cases. (See Altering, Fuzzing, or Smearing bullet below.)\n\n\n",
"msg_date": "Mon, 9 Oct 2023 20:11:07 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> > Well, as far as I know, RFC discourages extracting timestamps from UUIDs. But we still can have such functions...maybe as an extension?\n> Do you know of any reason for that?\n\nI guess some of the detail may have been edited out over time with all of the changes, but it’s basically this: https://github.com/ietf-wg-uuidrev/rfc4122bis/blob/main/draft-ietf-uuidrev-rfc4122bis.md#opacity-opacity. The rationale is that when you introspect a UUID you essentially add interoperability concerns. E.g. if we say that applications can rely on being able to parse the timestamp from the UUID then it means that other implementations must provide guarantees about what that timestamp is. And since the point of a UUID is to provide a unique value, not to transmit additional metadata, the decision was made early on that it’s more realistic and representative of the reality of the situation to say that applications should generate values, try not to parse them if they don’t have to, but if they do it’s only going to be as accurate as the original data put into it. So systems with no NTP enabled, or that fuzz part of the time so as not to leak the exact moment in time something was done, etc - those are things that are going to happen and so buyer beware when parsing.\n\nIf the question is whether or not a function should exist to parse a timestamp from a UUID, I would say sure go ahead, just mention that the timestamp is only accurate as the input, and the spec doesn’t guarantee anything if your UUID came from another source. 
I imagine a common case would be UUIDs generated within the same database, and someone wants to extract the timestamp, which would be as reliable as the timestamp on the database machine - seems like a perfectly good case where supporting timestamp extraction has practical value.\n\n\n> On Oct 9, 2023, at 11:11 AM, Jelte Fennema <postgres@jeltef.nl> wrote:\n> \n> On Mon, 9 Oct 2023 at 18:46, Nick Babadzhanian <pgnickb@gmail.com> wrote:\n>> \n>> On Thu, 31 Aug 2023 at 23:10, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>> Well, as far as I know, RFC discourages extracting timestamps from UUIDs. But we still can have such functions...maybe as an extension?\n>> \n>> Do you know of any reason for that?\n> \n> No reasons are given but the RFC states this:\n> \n>> UUIDs SHOULD be treated as opaque values and implementations SHOULD NOT examine the bits in a UUID to whatever extent is possible. However, where necessary, inspectors should refer to Section 4 for more information on determining UUID version and variant.\n> \n>>> However, so far I haven't figured out how to implement optional arguments for catalog functions. I'd appreciate any pointers here.\n>> \n>> I'd argue that the time argument shouldn't be optional. Asking the\n>> user to supply time would force them to think whether they want to go\n>> with `now()` or `clock_timestamp()` or something else.\n> \n> I think using `now()` is quite prone to sequence rollover. With the\n> current patch inserting more than 2^18~=0.26M rows into a table with\n> `gen_uuid_v7()` as the default in a single transaction would already\n> cause sequence rollover. I think using a monotonic clock source is the\n> only reasonable thing to do. From the RFC:\n> \n>> Implementations SHOULD use the current timestamp from a reliable source to provide values that are time-ordered and continually increasing. 
Care SHOULD be taken to ensure that timestamp changes from the environment or operating system are handled in a way that is consistent with implementation requirements. For example, if it is possible for the system clock to move backward due to either manual adjustment or corrections from a time synchronization protocol, implementations must decide how to handle such cases. (See Altering, Fuzzing, or Smearing bullet below.)",
"msg_date": "Mon, 9 Oct 2023 11:42:20 -0700",
"msg_from": "Brad Peabody <brad@peabody.io>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Mon, Oct 9, 2023 at 11:11 PM Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n> I think using `now()` is quite prone to sequence rollover. With the\n> current patch inserting more than 2^18~=0.26M rows into a table with\n> `gen_uuid_v7()` as the default in a single transaction would already\n> cause sequence rollover.\n\nWell, the current patch will just use now()+1ms when 2^18 is\nexhausted. That would happen even if now() were passed as an argument (though the\ncurrent patch does not support an argument).\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Mon, 9 Oct 2023 23:46:06 +0500",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 9 Oct 2023, at 23:46, Andrey Borodin <amborodin86@gmail.com> wrote:\n\nHere's the next iteration of the patch. I've added get_uuid_v7_time().\nThis function extracts the timestamp from a uuid, iff it is v7. Timestamp correctness is only guaranteed if the timestamp was generated by the same implementation (6 bytes for milliseconds obtained by gettimeofday()).\nTests verify that get_uuid_v7_time(gen_uuid_v7()) differs by no more than 1ms from now(). Maybe we should allow more tolerant values for slow test machines.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 2 Jan 2024 14:17:42 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 2 Jan 2024, at 14:17, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> Tests verify that get_uuid_v7_time(gen_uuid_v7()) differs no more than 1ms from now(). Maybe we should allow more tolerant values for slow test machines.\n\nIndeed, CFbot complained about flaky tests. I've increased test tolerance to 100ms. (this does not affect test time)\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 2 Jan 2024 23:18:09 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Dear Andrey,\n\n1. Is it possible to add a function that returns the version of the \ngenerated uuid?\nIt will be very useful.\nI don't know if it's possible, but I think there are bits in the UUID \nthat inform about the version.\n\n2. If there is any doubt about adding the function to the main sources \n(standard development in progress), in my opinion you can definitely add \nthis function to the uuid-ossp extension.\n\n3. Wouldn't it be worth including UUID version 6 as well?\n\n4. Sometimes you will need to generate a uuid for historical time. There \nshould be an additional function gen_uuid_v7(timestamp).\n\nNevertheless, the need for uuid v6/7/8 is very high and I'm glad it's \ncoming to PostgreSQL. It should be a PG17 version.\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Wed, 3 Jan 2024 00:37:28 +0100",
"msg_from": "Przemysław Sztoch <przemyslaw@sztoch.pl>",
"msg_from_op": false,
"msg_subject": "Re: Re: UUID v7"
},
{
"msg_contents": "Hello Przemysław,\n\nthanks for your interest in this patch!\n\n> On 3 Jan 2024, at 04:37, Przemysław Sztoch <przemyslaw@sztoch.pl> wrote:\n> \n> 1. Is it possible to add a function that returns the version of the generated uuid?\n> It will be very useful. \n> I don't know if it's possible, but I think there are bits in the UUID that inform about the version.\nWhat do you think if we have functions get_uuid_v7_ver(uuid) and get_uuid_v7_var(uuid) to extract bit fields according to [0] ? Or, perhaps, this should be one function with two return parameters?\nIt's not in a patch yet, I'm just considering what this functionality should look like.\n\n> 2. If there is any doubt about adding the function to the main sources (standard development in progress), in my opinion you can definitely add this function to the uuid-ossp extension.\nFrom my POV we can just have this function in the core. OSSP support for UUID seems more or less dead [1]: \"Newsflash: 04-Jul-2008: Released OSSP uuid 1.6.2\". Or am I looking in the wrong place?\n\n> 3. Wouldn't it be worth including UUID version 6 as well?\nThe standard in [0] says \"Systems that do not involve legacy UUIDv1 SHOULD use UUIDv7 Section 5.7 instead.\" If there's a point in developing v6 - I'm OK to do so.\n\n> 4. Sometimes you will need to generate a uuid for historical time. There should be an additional function gen_uuid_v7(timestamp).\nDone, please see patch attached. But I changed signature to gen_uuid_v7(int8), to avoid messing with bytes from user who knows what they want. Or do you think gen_uuid_v7(timestamp) would be more convenient?\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n[0] https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis-14#uuidv7\n[1] http://www.ossp.org/",
"msg_date": "Thu, 4 Jan 2024 23:20:11 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "First of all, I'm a huge fan of UUID v7. So I'm very excited that this\nis progressing. I'm definitely going to look closer at this patch\nsoon. Some tiny initial feedback:\n\n(bikeshed) I'd prefer renaming `get_uuid_v7_time` to the shorter\n`uuid_v7_time`, the `get_` prefix seems rarely used in Postgres\nfunctions (e.g. `date_part` is not called `get_date_part`). Also it's\nvisually very similar to the gen_ prefix.\n\nOn Thu, 4 Jan 2024 at 19:20, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> > On 3 Jan 2024, at 04:37, Przemysław Sztoch <przemyslaw@sztoch.pl> wrote:\n> > 1. Is it possible to add a function that returns the version of the generated uuid?\n> > It will be very useful.\n> > I don't know if it's possible, but I think there are bits in the UUID that inform about the version.\n> What do you think if we have functions get_uuid_v7_ver(uuid) and get_uuid_v7_var(uuid) to extract bit fields according to [0] ? Or, perhaps, this should be one function with two return parameters?\n> It's not in a patch yet, I'm just considering how this functionality should look like.\n\nI do agree that those functions would be useful, especially now that\nwe're introducing a function that errors when it's passed a UUID\nthat's not of version 7. With the version extraction function you\ncould return something else for other uuids if you have many and not\nall of them are version 7.\n\nI do think though that these functions should not have v7 in their\nname, since they would apply to all uuids of all versions (so if also\nremoving the get_ prefix they would be called uuid_ver and uuid_var)\n\n> > 4. Sometimes you will need to generate a uuid for historical time. There should be an additional function gen_uuid_v7(timestamp).\n> Done, please see patch attached. But I changed signature to gen_uuid_v7(int8), to avoid messing with bytes from user who knows what they want. 
Or do you think gen_uuid_v7(timestamp) would be more convenient?\n\nI think timestamp would be quite useful. timestamp would encode the\ntime in the same way as gen_uuid_v7() would, but based on the given\ntime instead of the current time.\n\n\n",
"msg_date": "Thu, 4 Jan 2024 23:44:27 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Andrey M. Borodin wrote on 1/4/2024 7:20 PM:\n> Hello Przemysław,\n>\n> thanks for your interest in this patch!\n>\n>> On 3 Jan 2024, at 04:37, Przemysław Sztoch <przemyslaw@sztoch.pl> wrote:\n>>\n>> 1. Is it possible to add a function that returns the version of the generated uuid?\n>> It will be very useful.\n>> I don't know if it's possible, but I think there are bits in the UUID that inform about the version.\n> What do you think if we have functions get_uuid_v7_ver(uuid) and get_uuid_v7_var(uuid) to extract bit fields according to [0] ? Or, perhaps, this should be one function with two return parameters?\n> It's not in a patch yet, I'm just considering how this functionality should look like.\nuuid_ver(uuid) -> smallint/integer 1/3/4/5/6/7/8\n\nOf course there is RFC 4122 Variant \"bits: 10x\". If it is other variant \nthen uuid_ver should return -1 OR NULL.\nFor UUIDs generated by your patch this function should always return 7.\n>> 2. If there is any doubt about adding the function to the main sources (standard development in progress), in my opinion you can definitely add this function to the uuid-ossp extension.\n> From my POV we can just have this function in the core. OSSP support for UUID seems more or less dead [1]: \"Newsflash: 04-Jul-2008: Released OSSP uuid 1.6.2\". Or do I look into wrong place?\nAfter two days of thinking about UUID v7, I consider it a very important \nfunctionality that should be included in PG17.\n>> 3. Wouldn't it be worth including UUID version 6 as well?\n> The standard in [0] says \"Systems that do not involve legacy UUIDv1 SHOULD use UUIDv7 Section 5.7 instead.\" If there's a point in developing v6 - I'm OK to do so.\nIETF standard should provide information about possibility of conversion \nfrom v1 to v6.\nThen the usefulness of v6 is much greater and it would be worth \nimplementing this version as well.\n>> 4. Sometimes you will need to generate a uuid for historical time. 
There should be an additional function gen_uuid_v7(timestamp).\n> Done, please see patch attached. But I changed signature to gen_uuid_v7(int8), to avoid messing with bytes from user who knows what they want. Or do you think gen_uuid_v7(timestamp) would be more convenient?\nI talked to my colleagues and everyone chooses the timestamp version.\nIf timestamp is outside the allowed range, the function must return an \nerror.\n\nWe also talked about uuid-ossp. Still, v5 is a great solution in some \napplications.\nIt is worth moving this function from extension to PG17. Many people \ndon't use it because they don't know it and this uuid schema.\n\nWe think it would be quite reasonable to add:\nuuid_generate_v5 (namespace uuid, name text) -> uuid\nuuid_generate_v6 () -> uuid\nuuid_generate_v6 (timestamptz) -> uuid\nuuid_generate_v7() -> uuid\nuuid_generate_v7(timestamptz) -> uuid\nuuid_ver(uuid) -> smallint -1/1/2/3/4/5/6/7/8\nuuid_ts(uuid) -> timestamptz (for 1/6/7 version, for other should return \nNULL, error is too heavy in our opinion)\nuuid_v1_to_v6 (uuid) -> uuid\n\nThe naming of this family of functions needs to be rethought.\nDo we adopt the naming standard from Postgres and the uuid-ossp extension?\nOr should we continue with a slightly less accurate name for PG: \nget_random_uuid (get_random_uuid, get_uuid_v7)?\n\n5. Please add in docs reference to RFC4122 \n(https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis-14#uuid)\nPeople should read standards. :-)\n> Thanks!\n>\n> Best regards, Andrey Borodin.\n>\n>\n> [0] https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis-14#uuidv7\n> [1] http://www.ossp.org/\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Fri, 5 Jan 2024 09:52:41 +0100",
"msg_from": "Przemysław Sztoch <przemyslaw@sztoch.pl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hello Przemysław and Andrey,\nWhen naming functions, I would advise using the shorter abbreviation uuidv7 from the new version of the RFC instead of uuid_v7. When people search Google for new versions of UUIDs, they enter the abbreviation uuidv7 into the search bar. The name generate_uuidv7() looks good, as well as uuidv1_to_uuidv6() and timestamp_to_uuidv7().\nBest regards,\n\nSergey Prokhorenkosergeyprokhorenko@yahoo.com.au \n\n On Friday, 5 January 2024 at 11:53:04 am GMT+3, Przemysław Sztoch <przemyslaw@sztoch.pl> wrote: \n \n Andrey M. Borodin wrote on 1/4/2024 7:20 PM:\n\n Hello Przemysław,\n\nthanks for your interest in this patch!\n\n \nOn 3 Jan 2024, at 04:37, Przemysław Sztoch <przemyslaw@sztoch.pl> wrote:\n\n1. Is it possible to add a function that returns the version of the generated uuid?\nIt will be very useful. \nI don't know if it's possible, but I think there are bits in the UUID that inform about the version.\n\n What do you think if we have functions get_uuid_v7_ver(uuid) and get_uuid_v7_var(uuid) to extract bit fields according to [0] ? Or, perhaps, this should be one function with two return parameters?\nIt's not in a patch yet, I'm just considering how this functionality should look like.\nuuid_ver(uuid) -> smallint/integer 1/3/4/5/6/7/8\n\nOf course there is RFC 4122 Variant \"bits: 10x\". If it is other variant then uuid_ver should return -1 OR NULL.\nFor UUIDs generated by your patch this function should always return 7.\n\n \n2. If there is any doubt about adding the function to the main sources (standard development in progress), in my opinion you can definitely add this function to the uuid-ossp extension.\n\n From my POV we can just have this function in the core. OSSP support for UUID seems more or less dead [1]: \"Newsflash: 04-Jul-2008: Released OSSP uuid 1.6.2\". 
Or do I look into wrong place?\nAfter two days of thinking about UUID v7, I consider it a very important functionality that should be included in PG17.\n\n \n3. Wouldn't it be worth including UUID version 6 as well?\n\n The standard in [0] says \"Systems that do not involve legacy UUIDv1 SHOULD use UUIDv7 Section 5.7 instead.\" If there's a point in developing v6 - I'm OK to do so.\nIETF standard should provide information about possibility of conversion from v1 to v6.\nThen the usefulness of v6 is much greater and it would be worth implementing this version as well.\n\n \n4. Sometimes you will need to generate a uuid for historical time. There should be an additional function gen_uuid_v7(timestamp).\n\n Done, please see patch attached. But I changed signature to gen_uuid_v7(int8), to avoid messing with bytes from user who knows what they want. Or do you think gen_uuid_v7(timestamp) would be more convenient?\nI talked to my colleagues and everyone chooses the timestamp version.\nIf timestamp is outside the allowed range, the function must return an error.\n\nWe also talked about uuid-ossp. Still, v5 is a great solution in some applications.\nIt is worth moving this function from extension to PG17. Many people don't use it because they don't know it and this uuid schema.\n\nWe think it would be quite reasonable to add:\nuuid_generate_v5 (namespace uuid, name text) -> uuid\nuuid_generate_v6 () -> uuid\nuuid_generate_v6 (timestamptz) -> uuid\nuuid_generate_v7() -> uuid\nuuid_generate_v7(timestamptz) -> uuid\nuuid_ver(uuid) -> smallint -1/1/2/3/4/5/6/7/8\nuuid_ts(uuid) -> timestamptz (for 1/6/7 version, for other should return NULL, error is too heavy in our opinion)\nuuid_v1_to_v6 (uuid) -> uuid\n\nThe naming of this family of functions needs to be rethought.\nDo we adopt the naming standard from Postgres and the uuid-ossp extension?\nOr should we continue with a slightly less accurate name for PG: get_random_uuid (get_random_uuid, get_uuid_v7)?\n\n5. 
Please add in docs reference to RFC4122 (https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis-14#uuid)\nPeople should read standards. :-)\n\n Thanks!\n\nBest regards, Andrey Borodin.\n\n\n[0] https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis-14#uuidv7\n[1] http://www.ossp.org/\n\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Fri, 5 Jan 2024 10:57:47 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 5 Jan 2024, at 15:57, Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au> wrote:\n\nSergey, Przemysław, Jelte, thanks for your feedback.\nHere's v9. Changes:\n1. Swapped type of the argument to timestamptz in gen_uuid_v7()\n2. Renamed get_uuid_v7_time() to uuid_v7_time()\n3. Added uuid_ver() and uuid_var().\n\nWhat do you think?\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 16 Jan 2024 17:15:07 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi Andrey,\n\n> Sergey, Przemysław, Jelte, thanks for your feedback.\n> Here's v9. Changes:\n> 1. Swapped type of the argument to timestamptz in gen_uuid_v7()\n> 2. Renamed get_uuid_v7_time() to uuid_v7_time()\n> 3. Added uuid_ver() and uuid_var().\n>\n> What do you think?\n\nMany thanks for the updated patch. It's an important work and I very\nmuch hope we will see this in the upcoming PG release.\n\n```\n+Datum\n+pg_node_tree_in(PG_FUNCTION_ARGS)\n+{\n+ if (!IsBootstrapProcessingMode())\n+ elog(ERROR, \"cannot accept a value of type pg_node_tree_in\");\n+ return textin(fcinfo);\n+}\n```\n\nNot 100% sure what this is for. Any chance this could be part of another patch?\n\nOne thing I don't particularly like about the tests is the fact that\nthey don't check if a correct UUID was actually generated. I realize\nthat's not quite trivial due to the random nature of the function, but\nmaybe we could use some substring/regex magic here? Something like:\n\n```\nselect gen_uuid_v7() :: text ~ '^[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}$';\n ?column?\n----------\n t\n\nselect regexp_replace(gen_uuid_v7('2024-01-16 15:45:33 MSK') :: text,\n'[0-9a-f]{4}-[0-9a-f]{12}$', 'XXXX-' || repeat('X', 12));\n regexp_replace\n--------------------------------------\n 018d124e-39c8-74c7-XXXX-XXXXXXXXXXXX\n```\n\n\n```\n+ proname => 'uuid_v7_time', proleakproof => 't', provolatile => 'i',\n```\n\nI don't think we conventionally specify IMMUTABLE volatility, it's the\ndefault. Other values also are worth checking.\n\nAnother question: how did you choose between using TimestampTz and\nTimestamp types? I realize that internally it's all the same. Maybe\nTimestamp will be slightly better since the way it is displayed\ndoesn't depend on the session settings. 
Many people I talked to find\nthis part of TimestampTz confusing.\n\nAlso I would like to point out that part of the documentation is\nmissing, but I guess at this stage of the game it's OK.\n\nLast but not least: maybe we should support casting Timestamp[Tz] to\nUUIDv7 and vice versa? Shouldn't be difficult to implement and I\nsuspect somebody will request this eventually. During the cast to UUID\nwe will always get the same value for the given Timestamp[Tz], which\nprobably can be useful in certain applications. It can't be done with\ngen_uuid_v7() and its volatility doesn't permit it.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 16 Jan 2024 16:00:57 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Thanks for your review, Aleksander!\n\n> On 16 Jan 2024, at 18:00, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> \n> \n> ```\n> +Datum\n> +pg_node_tree_in(PG_FUNCTION_ARGS)\n> +{\n> + if (!IsBootstrapProcessingMode())\n> + elog(ERROR, \"cannot accept a value of type pg_node_tree_in\");\n> + return textin(fcinfo);\n> +}\n> ```\n> \n> Not 100% sure what this is for. Any chance this could be part of another patch?\nNope, it’s necessary there. Without these changes catalog functions cannot have defaults for arguments. These defaults have type pg_node_tree, which has a no-op 'in' function.\n\n> One thing I don't particularly like about the tests is the fact that\n> they don't check if a correct UUID was actually generated. I realize\n> that's not quite trivial due to the random nature of the function, but\n> maybe we could use some substring/regex magic here? Something like:\n> \n> ```\n> select gen_uuid_v7() :: text ~ '^[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}$';\n> ?column?\n> ----------\n> t\n> \n> select regexp_replace(gen_uuid_v7('2024-01-16 15:45:33 MSK') :: text,\n> '[0-9a-f]{4}-[0-9a-f]{12}$', 'XXXX-' || repeat('X', 12));\n> regexp_replace\n> --------------------------------------\n> 018d124e-39c8-74c7-XXXX-XXXXXXXXXXXX\n> ```\nAny 16 bytes which have the ver and var bits (6 bits total) are a correct UUID.\nThis is checked by tests when the uuid_var() and uuid_ver() functions are exercised.\n\n> ```\n> + proname => 'uuid_v7_time', proleakproof => 't', provolatile => 'i',\n> ```\n> \n> I don't think we conventionally specify IMMUTABLE volatility, it's the\n> default. Other values also are worth checking.\nMakes sense, I’ll drop these values in the next version.\nBTW I’m in doubt whether the provided functions are leakproof. They ERROR-out with messages that can give a clue about several bits of the UUID. Does this break leakproofness? 
I think yes, but I’m not sure.\ngen_uuid_v7() seems leakproof to me.\n\n> Another question: how did you choose between using TimestampTz and\n> Timestamp types? I realize that internally it's all the same. Maybe\n> Timestamp will be slightly better since the way it is displayed\n> doesn't depend on the session settings. Many people I talked to find\n> this part of TimestampTz confusing.\n\nI mean, this argument is expected to be used to implement K-way sorted identifiers. In this context, it seems to me, it’s good to remind the developer that time shifts also depend on timezones.\nBut this is too vague.\nDo you have any reasons that apply to UUID generation?\n\n> Also I would like to point out that part of the documentation is\n> missing, but I guess at this stage of the game it's OK.\n> \n> Last but not least: maybe we should support casting Timestamp[Tz] to\n> UUIDv7 and vice versa? Shouldn't be difficult to implement and I\n> suspect somebody will request this eventually. During the cast to UUID\n> we will always get the same value for the given Timestamp[Tz], which\n> probably can be useful in certain applications. It can't be done with\n> gen_uuid_v7() and its volatility doesn't permit it.\nI’m strongly opposed to doing this cast. I was not adding this function to extract the timestamp from a UUID, because the standard does not recommend it. But a lot of people asked for this.\nBut supporting an easy way to do an unrecommended thing seems bad.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 16 Jan 2024 19:44:33 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Andrey,\n\nIt is not clear how to interpret uuid_v7_time(): \n - uuid_v7 to time() (extracting the timestamp)\n - time() to uuid_v7 (generation of the uuid_v7)\n It is worth improving the naming, for example, adding prepositions.\n\n\nSergey Prokhorenkosergeyprokhorenko@yahoo.com.au \n\n On Tuesday, 16 January 2024 at 05:44:51 pm GMT+3, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote: \n \n Thanks for your review, Aleksander!\n\n> On 16 Jan 2024, at 18:00, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> \n> \n> ```\n> +Datum\n> +pg_node_tree_in(PG_FUNCTION_ARGS)\n> +{\n> + if (!IsBootstrapProcessingMode())\n> + elog(ERROR, \"cannot accept a value of type pg_node_tree_in\");\n> + return textin(fcinfo);\n> +}\n> ```\n> \n> Not 100% sure what this is for. Any chance this could be part of another patch?\nNope, it’s necessary there. Without these changes catalog functions cannot have defaults for arguments. These defaults have type pg_node_tree which has no-op in function.\n\n> One thing I don't particularly like about the tests is the fact that\n> they don't check if a correct UUID was actually generated. I realize\n> that's not quite trivial due to the random nature of the function, but\n> maybe we could use some substring/regex magic here? 
Something like:\n> \n> ```\n> select gen_uuid_v7() :: text ~ '^[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}$';\n> ?column?\n> ----------\n> t\n> \n> select regexp_replace(gen_uuid_v7('2024-01-16 15:45:33 MSK') :: text,\n> '[0-9a-f]{4}-[0-9a-f]{12}$', 'XXXX-' || repeat('X', 12));\n> regexp_replace\n> --------------------------------------\n> 018d124e-39c8-74c7-XXXX-XXXXXXXXXXXX\n> ```\nAny 8 bytes which have ver and var bits (6 bits total) are correct UUID.\nThis is checked by tests when uuid_var() and uuid_ver() functions are exercised.\n\n> ```\n> + proname => 'uuid_v7_time', proleakproof => 't', provolatile => 'i',\n> ```\n> \n> I don't think we conventionally specify IMMUTABLE volatility, it's the\n> default. Other values also are worth checking.\nMakes sense, I’ll drop this values in next version.\nBTW I’m in doubt if provided functions are leakproof. They ERROR-out with messages that can give a clue about several bits of UUID. Does this break leakproofness? I think yest, but I’m not sure.\ngen_uuid_v7() seems leakproof to me.\n\n> Another question: how did you choose between using TimestampTz and\n> Timestamp types? I realize that internally it's all the same. Maybe\n> Timestamp will be slightly better since the way it is displayed\n> doesn't depend on the session settings. Many people I talked to find\n> this part of TimestampTz confusing.\n\nI mean, this argument is expected to be used to implement K-way sorted identifiers. In this context, it seems to me, it’s good to remember to developer that time shift also depend on timezones.\nBut this is too vague.\nDo you have any reasons that apply to UUID generation?\n\n> Also I would like to point out that part of the documentation is\n> missing, but I guess at this stage of the game it's OK.\n> \n> Last but not least: maybe we should support casting Timestamp[Tz] to\n> UUIDv7 and vice versa? Shouldn't be difficult to implement and I\n> suspect somebody will request this eventually. 
During the cast to UUID\n> we will always get the same value for the given Timestamp[Tz], which\n> probably can be useful in certain applications. It can't be done with\n> gen_uuid_v7() and its volatility doesn't permit it.\nI’m strongly opposed to doing this cast. I was not adding this function to extract timestamp from UUID, because standard does not recommend it. But a lot of people asked for this.\nBut supporting easy way to do unrecommended thing seem bad.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 16 Jan 2024 16:49:13 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 16 Jan 2024, at 21:49, Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au> wrote:\n> \n> It is not clear how to interpret uuid_v7_time():\n> \t• uuid_v7 to time() (extracting the timestamp)\n> \t• time() to uuid_v7 (generation of the uuid_v7)\n> It is worth improving the naming, for example, adding prepositions.\n\nPreviously, Jelte had some thoughts on idiomatic function names.\n\nJelte, what is your opinion on naming the function which extracts the timestamp from a UUID v7?\nOf course, it would be great to hear opinions from anyone else.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 16 Jan 2024 23:17:40 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Tue, 16 Jan 2024 at 15:44, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> > On 16 Jan 2024, at 18:00, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> > Not 100% sure what this is for. Any chance this could be part of another patch?\n> Nope, it’s necessary there. Without these changes catalog functions cannot have defaults for arguments. These defaults have type pg_node_tree which has no-op in function.\n\nThat seems like the wrong way to make that work then. How about\ninstead we define the same function name twice, once with and once\nwithout a timestamp argument. That's how this is done for other\nfunctions that are overloaded in pg_catalog.\n\n\n",
"msg_date": "Tue, 16 Jan 2024 21:10:37 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Andrey Borodin wrote on 1/16/2024 1:15 PM:\n> Sergey, Przemysław, Jelte, thanks for your feedback.\n> Here's v9. Changes:\n> 1. Swapped type of the argument to timestamptz in gen_uuid_v7()\nPlease update the docs part about the optional timestamp argument.\n> 2. Renamed get_uuid_v7_time() to uuid_v7_time()\nPlease rename uuid_v7_time to uuid_time() and add support for v1 and v6.\nIf the version is incompatible then return NULL.\n> 3. Added uuid_ver() and uuid_var().\nLooks good.\nBut for me, throwing an error is problematic. Wouldn't it be better to \nreturn -1?\n> What do you think?\n> Best regards, Andrey Borodin.\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Tue, 16 Jan 2024 21:20:58 +0100",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Tue, 16 Jan 2024 at 19:17, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> Jelte, what is your opinion on naming the function which extracts timestamp from UUID v7?\n\nI looked at a few more datatypes: json, jsonb & hstore. The get_\nprefix is not used there at all, so I'm still opposed to that. But\nthey seem to use either an _to_ or an _extract_ infix. _to_ is then\nused for conversion of the whole object, and _extract_ is used to\nextract a subset. So I think _extract_ would fit well here.\n\nOn Fri, 5 Jan 2024 at 11:57, Sergey Prokhorenko\n<sergeyprokhorenko@yahoo.com.au> wrote:\n> When naming functions, I would advise using the shorter abbreviation uuidv7 from the new version of the RFC instead of uuid_v7.\n\nI also agree with that, uuid_v7 looks weird to my eyes. The RFC also\nabbreviates them as UUIDv7 (without a space).\n\nThe more I look at it the more I also think the gen_ prefix is quite\nstrange, and I already thought the gen_random_uuid name was quite\nweird. But now that we will also have a uuidv7 I think it's even\nstranger that one uses the name from the RFC.\n\nThe name of gen_random_uuid was taken verbatim from pgcrypto, without\nany discussion on the list[0]:\n\n> Here is a proposed patch for this. I did a fair bit of looking around\n> in other systems for a naming pattern but didn't find anything\n> consistent. So I ended up just taking the function name and code from\n> pgcrypto.\n\n\nSo currently my preference for the function names would be:\n\n- uuidv4() -> alias for gen_random_uuid()\n- uuidv7()\n- uuidv7(timestamptz)\n- uuid_extract_ver(uuid)\n- uuid_extract_var(uuid)\n- uuidv7_extract_time(uuid)\n\n[0]: https://www.postgresql.org/message-id/flat/6a65610c-46fc-2323-6b78-e8086340a325%402ndquadrant.com#76e40e950a44aa8b6844297e8d2efe2c\n\n\n",
"msg_date": "Tue, 16 Jan 2024 21:25:58 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Jelte Fennema-Nio wrote on 1/16/2024 9:25 PM:\n> On Tue, 16 Jan 2024 at 19:17, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> So currently my preference for the function names would be:\n>\n> - uuidv4() -> alias for gen_random_uuid()\n> - uuidv7()\n> - uuidv7(timestamptz)\n> - uuid_extract_ver(uuid)\n> - uuid_extract_var(uuid)\n> - uuidv7_extract_time(uuid)\n+1\nBut replace uuidv7_extract_time(uuid) with uuid_extract_time(uuid) - \nthe function should be able to extract the timestamp from v1/v6/v7\n\nI would highly recommend adding:\nuuidv5(namespace uuid, name text) -> uuid\nusing uuid_generate_v5 from the uuid-ossp extension \n(https://www.postgresql.org/docs/current/uuid-ossp.html)\nIt is an important version and should be included in the main PG \ncode.\n\nJelte: Please propose the name of the function that will convert a uuid \nfrom version 1 to 6.\nv6 is almost as good as v7 for indexes. And v6 allows you to convert \nfrom v1 which some people use.\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Tue, 16 Jan 2024 22:02:00 +0100",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> Another question: how did you choose between using TimestampTz and\n> Timestamp types? I realize that internally it's all the same. Maybe\n> Timestamp will be slightly better since the way it is displayed\n> doesn't depend on the session settings. Many people I talked to find\n> this part of TimestampTz confusing.\ntimestamptz internally always stores UTC.\nI believe that in SQL, when operating with time in UTC, you should \nalways use timestamptz.\ntimestamp is theoretically the same thing. But internally it does not \nconvert time to UTC and will lead to incorrect use.\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Tue, 16 Jan 2024 22:09:56 +0100",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Tue, 16 Jan 2024 at 22:02, Przemysław Sztoch <przemyslaw@sztoch.pl> wrote:\n> But replace uuidv7_extract_time(uuid) with uuid_extract_time(uuid) - function should be able extract timestamp from v1/v6/v7\n\nI'm fine with this.\n\n> I would highly recommend to add:\n> uuidv5(namespace uuid, name text) -> uuid\n> using uuid_generate_v5 from uuid-ossp extension (https://www.postgresql.org/docs/current/uuid-ossp.html)\n> There is an important version and it should be included into the main PG code.\n\nI think adding more uuid versions would probably be desirable. But I\ndon't think it makes sense to clutter this patchset with that. I feel\nlike on this uuidv7 patchset we've had enough discussion that it could\nreasonably get into PG17, but I think adding even more uuid versions\nto this patchset would severely reduce the chances of that happening.\n\n\n",
"msg_date": "Tue, 16 Jan 2024 22:19:31 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 17 Jan 2024, at 02:19, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n\nI want to ask Kyzer or Brad, I hope they will see this message. I'm working on the patch for time extraction for v1, v6 and v7.\n\nDo I understand correctly that UUIDs contain local time, not UTC time? For example, in [0], in \"A.6. Example of a UUIDv7 Value\", I see that February 22, 2022 2:22:22.00 PM GMT-05:00 results in unix_ts_ms = 0x017F22E279B0, which is not UTC, but local time.\nIs it intentional? Section \"5.1. UUID Version 1\" states otherwise.\n\nIf so, I should swap signatures of the functions from TimestampTz to Timestamp.\nI'm hard-coding examples from this standard into tests, so I want to be precise...\n\nIf I follow the standard I see this in tests:\n+-- extract UUID v1, v6 and v7 timestamp\n+SELECT uuid_extract_time('C232AB00-9414-11EC-B3C8-9F6BDECED846') at time zone 'GMT-05';\n+ timezone \n+--------------------------\n+ Wed Feb 23 00:22:22 2022\n+(1 row)\n\nCurrent patch version attached. I've addressed all other requests: function renames, aliases, multiple functions instead of optional params, cleaner catalog definitions, not throwing an error when the [var,ver,time] value is unknown.\nWhat is left: deal with timezones, improve documentation.\n\n\nBest regards, Andrey Borodin.\n\n[0] https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis#name-example-of-a-uuidv1-value",
"msg_date": "Thu, 18 Jan 2024 18:17:54 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi,\n\n> Another question: how did you choose between using TimestampTz and\n> Timestamp types? I realize that internally it's all the same. Maybe\n> Timestamp will be slightly better since the way it is displayed\n> doesn't depend on the session settings. Many people I talked to find\n> this part of TimestampTz confusing.\n>\n> timstamptz internally always store UTC.\n> I believe that in SQL, when operating with time in UTC, you should always use timestamptz.\n> timestamp is theoretically the same thing. But internally it does not convert time to UTC and will lead to incorrect use.\n\nNo.\n\nTimestamp and TimestampTz are absolutely the same thing. The only\ndifference is how they are shown to the user. TimestampTz uses session\ncontext in order to be displayed in the TZ chosen by the user. Thus\ntypically it is somewhat more confusing to the users and thus I asked\nwhether there was a good reason to choose TimestampTz over Timestamp.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 18 Jan 2024 17:20:49 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 18 Jan 2024, at 19:20, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> \n> Timestamp and TimestampTz are absolutely the same thing.\nMy question is not about Postgres data types. I'm asking about examples in the standard.\n\nThere's an example 017F22E2-79B0-7CC3-98C4-DC0C0C07398F. It is expected to be generated on \"Tuesday, February 22, 2022 2:22:22.00 PM GMT-05:00\".\nIt's explained to be 164555774200000ns after 1582-10-15 00:00:00 UTC.\n\nBut 164555774200000ns after 1582-10-15 00:00:00 UTC was 2022-02-22 19:22:22 UTC. And that was 2022-02-23 00:22:22 in UTC-05.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 18 Jan 2024 20:39:43 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi Andrey,\n\n> > Timestamp and TimestampTz are absolutely the same thing.\n> My question is not about Postgres data types. I'm asking about examples in the standard.\n>\n> There's an example 017F22E2-79B0-7CC3-98C4-DC0C0C07398F. It is expected to be generated on \"Tuesday, February 22, 2022 2:22:22.00 PM GMT-05:00\".\n> It's exaplained to be 164555774200000ns after 1582-10-15 00:00:00 UTC.\n>\n> But 164555774200000ns after 1582-10-15 00:00:00 UTC was 2022-02-22 19:22:22 UTC. And that was 2022-02-23 00:22:22 in UTC-05.\n\nNot 100% sure which text you are referring to exactly, but I'm\nguessing it's section B.2 of [1]\n\n\"\"\"\nThis example UUIDv7 test vector utilizes a well-known 32 bit Unix\nepoch with additional millisecond precision to fill the first 48 bits\n[...]\nThe timestamp is Tuesday, February 22, 2022 2:22:22.00 PM GMT-05:00\nrepresented as 0x17F22E279B0 or 1645557742000\n\"\"\"\n\nIf this is the case, I think the example is indeed wrong:\n\n```\n=# select extract(epoch from 'Tuesday, February 22, 2022 2:22:22.00 PM\nGMT-05:00' :: timestamptz)*1000;\n ?column?\n----------------------\n 1645521742000.000000\n(1 row)\n```\n\nAnd the difference between the value in the text and the actual value\nis 10 hours as you pointed out.\n\nAlso you named the date 1582-10-15 00:00:00 UTC. Maybe you actually\nmeant 1970-01-01 00:00:00 UTC?\n\n[1]: https://www.ietf.org/archive/id/draft-peabody-dispatch-new-uuid-format-04.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 18 Jan 2024 19:21:52 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi Andrey,\n\nAleksander Alekseev wrote: \"If this is the case, I think the example is indeed wrong\". \n\nThis is one of the reasons why I was categorically against any examples of implementation in the new RFC. The examples have been very poorly studied and discussed, and therefore it is better not to use them at all. But the text of the RFC itself clearly refers to UTC, and not at all about local time: \"UUID version 7 features a time-ordered value field derived from the widely implemented and well known Unix Epoch timestamp source, the number of milliseconds since midnight 1 Jan 1970 UTC, leap seconds excluded\". The main reason for using UTC is so that UUIDv7's, generated approximately simultaneously in different time zones, are correctly ordered in time when they get into one database.\n\n\nSergey Prokhorenko\nsergeyprokhorenko@yahoo.com.au \n\n On Thursday, 18 January 2024 at 07:22:05 pm GMT+3, Aleksander Alekseev <aleksander@timescale.com> wrote: \n \n Hi Andrey,\n\n> > Timestamp and TimestampTz are absolutely the same thing.\n> My question is not about Postgres data types. I'm asking about examples in the standard.\n>\n> There's an example 017F22E2-79B0-7CC3-98C4-DC0C0C07398F. It is expected to be generated on \"Tuesday, February 22, 2022 2:22:22.00 PM GMT-05:00\".\n> It's exaplained to be 164555774200000ns after 1582-10-15 00:00:00 UTC.\n>\n> But 164555774200000ns after 1582-10-15 00:00:00 UTC was 2022-02-22 19:22:22 UTC. 
And that was 2022-02-23 00:22:22 in UTC-05.\n\nNot 100% sure which text you are referring to exactly, but I'm\nguessing it's section B.2 of [1]\n\n\"\"\"\nThis example UUIDv7 test vector utilizes a well-known 32 bit Unix\nepoch with additional millisecond precision to fill the first 48 bits\n[...]\nThe timestamp is Tuesday, February 22, 2022 2:22:22.00 PM GMT-05:00\nrepresented as 0x17F22E279B0 or 1645557742000\n\"\"\"\n\nIf this is the case, I think the example is indeed wrong:\n\n```\n=# select extract(epoch from 'Tuesday, February 22, 2022 2:22:22.00 PM\nGMT-05:00' :: timestamptz)*1000;\n ?column?\n----------------------\n 1645521742000.000000\n(1 row)\n```\n\nAnd the difference between the value in the text and the actual value\nis 10 hours as you pointed out.\n\nAlso you named the date 1582-10-15 00:00:00 UTC. Maybe you actually\nmeant 1970-01-01 00:00:00 UTC?\n\n[1]: https://www.ietf.org/archive/id/draft-peabody-dispatch-new-uuid-format-04.html\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 18 Jan 2024 18:28:10 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 18 Jan 2024, at 20:39, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> But 164555774200000ns after 1582-10-15 00:00:00 UTC was 2022-02-22 19:22:22 UTC. And that was 2022-02-23 00:22:22 in UTC-05.\n\n\n'2022-02-22 19:22:22 UTC' is exactly that moment which was encoded into example UUIDs. It's not '2022-02-23 00:22:22 in UTC-05' as I thought.\nI got confused by \"at timezone\" changes which in fact removes timezone information. And that's per SQL standard...\n\nNow I'm completely lost in time... I've set local time to NY (UTC-5).\n\npostgres=# select TIMESTAMP WITH TIME ZONE '2022-02-22 14:22:22-05' - TIMESTAMP WITH TIME ZONE 'Tuesday, February 22, 2022 2:22:22.00 PM GMT-05:00';\n ?column? \n----------\n 10:00:00\n(1 row)\n\npostgres=# select TIMESTAMP WITH TIME ZONE 'Tuesday, February 22, 2022 2:22:22.00 PM GMT-05:00';\n timestamptz \n------------------------\n 2022-02-22 04:22:22-05\n(1 row)\n\nI cannot wrap my mind around it... Any pointers would be appreciated.\nI'm certain that code extracted UTC time correctly, I just want a reliable test that verifies timestamp constant (+ I understand what is going on).\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 18 Jan 2024 13:31:10 -0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi Andrey,\n\nYou'd better generate a test UUIDv7 for midnight 1 Jan 1970 UTC. In this case, the timestamp in UUIDv7 according to the new RFC must be filled with zeros. By extracting the timestamp from this test UUIDv7, you should get exactly midnight 1 Jan 1970 UTC.\nI also recommend this article: https://habr.com/ru/articles/772954/\n\n\nSergey Prokhorenko\nsergeyprokhorenko@yahoo.com.au \n\n On Thursday, 18 January 2024 at 09:31:16 pm GMT+3, Andrey Borodin <x4mmm@yandex-team.ru> wrote: \n \n \n\n> On 18 Jan 2024, at 20:39, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> But 164555774200000ns after 1582-10-15 00:00:00 UTC was 2022-02-22 19:22:22 UTC. And that was 2022-02-23 00:22:22 in UTC-05.\n\n\n'2022-02-22 19:22:22 UTC' is exactly that moment which was encoded into example UUIDs. It's not '2022-02-23 00:22:22 in UTC-05' as I thought.\nI got confused by \"at timezone\" changes which in fact removes timezone information. And that's per SQL standard...\n\nNow I'm completely lost in time... I've set local time to NY (UTC-5).\n\npostgres=# select TIMESTAMP WITH TIME ZONE '2022-02-22 14:22:22-05' - TIMESTAMP WITH TIME ZONE 'Tuesday, February 22, 2022 2:22:22.00 PM GMT-05:00';\n ?column? \n----------\n 10:00:00\n(1 row)\n\npostgres=# select TIMESTAMP WITH TIME ZONE 'Tuesday, February 22, 2022 2:22:22.00 PM GMT-05:00';\n timestamptz \n------------------------\n 2022-02-22 04:22:22-05\n(1 row)\n\nI cannot wrap my mind around it... Any pointers would be appreciated.\nI'm certain that code extracted UTC time correctly, I just want a reliable test that verifies timestamp constant (+ I understand what is going on).\n\n\nBest regards, Andrey Borodin. \nHi Andrey,You'd better generate a test UUIDv7 for midnight 1 Jan 1970 UTC. In this case, the timestamp in UUIDv7 according to the new RFC must be filled with zeros. 
By extracting the timestamp from this test UUIDv7, you should get exactly midnight 1 Jan 1970 UTC.I also recommend this article: https://habr.com/ru/articles/772954/Sergey Prokhorenkosergeyprokhorenko@yahoo.com.au\n\n\n\n\n On Thursday, 18 January 2024 at 09:31:16 pm GMT+3, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n \n\n\n> On 18 Jan 2024, at 20:39, Andrey Borodin <x4mmm@yandex-team.ru> wrote:> > But 164555774200000ns after 1582-10-15 00:00:00 UTC was 2022-02-22 19:22:22 UTC. And that was 2022-02-23 00:22:22 in UTC-05.'2022-02-22 19:22:22 UTC' is exactly that moment which was encoded into example UUIDs. It's not '2022-02-23 00:22:22 in UTC-05' as I thought.I got confused by \"at timezone\" changes which in fact removes timezone information. And that's per SQL standard...Now I'm completely lost in time... I've set local time to NY (UTC-5).postgres=# select TIMESTAMP WITH TIME ZONE '2022-02-22 14:22:22-05' - TIMESTAMP WITH TIME ZONE 'Tuesday, February 22, 2022 2:22:22.00 PM GMT-05:00'; ?column? ---------- 10:00:00(1 row)postgres=# select TIMESTAMP WITH TIME ZONE 'Tuesday, February 22, 2022 2:22:22.00 PM GMT-05:00'; timestamptz ------------------------ 2022-02-22 04:22:22-05(1 row)I cannot wrap my mind around it... Any pointers would be appreciated.I'm certain that code extracted UTC time correctly, I just want a reliable test that verifies timestamp constant (+ I understand what is going on).Best regards, Andrey Borodin.",
"msg_date": "Thu, 18 Jan 2024 18:59:38 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Using localtime would be absurd. Especially since time goes back during \nsummer time change.\nI believe our implementation should use UTC. No one forbids us from \nassuming that our local time for generating uuid is UTC.\n\nAndrey Borodin wrote on 1/18/2024 2:17 PM:\n>\n>> On 17 Jan 2024, at 02:19, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> I want to ask Kyzer or Brad, I hope they will see this message. I'm working on the patch for time extraction for v1, v6 and v7.\n>\n> Do I understand correctly, that UUIDs contain local time, not UTC time? For examples in [0] I see that \"A.6. Example of a UUIDv7 Value\" I see that February 22, 2022 2:22:22.00 PM GMT-05:00 results in unix_ts_ms = 0x017F22E279B0, which is not UTC, but local time.\n> Is it intentional? Section \"5.1. UUID Version 1\" states otherwise.\n>\n> If so, I should swap signatures of functions from TimestampTz to Timestamp.\n> I'm hard-coding examples from this standard to tests, so I want to be precise...\n>\n> If I follow the standard I see this in tests:\n> +-- extract UUID v1, v6 and v7 timestamp\n> +SELECT uuid_extract_time('C232AB00-9414-11EC-B3C8-9F6BDECED846') at time zone 'GMT-05';\n> + timezone\n> +--------------------------\n> + Wed Feb 23 00:22:22 2022\n> +(1 row)\n>\n> Current patch version attached. I've addressed all other requests: function renames, aliases, multiple functions instead of optional params, cleaner catalog definitions, not throwing error when [var,ver,time] value is unknown.\n> What is left: deal with timezones, improve documentation.\n>\n>\n> Best regards, Andrey Borodin.\n>\n> [0] https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis#name-example-of-a-uuidv1-value\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66\n\n\n\nUsing localtime would be absurd. Especially \nsince time goes back during summer time change.\n\nI believe our implementation should use UTC. 
No one forbids us from \nassuming that our local time for generating uuid is UTC.\n\nAndrey Borodin wrote on 1/18/2024 2:17 PM:\n\n\n\n\nOn 17 Jan 2024, at 02:19, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n\n\nI want to ask Kyzer or Brad, I hope they will see this message. I'm working on the patch for time extraction for v1, v6 and v7.\n\nDo I understand correctly, that UUIDs contain local time, not UTC time? For examples in [0] I see that \"A.6. Example of a UUIDv7 Value\" I see that February 22, 2022 2:22:22.00 PM GMT-05:00 results in unix_ts_ms = 0x017F22E279B0, which is not UTC, but local time.\nIs it intentional? Section \"5.1. UUID Version 1\" states otherwise.\n\nIf so, I should swap signatures of functions from TimestampTz to Timestamp.\nI'm hard-coding examples from this standard to tests, so I want to be precise...\n\nIf I follow the standard I see this in tests:\n+-- extract UUID v1, v6 and v7 timestamp\n+SELECT uuid_extract_time('C232AB00-9414-11EC-B3C8-9F6BDECED846') at time zone 'GMT-05';\n+ timezone \n+--------------------------\n+ Wed Feb 23 00:22:22 2022\n+(1 row)\n\nCurrent patch version attached. I've addressed all other requests: function renames, aliases, multiple functions instead of optional params, cleaner catalog definitions, not throwing error when [var,ver,time] value is unknown.\nWhat is left: deal with timezones, improve documentation.\n\n\nBest regards, Andrey Borodin.\n\n[0] https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis#name-example-of-a-uuidv1-value\n\n\n\n-- Przemysław\n Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Thu, 18 Jan 2024 21:26:55 +0100",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Aleksander Alekseev wrote on 1/18/2024 3:20 PM:\n> Hi,\n>\n>> Another question: how did you choose between using TimestampTz and\n>> Timestamp types? I realize that internally it's all the same. Maybe\n>> Timestamp will be slightly better since the way it is displayed\n>> doesn't depend on the session settings. Many people I talked to find\n>> this part of TimestampTz confusing.\n>>\n>> timstamptz internally always store UTC.\n>> I believe that in SQL, when operating with time in UTC, you should always use timestamptz.\n>> timestamp is theoretically the same thing. But internally it does not convert time to UTC and will lead to incorrect use.\n> No.\n>\n> Timestamp and TimestampTz are absolutely the same thing. The only\n> difference is how they are shown to the user. TimestampTz uses session\n> context in order to be displayed in the TZ chosen by the user. Thus\n> typically it is somewhat more confusing to the users and thus I asked\n> whether there was a good reason to choose TimestampTz over Timestamp.\n>\n\nTheoretically, you're right. But look at this example:\n\nSET timezone TO 'Europe/Warsaw';\nSELECT extract(epoch from '2024-01-18 9:27:30'::timestamp), \nextract(epoch from '2024-01-18 9:27:30'::timestamptz);\n\n date_part | date_part\n------------+------------\n 1705570050 | 1705566450\n(1 row)\n\nIn my opinion, timestamptz gives greater guarantees that the time \ninternally is in UTC and the user gets the time in his/her time zone.\n\nIn the case of timestamp, it is never certain whether it keeps time in \nUTC or in the local zone.\n\nIn the case of argument's type, there would be no problem because we \ncould create two functions.\nOf course timestamp would be treated the same as timestamptz.\nBut here we have a problem with the function return type, which can only \nbe one. 
And since the time returned is in UTC, it should be timestamptz.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66\n\n\n\nAleksander Alekseev wrote on 1/18/2024\n 3:20 PM:\n\nHi,\n\n\nAnother question: how did you choose between using TimestampTz and\nTimestamp types? I realize that internally it's all the same. Maybe\nTimestamp will be slightly better since the way it is displayed\ndoesn't depend on the session settings. Many people I talked to find\nthis part of TimestampTz confusing.\n\ntimstamptz internally always store UTC.\nI believe that in SQL, when operating with time in UTC, you should always use timestamptz.\ntimestamp is theoretically the same thing. But internally it does not convert time to UTC and will lead to incorrect use.\n\n\nNo.\n\nTimestamp and TimestampTz are absolutely the same thing. The only\ndifference is how they are shown to the user. TimestampTz uses session\ncontext in order to be displayed in the TZ chosen by the user. Thus\ntypically it is somewhat more confusing to the users and thus I asked\nwhether there was a good reason to choose TimestampTz over Timestamp.\n\n\n\n\nTheoretically, you're right. But look at this example:\n\nSET timezone TO 'Europe/Warsaw';\nSELECT extract(epoch from \n'2024-01-18 9:27:30'::timestamp), extract(epoch from '2024-01-18 \n9:27:30'::timestamptz);\n\n date_part | date_part\n------------+------------\n 1705570050 | 1705566450\n(1 row)\n\nIn my opinion, timestamptz gives greater guarantees that the time \ninternally is in UTC and the user gets the time in his/her time zone.\n\nIn the case of timestamp, it is never certain whether it keeps time in \nUTC or in the local zone.\n\nIn the case of argument's type, there would be no problem because we \ncould create two functions.\nOf course timestamp would be treated the same as timestamptz.\nBut here we have a problem with the function return type, which can only\n be one. 
And since the time returned is in UTC, it should be \ntimestamptz.\n\n-- Przemysław\n Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Thu, 18 Jan 2024 21:39:41 +0100",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "We are not allowed to consider any time other than UTC.\n\nYou need to write to the authors of the standard. I suppose this is a \nmistake.\n\nI know from experience that errors in such standards most often appear \nin examples.\nNobody detects them at first.\nEveryone reads and checks ideas, not calculations.\nThen developers during implementation tears out their hair.\n\nAndrey Borodin wrote on 1/18/2024 4:39 PM:\n>\n>> On 18 Jan 2024, at 19:20, Aleksander Alekseev <aleksander@timescale.com> wrote:\n>>\n>> Timestamp and TimestampTz are absolutely the same thing.\n> My question is not about Postgres data types. I'm asking about examples in the standard.\n>\n> There's an example 017F22E2-79B0-7CC3-98C4-DC0C0C07398F. It is expected to be generated on \"Tuesday, February 22, 2022 2:22:22.00 PM GMT-05:00\".\n> It's exaplained to be 164555774200000ns after 1582-10-15 00:00:00 UTC.\n>\n> But 164555774200000ns after 1582-10-15 00:00:00 UTC was 2022-02-22 19:22:22 UTC. And that was 2022-02-23 00:22:22 in UTC-05.\n>\n>\n> Best regards, Andrey Borodin.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66\n\n\n\nWe are not allowed to consider any time \nother than UTC.\n\n\nYou need to write to the authors of the standard. I suppose this is a \nmistake.\n\n\nI know from experience that errors in such standards most often appear \nin examples.\n\nNobody detects them at first. \n\nEveryone reads and checks ideas, not calculations.\n\nThen developers during implementation tears out their hair.\n\nAndrey Borodin wrote on 1/18/2024 4:39 PM:\n\n\n\n\nOn 18 Jan 2024, at 19:20, Aleksander Alekseev <aleksander@timescale.com> wrote:\n\nTimestamp and TimestampTz are absolutely the same thing.\n\nMy question is not about Postgres data types. I'm asking about examples in the standard.\n\nThere's an example 017F22E2-79B0-7CC3-98C4-DC0C0C07398F. 
It is expected to be generated on \"Tuesday, February 22, 2022 2:22:22.00 PM GMT-05:00\".\nIt's exaplained to be 164555774200000ns after 1582-10-15 00:00:00 UTC.\n\nBut 164555774200000ns after 1582-10-15 00:00:00 UTC was 2022-02-22 19:22:22 UTC. And that was 2022-02-23 00:22:22 in UTC-05.\n\n\nBest regards, Andrey Borodin.\n\n\n-- Przemysław\n Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Thu, 18 Jan 2024 21:49:26 +0100",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 5:18 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n> Current patch version attached. I've addressed all other requests:\n> function renames, aliases, multiple functions instead of optional params,\n> cleaner catalog definitions, not throwing error when [var,ver,time] value\n> is unknown.\n> What is left: deal with timezones, improve documentation.\n>\n\nI've done a test of the v10 patch, and ran into an interesting behavior\nwhen passing in a timestamp to the function (which, as a side note, is\nactually very useful to have as a feature, to support creating time-based\nrange partitions on UUIDv7 fields):\n\npostgres=# SELECT uuid_extract_time(uuidv7());\n uuid_extract_time\n---------------------------\n 2024-01-18 18:49:00.01-08\n(1 row)\n\npostgres=# SELECT uuid_extract_time(uuidv7('2024-04-01'));\n uuid_extract_time\n------------------------\n 2024-04-01 00:00:00-07\n(1 row)\n\npostgres=# SELECT uuid_extract_time(uuidv7());\n uuid_extract_time\n------------------------\n 2024-04-01 00:00:00-07\n(1 row)\n\nNote how calling the uuidv7 function again after having called it with a\nfixed future timestamp, returns the future timestamp, even though it should\nreturn the current time.\n\nI believe this is caused by incorrectly re-using the cached\nprevious_timestamp. In the second call here (with a fixed future\ntimestamp), we end up setting ts and tms to 2024-04-01, with\nincrement_counter = false, which leads us to set previous_timestamp to the\npassed in timestamp (else branch of the second if in uuidv7). 
When we then\ncall the function again without an argument, we end up getting a new\ntimestamp from gettimeofday, but because we try to detect backwards leaps,\nwe set increment_counter to true, and thus end up reusing the previous\n(future) timestamp here:\n\n/* protection from leap backward */\ntms = previous_timestamp;\n\nNot sure how to fix this, but clearly something is amiss here.\n\nThanks,\nLukas\n\n-- \nLukas Fittl\n\nOn Thu, Jan 18, 2024 at 5:18 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:Current patch version attached. I've addressed all other requests: function renames, aliases, multiple functions instead of optional params, cleaner catalog definitions, not throwing error when [var,ver,time] value is unknown.\nWhat is left: deal with timezones, improve documentation.I've done a test of the v10 patch, and ran into an interesting behavior when passing in a timestamp to the function (which, as a side note, is actually very useful to have as a feature, to support creating time-based range partitions on UUIDv7 fields):postgres=# SELECT uuid_extract_time(uuidv7()); uuid_extract_time --------------------------- 2024-01-18 18:49:00.01-08(1 row)postgres=# SELECT uuid_extract_time(uuidv7('2024-04-01')); uuid_extract_time ------------------------ 2024-04-01 00:00:00-07(1 row)postgres=# SELECT uuid_extract_time(uuidv7()); uuid_extract_time ------------------------ 2024-04-01 00:00:00-07(1 row)Note how calling the uuidv7 function again after having called it with a fixed future timestamp, returns the future timestamp, even though it should return the current time.I believe this is caused by incorrectly re-using the cached previous_timestamp. In the second call here (with a fixed future timestamp), we end up setting ts and tms to 2024-04-01, with increment_counter = false, which leads us to set previous_timestamp to the passed in timestamp (else branch of the second if in uuidv7). 
When we then call the function again without an argument, we end up getting a new timestamp from gettimeofday, but because we try to detect backwards leaps, we set increment_counter to true, and thus end up reusing the previous (future) timestamp here:/* protection from leap backward */tms = previous_timestamp;Not sure how to fix this, but clearly something is amiss here.Thanks,Lukas-- Lukas Fittl",
"msg_date": "Thu, 18 Jan 2024 18:58:40 -0800",
"msg_from": "Lukas Fittl <lukas@fittl.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 11:31 AM Andrey Borodin <x4mmm@yandex-team.ru>\nwrote:\n\n>\n> Now I'm completely lost in time... I've set local time to NY (UTC-5).\n>\n> postgres=# select TIMESTAMP WITH TIME ZONE '2022-02-22 14:22:22-05' -\n> TIMESTAMP WITH TIME ZONE 'Tuesday, February 22, 2022 2:22:22.00 PM\n> GMT-05:00';\n> ?column?\n> ----------\n> 10:00:00\n> (1 row)\n>\n>\nYou are mixing POSIX and ISO-8601 conventions and, as noted in our\nappendix, they disagree on the direction that is positive.\n\nhttps://www.postgresql.org/docs/current/datetime-posix-timezone-specs.html\n\nThe offset fields specify the hours, and optionally minutes and seconds,\ndifference from UTC. They have the format hh[:mm[:ss]] optionally with a\nleading sign (+ or -). The positive sign is used for zones west of\nGreenwich. (Note that this is the opposite of the ISO-8601 sign convention\nused elsewhere in PostgreSQL.)\n\nDavid J.\n\nOn Thu, Jan 18, 2024 at 11:31 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\nNow I'm completely lost in time... I've set local time to NY (UTC-5).\n\npostgres=# select TIMESTAMP WITH TIME ZONE '2022-02-22 14:22:22-05' - TIMESTAMP WITH TIME ZONE 'Tuesday, February 22, 2022 2:22:22.00 PM GMT-05:00';\n ?column? \n----------\n 10:00:00\n(1 row)You are mixing POSIX and ISO-8601 conventions and, as noted in our appendix, they disagree on the direction that is positive.https://www.postgresql.org/docs/current/datetime-posix-timezone-specs.html The offset fields specify the hours, and optionally minutes and seconds, difference from UTC. They have the format hh[:mm[:ss]] optionally with a leading sign (+ or -). The positive sign is used for zones west of Greenwich. (Note that this is the opposite of the ISO-8601 sign convention used elsewhere in PostgreSQL.)David J.",
"msg_date": "Thu, 18 Jan 2024 20:24:23 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 19 Jan 2024, at 08:24, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> \n> \n> You are mixing POSIX and ISO-8601 conventions and, as noted in our appendix, they disagree on the direction that is positive.\n\nThanks! Now everything seems on its place.\n\nI want to include in the patch following tests:\n-- extract UUID v1, v6 and v7 timestamp\nSELECT uuid_extract_time('C232AB00-9414-11EC-B3C8-9F6BDECED846') = 'Tuesday, February 22, 2022 2:22:22.00 PM GMT+05:00';\nSELECT uuid_extract_time('1EC9414C-232A-6B00-B3C8-9F6BDECED846') = 'Tuesday, February 22, 2022 2:22:22.00 PM GMT+05:00';\nSELECT uuid_extract_time('017F22E2-79B0-7CC3-98C4-DC0C0C07398F') = 'Tuesday, February 22, 2022 2:22:22.00 PM GMT+05:00';\n\nHow do you think, will it be stable all across buildfarm? Or should we change anything to avoid false positives inferred from different timestamp parsing?\n\n\n> On 19 Jan 2024, at 07:58, Lukas Fittl <lukas@fittl.com> wrote:\n> \n> Note how calling the uuidv7 function again after having called it with a fixed future timestamp, returns the future timestamp, even though it should return the current time.\n\nThanks for the review.\nWell, that was intentional. But now I see it's kind of confusing behaviour. I've changed it to more expected version.\n\nAlso, I've added some documentation on all functions.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 19 Jan 2024 13:25:51 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi,\n\n> No.\n>\n> Timestamp and TimestampTz are absolutely the same thing. The only\n> difference is how they are shown to the user. TimestampTz uses session\n> context in order to be displayed in the TZ chosen by the user. Thus\n> typically it is somewhat more confusing to the users and thus I asked\n> whether there was a good reason to choose TimestampTz over Timestamp.\n>\n>\n> Theoretically, you're right. But look at this example:\n>\n> SET timezone TO 'Europe/Warsaw';\n> SELECT extract(epoch from '2024-01-18 9:27:30'::timestamp), extract(epoch from '2024-01-18 9:27:30'::timestamptz);\n>\n> date_part | date_part\n> ------------+------------\n> 1705570050 | 1705566450\n> (1 row)\n>\n> In my opinion, timestamptz gives greater guarantees that the time internally is in UTC and the user gets the time in his/her time zone.\n\nI believe you didn't notice, but this example just proves my point.\n\nIn this case you have two timestamps that are different _internally_,\nbut the way they are _shown_ is the same because the first one is in\nUTC and the second one in your local session timezone, Europe/Warsaw.\nextract(epoch ...) extract UNIX epoch, i.e. relies on the _internal_\nrepresentation. This is why you got different results.\n\nThis demonstrates that TimestampTz is a permanent source of confusion\nfor the users and the reason why personally I would prefer if UUIDv7\nalways used Timestamp (no Tz). TimestampTz can be converted to\nTimestampTz by users who need them and have experience using them.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 19 Jan 2024 14:07:31 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 19 Jan 2024, at 13:25, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> Also, I've added some documentation on all functions.\n\nHere's v12. Changes:\n1. Documentation improvements\n2. Code comments\n3. Better commit message and reviews list\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 19 Jan 2024 23:07:35 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Fri, Jan 19, 2024 at 10:07 AM Andrey Borodin <x4mmm@yandex-team.ru>\nwrote:\n\n>\n>\n> > On 19 Jan 2024, at 13:25, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> >\n> > Also, I've added some documentation on all functions.\n>\n> Here's v12. Changes:\n> 1. Documentation improvements\n> 2. Code comments\n> 3. Better commit message and reviews list\n>\n\nThank you, Andrey! I have just checked v12 – cleanly applied to HEAD, and\nfunctions work well. I especially like that fact that we keep\nuuid_extract_time(..) here – this is a great thing to have for time-based\npartitioning, and in many cases we will be able to decide not to have a\ncreation column timestamp (e.g., \"created_at\") at all, saving 8 bytes.\n\nThe docs and comments look great too.\n\nOverall, the patch looks mature enough. It would be great to have it in\npg17. Yes, the RFC is not fully finalized yet, but it's very close. And\nmany libraries are already including implementation of UUIDv7 – here are\nsome examples:\n\n- https://www.npmjs.com/package/uuidv7\n- https://crates.io/crates/uuidv7\n- https://github.com/google/uuid/pull/139\n\nNik\n\nOn Fri, Jan 19, 2024 at 10:07 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n> On 19 Jan 2024, at 13:25, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> Also, I've added some documentation on all functions.\n\nHere's v12. Changes:\n1. Documentation improvements\n2. Code comments\n3. Better commit message and reviews listThank you, Andrey! I have just checked v12 – cleanly applied to HEAD, and functions work well. I especially like that fact that we keep uuid_extract_time(..) here – this is a great thing to have for time-based partitioning, and in many cases we will be able to decide not to have a creation column timestamp (e.g., \"created_at\") at all, saving 8 bytes.The docs and comments look great too.Overall, the patch looks mature enough. It would be great to have it in pg17. 
Yes, the RFC is not fully finalized yet, but it's very close. And many libraries are already including implementation of UUIDv7 – here are some examples:- https://www.npmjs.com/package/uuidv7- https://crates.io/crates/uuidv7- https://github.com/google/uuid/pull/139Nik",
"msg_date": "Sun, 21 Jan 2024 20:22:19 -0800",
"msg_from": "Nikolay Samokhvalov <nik@postgres.ai>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nManually tested uuidv7(), uuid_extract_time() – they work as expected. The basic docs provided look clear.\r\n\r\nI haven't checked the tests though and possible edge cases, so leaving it as \"needs review\" waiting for more reviewers",
"msg_date": "Mon, 22 Jan 2024 04:24:31 +0000",
"msg_from": "Nikolay Samokhvalov <nikolay@samokhvalov.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi,\n\n> But now (after big timeseries project with multiple time zones and DST problems) I think differently.\n> Even though timestamp and timestamptz are practically the same, timestamptz should be used to store the time in UTC.\n> Choosing timestamp is more likely to lead to problems and misunderstandings than timestamptz.\n\nAs somebody who contributed TZ support to TimescaleDB I'm more or less\naware about the pros and cons of Timestamp and TimestampTz :)\nEngineering is all about compromises. I can imagine a project where it\nmakes sense to use only TimestampTz for the entire database, and the\nopposite - when it's better to use only UTC and Timestamp. In this\nparticular case I was merely concerned that the particular choice\ncould be confusing for the users but I think I changed my mind by now,\nsee below.\n\n>> Here's v12. Changes:\n>> 1. Documentation improvements\n>> 2. Code comments\n>> 3. Better commit message and reviews list\n>\n>\n> Thank you, Andrey! I have just checked v12 – cleanly applied to HEAD, and functions work well. I especially like that fact that we keep uuid_extract_time(..) here – this is a great thing to have for time-based partitioning, and in many cases we will be able to decide not to have a creation column timestamp (e.g., \"created_at\") at all, saving 8 bytes.\n>\n> The docs and comments look great too.\n>\n> Overall, the patch looks mature enough. It would be great to have it in pg17. Yes, the RFC is not fully finalized yet, but it's very close. 
And many libraries are already including implementation of UUIDv7 – here are some examples:\n>\n> - https://www.npmjs.com/package/uuidv7\n> - https://crates.io/crates/uuidv7\n> - https://github.com/google/uuid/pull/139\n\nThanks!\n\nAfter playing with v12 I'm inclined to agree that it's RfC.\n\nI only have a couple of silly nitpicks:\n\n- It could make sense to decompose the C implementation of uuidv7() in\ntwo functions, for readability.\n- It could make sense to get rid of curly braces in SQL tests when\ncalling uuid_extract_ver() and uuid_extract_ver(), for consistency.\n\nI'm not going to insist on these changes though and prefer leaving it\nto the author and the committer to decide.\n\nAlso I take back what I said above about using Timestamp instead of\nTimestampTz. I forgot that Timestamps are implicitly casted to\nTimestampTz's, so users preferring Timestamps can do this:\n\n```\n=# select uuidv7('2024-01-22 12:34:56' :: timestamp);\n uuidv7\n--------------------------------------\n 018d3085-de00-77c1-9e7b-7b04ddb9ebb9\n```\n\nCfbot also seems to be happy with the patch so I'm changing the CF\nentry status to RfC.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 22 Jan 2024 18:02:34 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi,\n\n> Cfbot also seems to be happy with the patch so I'm changing the CF\n> entry status to RfC.\n\nI've found a bug:\n\n```\n=# select now() - interval '5000 years';\n ?column?\n----------------------------------------\n 2977-01-24 15:29:01.779462+02:30:17 BC\n\nTime: 0.957 ms\n\n=# select uuidv7(now() - interval '5000 years');\n uuidv7\n--------------------------------------\n 720c1868-0764-7677-99cd-265b84ea08b9\n\n=# select uuid_extract_time('720c1868-0764-7677-99cd-265b84ea08b9');\n uuid_extract_time\n----------------------------\n 5943-08-26 21:30:44.836+03\n```\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 24 Jan 2024 15:31:28 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 24 Jan 2024, at 17:31, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> \n> Hi,\n> \n>> Cfbot also seems to be happy with the patch so I'm changing the CF\n>> entry status to RfC.\n> \n> I've found a bug:\n> \n> ```\n> =# select now() - interval '5000 years';\n> ?column?\n> ----------------------------------------\n> 2977-01-24 15:29:01.779462+02:30:17 BC\n> \n> Time: 0.957 ms\n> \n> =# select uuidv7(now() - interval '5000 years');\n> uuidv7\n> --------------------------------------\n> 720c1868-0764-7677-99cd-265b84ea08b9\n> \n> =# select uuid_extract_time('720c1868-0764-7677-99cd-265b84ea08b9');\n> uuid_extract_time\n> ----------------------------\n> 5943-08-26 21:30:44.836+03\n> ```\n\nUUIDv7 range does not correspond to timestamp range. But it’s purpose is not in storing timestamp, but in being unique identifier. So I don’t think it worth throwing an error when overflowing value is given. BTW if you will subtract some nanoseconds - you will not get back timestamp you put into UUID too.\nUUID does not store timpestamp, it only uses it to generate an identifier. Some value can be extracted back, but with limited precision, limited range and only if UUID was generated precisely by the specification in standard (and standard allows deviation! Most of implementation try to tradeoff something).\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 24 Jan 2024 17:40:36 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi,\n\n> UUIDv7 range does not correspond to timestamp range. But it’s purpose is not in storing timestamp, but in being unique identifier. So I don’t think it worth throwing an error when overflowing value is given. BTW if you will subtract some nanoseconds - you will not get back timestamp you put into UUID too.\n> UUID does not store timpestamp, it only uses it to generate an identifier. Some value can be extracted back, but with limited precision, limited range and only if UUID was generated precisely by the specification in standard (and standard allows deviation! Most of implementation try to tradeoff something).\n\nI don't claim that UUIDv7 purpose is storing timestamps, but I think\nthe invariant:\n\n```\nuuid_extract_time(uidv7(X)) == X\n```\n\nand (!) even more importantly:\n\n```\nif X > Y then uuidv7(X) > uuidv7(Y)\n```\n\n... should hold. Otherwise you can calculate crc64(X) or sha256(X)\ninternally in order to generate an unique ID and claim that it's fine.\n\nValues that violate named invariants should be rejected with an error.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 24 Jan 2024 16:02:13 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 24 Jan 2024, at 18:02, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> \n> Hi,\n> \n>> UUIDv7 range does not correspond to timestamp range. But it’s purpose is not in storing timestamp, but in being unique identifier. So I don’t think it worth throwing an error when overflowing value is given. BTW if you will subtract some nanoseconds - you will not get back timestamp you put into UUID too.\n>> UUID does not store timpestamp, it only uses it to generate an identifier. Some value can be extracted back, but with limited precision, limited range and only if UUID was generated precisely by the specification in standard (and standard allows deviation! Most of implementation try to tradeoff something).\n> \n> I don't claim that UUIDv7 purpose is storing timestamps, but I think\n> the invariant:\n> \n> ```\n> uuid_extract_time(uidv7(X)) == X\n> ```\n> \n> and (!) even more importantly:\n> \n> ```\n> if X > Y then uuidv7(X) > uuidv7(Y)\n> ```\n> \n> ... should hold.\nFunction to extract timestamp does not provide any guarantees at all. Standard states this, see Kyzer answers upthread.\nMoreover, standard urges against relying on that if uuidX was generated before uuidY, then uuidX<uuid. The standard is doing a lot to make this happen, but does not guaranty that.\nAll what is guaranteed is the uniqueness at certain conditions.\n\n> Otherwise you can calculate crc64(X) or sha256(X)\n> internally in order to generate an unique ID and claim that it's fine.\n> \n> Values that violate named invariants should be rejected with an error.\n\nThink about the value that you pass to uuid generation function as an entropy. It’s there to ensure uniqueness and promote ordering (but not guarantee).\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 24 Jan 2024 18:16:09 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi,\n\n> Values that violate named invariants should be rejected with an error.\n\nTo clarify, I don't think we should bother about the precision part.\n\"Equals\" in the example above means \"equal within UUIDv7 precision\",\nsame for \"more\" and \"less\". However, years 2977 BC and 5943 AC are\nclearly not equal, thus 2977 BC should be rejected as an invalid value\nfor UUIDv7.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 24 Jan 2024 16:16:23 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi,\n\n> Function to extract timestamp does not provide any guarantees at all. Standard states this, see Kyzer answers upthread.\n> Moreover, standard urges against relying on that if uuidX was generated before uuidY, then uuidX<uuid. The standard is doing a lot to make this happen, but does not guaranty that.\n> All what is guaranteed is the uniqueness at certain conditions.\n>\n> > Otherwise you can calculate crc64(X) or sha256(X)\n> > internally in order to generate an unique ID and claim that it's fine.\n> >\n> > Values that violate named invariants should be rejected with an error.\n>\n> Think about the value that you pass to uuid generation function as an entropy. It’s there to ensure uniqueness and promote ordering (but not guarantee).\n\nIf the standard doesn't guarantee something it doesn't mean it forbids\nus to give stronger guarantees. I'm convinced that these guarantees\nwill be useful in real-world applications, at least the ones acting\nexclusively within Postgres.\n\nThis being said, I understand your point of view too. Let's see what\nother people think.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 24 Jan 2024 16:29:50 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 24 Jan 2024, at 18:29, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> \n> Hi,\n> \n>> Function to extract timestamp does not provide any guarantees at all. Standard states this, see Kyzer answers upthread.\n>> Moreover, standard urges against relying on that if uuidX was generated before uuidY, then uuidX<uuid. The standard is doing a lot to make this happen, but does not guaranty that.\n>> All what is guaranteed is the uniqueness at certain conditions.\n>> \n>>> Otherwise you can calculate crc64(X) or sha256(X)\n>>> internally in order to generate an unique ID and claim that it's fine.\n>>> \n>>> Values that violate named invariants should be rejected with an error.\n>> \n>> Think about the value that you pass to uuid generation function as an entropy. It’s there to ensure uniqueness and promote ordering (but not guarantee).\n> \n> If the standard doesn't guarantee something it doesn't mean it forbids\n> us to give stronger guarantees.\nNo, the standard makes these guarantees impossible.\nIf we insist that uuid_extract_time(uuidv7(time))==time, we won't be able to generate uuidv7 most of the time. uuidv7(now()) will always ERROR-out.\nStandard implies more coarse-grained timestamp that we have.\n\nAlso, please not that uuidv7(time+1us) and uuidv7(time) will have the same internal timestamp, so despite time+1us > time, still second uuid will be greater.\n\nBoth invariants you proposed cannot be reasonably guaranteed. Upholding any of them greatly reduces usability of UUID v7.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 24 Jan 2024 20:29:36 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi,\n\n> Also, please not that uuidv7(time+1us) and uuidv7(time) will have the same internal timestamp, so despite time+1us > time, still second uuid will be greater.\n>\n> Both invariants you proposed cannot be reasonably guaranteed. Upholding any of them greatly reduces usability of UUID v7.\n\nAgain, personally I don't insist on the 1us precision [1]. Only the\nfact that timestamp from the far past generates UUID from the future\nbothers me.\n\n[1]: https://postgr.es/m/CAJ7c6TPCSprWwVNdOB%3D%3DpgKZPqO5q%3DHRgmU7zmYqz9Dz5ffVYw%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 24 Jan 2024 18:46:14 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 24 Jan 2024, at 20:46, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> \n> Only the\n> fact that timestamp from the far past generates UUID from the future\n> bothers me.\n\nPFA implementation of guard checks, but I'm afraid that this can cause failures in ID generation unexpected to the user...\nSee tests\n\n+-- errors in edge cases of UUID v7\n+SELECT 1 FROM uuidv7('1970-01-01 00:00:00+00'::timestamptz - interval '0ms');\n+SELECT uuidv7('1970-01-01 00:00:00+00'::timestamptz - interval '1ms'); -- ERROR expected\n+SELECT 1 FROM uuidv7(uuid_extract_time('FFFFFFFF-FFFF-7FFF-B000-000000000000'));\n+SELECT uuidv7(uuid_extract_time('FFFFFFFF-FFFF-7FFF-B000-000000000000')+'1ms'); -- ERROR expected\n\nRange is from 1970-01-01 00:00:00 to 10889-08-02 05:31:50.655. I'm not sure we should give this information in error message...\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 24 Jan 2024 21:54:37 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Is enough from 1970 ?\nHow about if user wants to have an UUID of his birth date ?\n\nregards\nMarcos\n\nEm qua., 24 de jan. de 2024 às 13:54, Andrey M. Borodin <\nx4mmm@yandex-team.ru> escreveu:\n\n>\n>\n> > On 24 Jan 2024, at 20:46, Aleksander Alekseev <aleksander@timescale.com>\n> wrote:\n> >\n> > Only the\n> > fact that timestamp from the far past generates UUID from the future\n> > bothers me.\n>\n> PFA implementation of guard checks, but I'm afraid that this can cause\n> failures in ID generation unexpected to the user...\n> See tests\n>\n> +-- errors in edge cases of UUID v7\n> +SELECT 1 FROM uuidv7('1970-01-01 00:00:00+00'::timestamptz - interval\n> '0ms');\n> +SELECT uuidv7('1970-01-01 00:00:00+00'::timestamptz - interval '1ms'); --\n> ERROR expected\n> +SELECT 1 FROM\n> uuidv7(uuid_extract_time('FFFFFFFF-FFFF-7FFF-B000-000000000000'));\n> +SELECT\n> uuidv7(uuid_extract_time('FFFFFFFF-FFFF-7FFF-B000-000000000000')+'1ms'); --\n> ERROR expected\n>\n> Range is from 1970-01-01 00:00:00 to 10889-08-02 05:31:50.655. I'm not\n> sure we should give this information in error message...\n> Thanks!\n>\n>\n> Best regards, Andrey Borodin.\n>\n\n\n\nIs enough from 1970 ?How about if user wants to have an UUID of his birth date ?regardsMarcosEm qua., 24 de jan. de 2024 às 13:54, Andrey M. 
Borodin <x4mmm@yandex-team.ru> escreveu:\n\n> On 24 Jan 2024, at 20:46, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> \n> Only the\n> fact that timestamp from the far past generates UUID from the future\n> bothers me.\n\nPFA implementation of guard checks, but I'm afraid that this can cause failures in ID generation unexpected to the user...\nSee tests\n\n+-- errors in edge cases of UUID v7\n+SELECT 1 FROM uuidv7('1970-01-01 00:00:00+00'::timestamptz - interval '0ms');\n+SELECT uuidv7('1970-01-01 00:00:00+00'::timestamptz - interval '1ms'); -- ERROR expected\n+SELECT 1 FROM uuidv7(uuid_extract_time('FFFFFFFF-FFFF-7FFF-B000-000000000000'));\n+SELECT uuidv7(uuid_extract_time('FFFFFFFF-FFFF-7FFF-B000-000000000000')+'1ms'); -- ERROR expected\n\nRange is from 1970-01-01 00:00:00 to 10889-08-02 05:31:50.655. I'm not sure we should give this information in error message...\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 24 Jan 2024 14:00:52 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 24 Jan 2024, at 22:00, Marcos Pegoraro <marcos@f10.com.br> wrote:\n> \n> Is enough from 1970 ?\nPer standard unix_ts_ms field is a number of milliseconds from UNIX start date 1970-01-01.\n\n> How about if user wants to have an UUID of his birth date ?\n\nI've claimed my\n0078c135-bd00-70b1-865a-63c3741922a5\n\nBut again, UUIDs are not designed to store timestamp. They are unique and v7 promote data locality via time-ordering.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 24 Jan 2024 22:51:49 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "I understand your point, but\n'2000-01-01' :: timestamp and '1900-01-01' :: timestamp are both valid\ntimestamps.\n\nSo looks strange if user can do\nselect uuidv7(TIMESTAMP '2000-01-01')\nbut cannot do\nselect uuidv7(TIMESTAMP '1900-01-01')\n\nRegards\nMarcos\n\n\nEm qua., 24 de jan. de 2024 às 14:51, Andrey Borodin <x4mmm@yandex-team.ru>\nescreveu:\n\n>\n>\n> > On 24 Jan 2024, at 22:00, Marcos Pegoraro <marcos@f10.com.br> wrote:\n> >\n> > Is enough from 1970 ?\n> Per standard unix_ts_ms field is a number of milliseconds from UNIX start\n> date 1970-01-01.\n>\n> > How about if user wants to have an UUID of his birth date ?\n>\n> I've claimed my\n> 0078c135-bd00-70b1-865a-63c3741922a5\n>\n> But again, UUIDs are not designed to store timestamp. They are unique and\n> v7 promote data locality via time-ordering.\n>\n>\n> Best regards, Andrey Borodin.\n\nI understand your point, but '2000-01-01' :: timestamp and '1900-01-01' :: timestamp are both valid timestamps.So looks strange if user can doselect uuidv7(TIMESTAMP '2000-01-01')but cannot doselect uuidv7(TIMESTAMP '1900-01-01')RegardsMarcosEm qua., 24 de jan. de 2024 às 14:51, Andrey Borodin <x4mmm@yandex-team.ru> escreveu:\n\n> On 24 Jan 2024, at 22:00, Marcos Pegoraro <marcos@f10.com.br> wrote:\n> \n> Is enough from 1970 ?\nPer standard unix_ts_ms field is a number of milliseconds from UNIX start date 1970-01-01.\n\n> How about if user wants to have an UUID of his birth date ?\n\nI've claimed my\n0078c135-bd00-70b1-865a-63c3741922a5\n\nBut again, UUIDs are not designed to store timestamp. They are unique and v7 promote data locality via time-ordering.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 24 Jan 2024 17:47:07 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Wed, 24 Jan 2024 at 21:47, Marcos Pegoraro <marcos@f10.com.br> wrote:\n>\n> I understand your point, but\n> '2000-01-01' :: timestamp and '1900-01-01' :: timestamp are both valid timestamps.\n>\n> So looks strange if user can do\n> select uuidv7(TIMESTAMP '2000-01-01')\n> but cannot do\n> select uuidv7(TIMESTAMP '1900-01-01')\n\n\n\nI think that would be okay honestly. I don't think there's any\nreasonable value for the uuid when a timestamp is given outside of the\ndate range that the uuid7 \"algorithm\" supports.\n\nSo +1 for erroring when you provide a timestamp outside of that range\n(either too far in the past or too far in the future).\n\n\n",
"msg_date": "Wed, 24 Jan 2024 22:15:43 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\"Other people\" think that extracting the timestamp from UUIDv7 in violation of the new RFC, and generating UUIDv7 from the timestamp were both terrible and poorly thought out ideas. The authors of the new RFC had very good reasons to prohibit this. And the problems you face are the best confirmation of the correctness of the new RFC. It’s better to throw all this gag out of the official patch. Don't tempt developers to break the new RFC with these error-producing functions.\n\n\nSergey Prokhorenkosergeyprokhorenko@yahoo.com.au \n\n On Wednesday, 24 January 2024 at 04:30:02 pm GMT+3, Aleksander Alekseev <aleksander@timescale.com> wrote: \n \n Hi,\n\n> Function to extract timestamp does not provide any guarantees at all. Standard states this, see Kyzer answers upthread.\n> Moreover, standard urges against relying on that if uuidX was generated before uuidY, then uuidX<uuid. The standard is doing a lot to make this happen, but does not guaranty that.\n> All what is guaranteed is the uniqueness at certain conditions.\n>\n> > Otherwise you can calculate crc64(X) or sha256(X)\n> > internally in order to generate an unique ID and claim that it's fine.\n> >\n> > Values that violate named invariants should be rejected with an error.\n>\n> Think about the value that you pass to uuid generation function as an entropy. It’s there to ensure uniqueness and promote ordering (but not guarantee).\n\nIf the standard doesn't guarantee something it doesn't mean it forbids\nus to give stronger guarantees. I'm convinced that these guarantees\nwill be useful in real-world applications, at least the ones acting\nexclusively within Postgres.\n\nThis being said, I understand your point of view too. Let's see what\nother people think.\n\n-- \nBest regards,\nAleksander Alekseev\n \n\"Other people\" think that extracting the timestamp from UUIDv7 in violation of the new RFC, and generating UUIDv7 from the timestamp were both terrible and poorly thought out ideas. 
The authors of the new RFC had very good reasons to prohibit this. And the problems you face are the best confirmation of the correctness of the new RFC. It’s better to throw all this gag out of the official patch. Don't tempt developers to break the new RFC with these error-producing functions.Sergey Prokhorenkosergeyprokhorenko@yahoo.com.au\n\n\n\n\n On Wednesday, 24 January 2024 at 04:30:02 pm GMT+3, Aleksander Alekseev <aleksander@timescale.com> wrote:\n \n\n\nHi,> Function to extract timestamp does not provide any guarantees at all. Standard states this, see Kyzer answers upthread.> Moreover, standard urges against relying on that if uuidX was generated before uuidY, then uuidX<uuid. The standard is doing a lot to make this happen, but does not guaranty that.> All what is guaranteed is the uniqueness at certain conditions.>> > Otherwise you can calculate crc64(X) or sha256(X)> > internally in order to generate an unique ID and claim that it's fine.> >> > Values that violate named invariants should be rejected with an error.>> Think about the value that you pass to uuid generation function as an entropy. It’s there to ensure uniqueness and promote ordering (but not guarantee).If the standard doesn't guarantee something it doesn't mean it forbidsus to give stronger guarantees. I'm convinced that these guaranteeswill be useful in real-world applications, at least the ones actingexclusively within Postgres.This being said, I understand your point of view too. Let's see whatother people think.-- Best regards,Aleksander Alekseev",
"msg_date": "Wed, 24 Jan 2024 21:30:33 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "That's right! There is no point in waiting for the official approval of the new RFC, which obviously will not change anything. I have been a contributor to this RFC for several years, and I can testify that every aspect imaginable has been thoroughly researched and agreed upon. Nothing new will definitely appear in the new RFC.\n\n\nSergey Prokhorenkosergeyprokhorenko@yahoo.com.au \n\n On Monday, 22 January 2024 at 07:22:32 am GMT+3, Nikolay Samokhvalov <nik@postgres.ai> wrote: \n \n On Fri, Jan 19, 2024 at 10:07 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n\n\n> On 19 Jan 2024, at 13:25, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> Also, I've added some documentation on all functions.\n\nHere's v12. Changes:\n1. Documentation improvements\n2. Code comments\n3. Better commit message and reviews list\n\n\nThank you, Andrey! I have just checked v12 – cleanly applied to HEAD, and functions work well. I especially like that fact that we keep uuid_extract_time(..) here – this is a great thing to have for time-based partitioning, and in many cases we will be able to decide not to have a creation column timestamp (e.g., \"created_at\") at all, saving 8 bytes.\nThe docs and comments look great too.\nOverall, the patch looks mature enough. It would be great to have it in pg17. Yes, the RFC is not fully finalized yet, but it's very close. And many libraries are already including implementation of UUIDv7 – here are some examples:\n- https://www.npmjs.com/package/uuidv7\n- https://crates.io/crates/uuidv7\n- https://github.com/google/uuid/pull/139\nNik \nThat's right! There is no point in waiting for the official approval of the new RFC, which obviously will not change anything. I have been a contributor to this RFC for several years, and I can testify that every aspect imaginable has been thoroughly researched and agreed upon. 
Nothing new will definitely appear in the new RFC.Sergey Prokhorenkosergeyprokhorenko@yahoo.com.au\n\n\n\n\n On Monday, 22 January 2024 at 07:22:32 am GMT+3, Nikolay Samokhvalov <nik@postgres.ai> wrote:\n \n\n\nOn Fri, Jan 19, 2024 at 10:07 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n> On 19 Jan 2024, at 13:25, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> Also, I've added some documentation on all functions.\n\nHere's v12. Changes:\n1. Documentation improvements\n2. Code comments\n3. Better commit message and reviews listThank you, Andrey! I have just checked v12 – cleanly applied to HEAD, and functions work well. I especially like that fact that we keep uuid_extract_time(..) here – this is a great thing to have for time-based partitioning, and in many cases we will be able to decide not to have a creation column timestamp (e.g., \"created_at\") at all, saving 8 bytes.The docs and comments look great too.Overall, the patch looks mature enough. It would be great to have it in pg17. Yes, the RFC is not fully finalized yet, but it's very close. And many libraries are already including implementation of UUIDv7 – here are some examples:- https://www.npmjs.com/package/uuidv7- https://crates.io/crates/uuidv7- https://github.com/google/uuid/pull/139Nik",
"msg_date": "Wed, 24 Jan 2024 21:49:45 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Wed, Jan 24, 2024 at 1:52 PM Sergey Prokhorenko <\nsergeyprokhorenko@yahoo.com.au> wrote:\n\n> That's right! There is no point in waiting for the official approval of\n> the new RFC, which obviously will not change anything. I have been a\n> contributor to this RFC\n> <https://www.ietf.org/archive/id/draft-ietf-uuidrev-rfc4122bis-14.html#name-acknowledgements>\n> for several years, and I can testify that every aspect imaginable has been\n> thoroughly researched and agreed upon. Nothing new will definitely appear\n> in the new RFC.\n>\n\n From a practical point of view, these two things are extremely important to\nhave to support partitioning. It is better to implement limitations than\nthrow them away.\n\nWithout them, this functionality will be of a very limited use in\ndatabases. We need to think about large tables – which means partitioning.\n\nNik\n\nOn Wed, Jan 24, 2024 at 1:52 PM Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au> wrote:That's right! There is no point in waiting for the official approval of the new RFC, which obviously will not change anything. I have been a contributor to this RFC for several years, and I can testify that every aspect imaginable has been thoroughly researched and agreed upon. Nothing new will definitely appear in the new RFC.From a practical point of view, these two things are extremely important to have to support partitioning. It is better to implement limitations than throw them away.Without them, this functionality will be of a very limited use in databases. We need to think about large tables – which means partitioning.Nik",
"msg_date": "Wed, 24 Jan 2024 20:40:30 -0800",
"msg_from": "Nikolay Samokhvalov <nik@postgres.ai>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Wed, Jan 24, 2024 at 8:40 PM Nikolay Samokhvalov <nik@postgres.ai> wrote:\n\n> On Wed, Jan 24, 2024 at 1:52 PM Sergey Prokhorenko <\n> sergeyprokhorenko@yahoo.com.au> wrote:\n>\n>> That's right! There is no point in waiting for the official approval of\n>> the new RFC, which obviously will not change anything. I have been a\n>> contributor to this RFC\n>> <https://www.ietf.org/archive/id/draft-ietf-uuidrev-rfc4122bis-14.html#name-acknowledgements>\n>> for several years, and I can testify that every aspect imaginable has been\n>> thoroughly researched and agreed upon. Nothing new will definitely\n>> appear in the new RFC.\n>>\n>\n> From a practical point of view, these two things are extremely important\n> to have to support partitioning. It is better to implement limitations than\n> throw them away.\n>\n> Without them, this functionality will be of a very limited use in\n> databases. We need to think about large tables – which means partitioning.\n>\n\napologies -- this was a response to another email from you:\n\n> \"Other people\" think that extracting the timestamp from UUIDv7 in\nviolation of the new RFC, and generating UUIDv7 from the timestamp were\nboth terrible and poorly thought out ideas. The authors of the new RFC had\nvery good reasons to prohibit this. And the problems you face are the best\nconfirmation of the correctness of the new RFC. It’s better to throw all\nthis gag out of the official patch. Don't tempt developers to break the new\nRFC with these error-producing functions.\n\nOn Wed, Jan 24, 2024 at 8:40 PM Nikolay Samokhvalov <nik@postgres.ai> wrote:On Wed, Jan 24, 2024 at 1:52 PM Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au> wrote:That's right! There is no point in waiting for the official approval of the new RFC, which obviously will not change anything. I have been a contributor to this RFC for several years, and I can testify that every aspect imaginable has been thoroughly researched and agreed upon. 
Nothing new will definitely appear in the new RFC.From a practical point of view, these two things are extremely important to have to support partitioning. It is better to implement limitations than throw them away.Without them, this functionality will be of a very limited use in databases. We need to think about large tables – which means partitioning.apologies -- this was a response to another email from you:> \"Other people\" think that extracting the timestamp from UUIDv7 in violation of the new RFC, and generating UUIDv7 from the timestamp were both terrible and poorly thought out ideas. The authors of the new RFC had very good reasons to prohibit this. And the problems you face are the best confirmation of the correctness of the new RFC. It’s better to throw all this gag out of the official patch. Don't tempt developers to break the new RFC with these error-producing functions.",
"msg_date": "Wed, 24 Jan 2024 20:41:43 -0800",
"msg_from": "Nikolay Samokhvalov <nik@postgres.ai>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 25 Jan 2024, at 09:40, Nikolay Samokhvalov <nik@postgres.ai> wrote:\n> \n> From a practical point of view, these two things are extremely important to have to support partitioning. It is better to implement limitations than throw them away.\n\nPostgres always was a bit hackerish, allowing slightly more then is safe. I.e. you can define immutable function that is not really immutable, turn off autovacuum or fsync. Why bother with safety guards here?\nMy opinion is that we should have this function to extract timestamp. Even if it can return strange values for imprecise RFC implementation.\n\n\n> On 25 Jan 2024, at 02:15, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> \n> So +1 for erroring when you provide a timestamp outside of that range\n> (either too far in the past or too far in the future).\n\n\nOK, it seems like we have some consensus on ERRORing..\n\nDo we have any other open items? Does v13 address all open items? Maybe let’s compose better error message?\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 25 Jan 2024 11:51:40 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "I am against turning the DBMS into another C++, in which they do not so much design something new as fix bugs in production after a crash.\nAs for partitioning, I already wrote to Andrey Borodin that we need a special function to generate a partition id using the UUIDv7 timestamp or even simultaneously with the generation of the timestamp. For example, every month (or so, since precision is not needed here) a new partition is created. Here's a good example: https://elixirforum.com/t/partitioning-postgres-tables-by-timestamp-based-uuids/60916\nBut without a separate function for extracting the entire timestamp from the UUID! Let's solve this specific problem, and not give the developers a grenade with the safety removed. Many developers have already decided to store the timestamp in UUIDv7, so as not to create a separate created_at field. Then they will delete table records with the old timestamp, etc. Horrible mistakes are simply guaranteed.\n\nSergey Prokhorenko sergeyprokhorenko@yahoo.com.au \n\n On Thursday, 25 January 2024 at 09:51:58 am GMT+3, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote: \n \n \n\n> On 25 Jan 2024, at 09:40, Nikolay Samokhvalov <nik@postgres.ai> wrote:\n> \n> From a practical point of view, these two things are extremely important to have to support partitioning. It is better to implement limitations than throw them away.\n\nPostgres always was a bit hackerish, allowing slightly more then is safe. I.e. you can define immutable function that is not really immutable, turn off autovacuum or fsync. Why bother with safety guards here?\nMy opinion is that we should have this function to extract timestamp. 
Even if it can return strange values for imprecise RFC implementation.\n\n\n> On 25 Jan 2024, at 02:15, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> \n> So +1 for erroring when you provide a timestamp outside of that range\n> (either too far in the past or too far in the future).\n\n\nOK, it seems like we have some consensus on ERRORing..\n\nDo we have any other open items? Does v13 address all open items? Maybe let’s compose better error message?\n\n\nBest regards, Andrey Borodin. \nI am against turning the DBMS into another C++, in which they do not so much design something new as fix bugs in production after a crash.As for partitioning, I already wrote to Andrey Borodin that we need a special function to generate a partition id using the UUIDv7 timestamp or even simultaneously with the generation of the timestamp. For example, every month (or so, since precision is not needed here) a new partition is created. Here's a good example: https://elixirforum.com/t/partitioning-postgres-tables-by-timestamp-based-uuids/60916But without a separate function for extracting the entire timestamp from the UUID! Let's solve this specific problem, and not give the developers a grenade with the safety removed. Many developers have already decided to store the timestamp in UUIDv7, so as not to create a separate created_at field. Then they will delete table records with the old timestamp, etc. Horrible mistakes are simply guaranteed.Sergey Prokhorenko sergeyprokhorenko@yahoo.com.au\n\n\n\n\n On Thursday, 25 January 2024 at 09:51:58 am GMT+3, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n \n\n\n> On 25 Jan 2024, at 09:40, Nikolay Samokhvalov <nik@postgres.ai> wrote:> > From a practical point of view, these two things are extremely important to have to support partitioning. It is better to implement limitations than throw them away.Postgres always was a bit hackerish, allowing slightly more then is safe. I.e. 
you can define immutable function that is not really immutable, turn off autovacuum or fsync. Why bother with safety guards here?My opinion is that we should have this function to extract timestamp. Even if it can return strange values for imprecise RFC implementation.> On 25 Jan 2024, at 02:15, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:> > So +1 for erroring when you provide a timestamp outside of that range> (either too far in the past or too far in the future).OK, it seems like we have some consensus on ERRORing..Do we have any other open items? Does v13 address all open items? Maybe let’s compose better error message?Best regards, Andrey Borodin.",
"msg_date": "Thu, 25 Jan 2024 08:09:18 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Andrey M. Borodin wrote on 25.01.2024 07:51:\n>\n>> On 25 Jan 2024, at 09:40, Nikolay Samokhvalov <nik@postgres.ai> wrote:\n>>\n>> From a practical point of view, these two things are extremely important to have to support partitioning. It is better to implement limitations than throw them away.\n> Postgres always was a bit hackerish, allowing slightly more then is safe. I.e. you can define immutable function that is not really immutable, turn off autovacuum or fsync. Why bother with safety guards here?\n> My opinion is that we should have this function to extract timestamp. Even if it can return strange values for imprecise RFC implementation.\n>\n>\n>> On 25 Jan 2024, at 02:15, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n>>\n>> So +1 for erroring when you provide a timestamp outside of that range\n>> (either too far in the past or too far in the future).\n>\n> OK, it seems like we have some consensus on ERRORing..\n>\n> Do we have any other open items? Does v13 address all open items? Maybe let’s compose better error message?\n+1 for erroring when ts is outside range.\n\nv13 looks good for me. I think we have reached a optimal compromise.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66\n\n\n\nAndrey M. Borodin wrote on 25.01.2024 \n07:51:\n\n\n\n\nOn 25 Jan 2024, at 09:40, Nikolay Samokhvalov <nik@postgres.ai> wrote:\n\n>From a practical point of view, these two things are extremely important to have to support partitioning. It is better to implement limitations than throw them away.\n\n\nPostgres always was a bit hackerish, allowing slightly more then is safe. I.e. you can define immutable function that is not really immutable, turn off autovacuum or fsync. Why bother with safety guards here?\nMy opinion is that we should have this function to extract timestamp. 
Even if it can return strange values for imprecise RFC implementation.\n\n\n\nOn 25 Jan 2024, at 02:15, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n\nSo +1 for erroring when you provide a timestamp outside of that range\n(either too far in the past or too far in the future).\n\n\n\nOK, it seems like we have some consensus on ERRORing..\n\nDo we have any other open items? Does v13 address all open items? Maybe let’s compose better error message?\n\n+1 for erroring when ts is outside range.\n\nv13 looks good for me. I think we have reached a optimal compromise.\n\n-- Przemysław\n Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Thu, 25 Jan 2024 12:14:38 +0100",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi,\n\n> Postgres always was a bit hackerish, allowing slightly more then is safe. I.e. you can define immutable function that is not really immutable, turn off autovacuum or fsync. Why bother with safety guards here?\n> My opinion is that we should have this function to extract timestamp. Even if it can return strange values for imprecise RFC implementation.\n\nCompletely agree.\n\nUsers that don't like or don't need it can pretend there are no\nuuid_extract_time() and uuidv7(T) in Postgres. If we don't provide\nthem however, users that need them will end up writing their own\nprobably buggy and not compatible implementations. That would be much\nworse.\n\n> So +1 for erroring when you provide a timestamp outside of that range\n> (either too far in the past or too far in the future).\n>\n> OK, it seems like we have some consensus on ERRORing..\n>\n> Do we have any other open items? Does v13 address all open items? Maybe let’s compose better error message?\n>\n> +1 for erroring when ts is outside range.\n>\n> v13 looks good for me. I think we have reached a optimal compromise.\n\nAndrey, many thanks for the updated patch.\n\nLGTM, cfbot is happy and I don't think we have any open items left. So\nchanging CF entry status back to RfC.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 25 Jan 2024 15:06:38 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi,\n\n> Andrey, many thanks for the updated patch.\n>\n> LGTM, cfbot is happy and I don't think we have any open items left. So\n> changing CF entry status back to RfC.\n\nPFA v14. I changed:\n\n```\nelog(ERROR, \"Time argument of UUID v7 cannot exceed 6 bytes\");\n```\n\n... to:\n\n```\nereport(ERROR,\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n errmsg(\"Time argument of UUID v7 is outside of the valid range\")));\n```\n\nWhich IMO tells a bit more to the average user and is translatable.\n\n> At a quick glance, the patch needs improving English, IMO.\n\nAgree. We could use some help from a native English speaker for this.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 25 Jan 2024 15:31:44 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Aleksander,\n\nIn this case the documentation must state that the functions uuid_extract_time() and uuidv7(T) are against the RFC requirements, and that developers may use these functions with caution at their own risk, and these functions are not recommended for production environment.\n\nThe function uuidv7(T) is not better than uuid_extract_time(). Careless developers may well pass any business date into this function: document date, registration date, payment date, reporting date, start date of the current month, data download date, and even a constant. This would be a profanation of UUIDv7 with very negative consequences.\n\nSergey Prokhorenkosergeyprokhorenko@yahoo.com.au \n\n On Thursday, 25 January 2024 at 03:06:50 pm GMT+3, Aleksander Alekseev <aleksander@timescale.com> wrote: \n \n Hi,\n\n> Postgres always was a bit hackerish, allowing slightly more then is safe. I.e. you can define immutable function that is not really immutable, turn off autovacuum or fsync. Why bother with safety guards here?\n> My opinion is that we should have this function to extract timestamp. Even if it can return strange values for imprecise RFC implementation.\n\nCompletely agree.\n\nUsers that don't like or don't need it can pretend there are no\nuuid_extract_time() and uuidv7(T) in Postgres. If we don't provide\nthem however, users that need them will end up writing their own\nprobably buggy and not compatible implementations. That would be much\nworse.\n\n> So +1 for erroring when you provide a timestamp outside of that range\n> (either too far in the past or too far in the future).\n>\n> OK, it seems like we have some consensus on ERRORing..\n>\n> Do we have any other open items? Does v13 address all open items? Maybe let’s compose better error message?\n>\n> +1 for erroring when ts is outside range.\n>\n> v13 looks good for me. 
I think we have reached a optimal compromise.\n\nAndrey, many thanks for the updated patch.\n\nLGTM, cfbot is happy and I don't think we have any open items left. So\nchanging CF entry status back to RfC.\n\n-- \nBest regards,\nAleksander Alekseev\n \nAleksander,In this case the documentation must state that the functions uuid_extract_time() and uuidv7(T) are against the RFC requirements, and that developers may use these functions with caution at their own risk, and these functions are not recommended for production environment.The function uuidv7(T) is not better than uuid_extract_time(). Careless developers may well pass any business date into this function: document date, registration date, payment date, reporting date, start date of the current month, data download date, and even a constant. This would be a profanation of UUIDv7 with very negative consequences.Sergey Prokhorenkosergeyprokhorenko@yahoo.com.au\n\n\n\n\n On Thursday, 25 January 2024 at 03:06:50 pm GMT+3, Aleksander Alekseev <aleksander@timescale.com> wrote:\n \n\n\nHi,> Postgres always was a bit hackerish, allowing slightly more then is safe. I.e. you can define immutable function that is not really immutable, turn off autovacuum or fsync. Why bother with safety guards here?> My opinion is that we should have this function to extract timestamp. Even if it can return strange values for imprecise RFC implementation.Completely agree.Users that don't like or don't need it can pretend there are nouuid_extract_time() and uuidv7(T) in Postgres. If we don't providethem however, users that need them will end up writing their ownprobably buggy and not compatible implementations. That would be muchworse.> So +1 for erroring when you provide a timestamp outside of that range> (either too far in the past or too far in the future).>> OK, it seems like we have some consensus on ERRORing..>> Do we have any other open items? Does v13 address all open items? 
Maybe let’s compose better error message?>> +1 for erroring when ts is outside range.>> v13 looks good for me. I think we have reached a optimal compromise.Andrey, many thanks for the updated patch.LGTM, cfbot is happy and I don't think we have any open items left. Sochanging CF entry status back to RfC.-- Best regards,Aleksander Alekseev",
"msg_date": "Thu, 25 Jan 2024 17:04:05 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "By the way, the Go language has also already implemented a function for UUIDv7: https://pkg.go.dev/github.com/gofrs/uuid#NewV7\n\n\n\n\nSergey Prokhorenko sergeyprokhorenko@yahoo.com.au \n\n On Thursday, 25 January 2024 at 12:49:46 am GMT+3, Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au> wrote: \n \n That's right! There is no point in waiting for the official approval of the new RFC, which obviously will not change anything. I have been a contributor to this RFC for several years, and I can testify that every aspect imaginable has been thoroughly researched and agreed upon. Nothing new will definitely appear in the new RFC.\n\n\nSergey Prokhorenkosergeyprokhorenko@yahoo.com.au \n\n On Monday, 22 January 2024 at 07:22:32 am GMT+3, Nikolay Samokhvalov <nik@postgres.ai> wrote: \n \n On Fri, Jan 19, 2024 at 10:07 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n\n\n> On 19 Jan 2024, at 13:25, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> Also, I've added some documentation on all functions.\n\nHere's v12. Changes:\n1. Documentation improvements\n2. Code comments\n3. Better commit message and reviews list\n\n\nThank you, Andrey! I have just checked v12 – cleanly applied to HEAD, and functions work well. I especially like that fact that we keep uuid_extract_time(..) here – this is a great thing to have for time-based partitioning, and in many cases we will be able to decide not to have a creation column timestamp (e.g., \"created_at\") at all, saving 8 bytes.\nThe docs and comments look great too.\nOverall, the patch looks mature enough. It would be great to have it in pg17. Yes, the RFC is not fully finalized yet, but it's very close. 
And many libraries are already including implementation of UUIDv7 – here are some examples:\n- https://www.npmjs.com/package/uuidv7\n- https://crates.io/crates/uuidv7\n- https://github.com/google/uuid/pull/139\nNik \nBy the way, the Go language has also already implemented a function for UUIDv7: https://pkg.go.dev/github.com/gofrs/uuid#NewV7Sergey Prokhorenko sergeyprokhorenko@yahoo.com.au\n\n\n\n\n On Thursday, 25 January 2024 at 12:49:46 am GMT+3, Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au> wrote:\n \n\n\nThat's right! There is no point in waiting for the official approval of the new RFC, which obviously will not change anything. I have been a contributor to this RFC for several years, and I can testify that every aspect imaginable has been thoroughly researched and agreed upon. Nothing new will definitely appear in the new RFC.Sergey Prokhorenkosergeyprokhorenko@yahoo.com.au\n\n\n\n\n On Monday, 22 January 2024 at 07:22:32 am GMT+3, Nikolay Samokhvalov <nik@postgres.ai> wrote:\n \n\n\nOn Fri, Jan 19, 2024 at 10:07 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n> On 19 Jan 2024, at 13:25, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> Also, I've added some documentation on all functions.\n\nHere's v12. Changes:\n1. Documentation improvements\n2. Code comments\n3. Better commit message and reviews listThank you, Andrey! I have just checked v12 – cleanly applied to HEAD, and functions work well. I especially like that fact that we keep uuid_extract_time(..) here – this is a great thing to have for time-based partitioning, and in many cases we will be able to decide not to have a creation column timestamp (e.g., \"created_at\") at all, saving 8 bytes.The docs and comments look great too.Overall, the patch looks mature enough. It would be great to have it in pg17. Yes, the RFC is not fully finalized yet, but it's very close. 
And many libraries are already including implementation of UUIDv7 – here are some examples:- https://www.npmjs.com/package/uuidv7- https://crates.io/crates/uuidv7- https://github.com/google/uuid/pull/139Nik",
"msg_date": "Sun, 28 Jan 2024 12:42:11 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "tl;dr I believe we should remove the uuidv7(timestamp) function from\nthis patchset.\n\nOn Thu, 25 Jan 2024 at 18:04, Sergey Prokhorenko\n<sergeyprokhorenko@yahoo.com.au> wrote:\n> In this case the documentation must state that the functions uuid_extract_time() and uuidv7(T) are against the RFC requirements, and that developers may use these functions with caution at their own risk, and these functions are not recommended for production environment.\n>\n> The function uuidv7(T) is not better than uuid_extract_time(). Careless developers may well pass any business date into this function: document date, registration date, payment date, reporting date, start date of the current month, data download date, and even a constant. This would be a profanation of UUIDv7 with very negative consequences.\n\nAfter re-reading the RFC more diligently, I'm inclined to agree with\nSergey that uuidv7(timestamp) is quite problematic. And I would even\nsay that we should not provide uuidv7(timestamp) at all, and instead\nshould only provide uuidv7(). Providing an explicit timestamp for\nUUIDv7 is explicitly against the spec (in my reading):\n\n> Implementations acquire the current timestamp from a reliable\n> source to provide values that are time-ordered and continually\n> increasing. Care must be taken to ensure that timestamp changes\n> from the environment or operating system are handled in a way that\n> is consistent with implementation requirements. For example, if\n> it is possible for the system clock to move backward due to either\n> manual adjustment or corrections from a time synchronization\n> protocol, implementations need to determine how to handle such\n> cases. (See Altering, Fuzzing, or Smearing below.)\n>\n> ...\n>\n> UUID version 1 and 6 both utilize a Gregorian epoch timestamp\n> while UUIDv7 utilizes a Unix Epoch timestamp. 
If other timestamp\n> sources or a custom timestamp epoch are required, UUIDv8 MUST be\n> used.\n>\n> ...\n>\n> Monotonicity (each subsequent value being greater than the last) is\n> the backbone of time-based sortable UUIDs.\n\nBy allowing users to provide a timestamp we're not using a continually\nincreasing timestamp for our UUIDv7 generation, and thus it would not\nbe a valid UUIDv7 implementation.\n\nI do agree with others however, that being able to pass in an\narbitrary timestamp for UUID generation would be very useful. For\nexample to be able to partition by the timestamp in the UUID and then\nbeing able to later load data for an older timestamp and have it be\nadded to to the older partition. But it's possible to do that while\nstill following the spec, by using a UUIDv8 instead of UUIDv7. So for\nthis usecase we could make a helper function that generates a UUIDv8\nusing the same format as a UUIDv7, but allows storing arbitrary\ntimestamps. You might say, why not sligthly change UUIDv7 then? Well\nmainly because of this critical sentence in the RFC:\n\n> UUIDv8's uniqueness will be implementation-specific and MUST NOT be assumed.\n\nThat would allow us to say that using this UUIDv8 helper requires\ncareful usage and checks if uniqueness is required.\n\nSo I believe we should remove the uuidv7(timestamp) function from this patchset.\n\nI don't see a problem with including uuid_extract_time though. Afaict\nthe only thing the RFC says about extracting timestamps is that the\nRFC does not give a requirement or guarantee about how close the\nstored timestamp is to the actual time:\n\n> Implementations MAY alter the actual timestamp. Some examples\n> include security considerations around providing a real clock\n> value within a UUID, to correct inaccurate clocks, to handle leap\n> seconds, or instead of dividing a number of microseconds by 1000\n> to obtain a millisecond value; dividing by 1024 (or some other\n> value) for performance reasons. 
This specification makes no\n> requirement or guarantee about how close the clock value needs to\n> be to the actual time.\n\nI see no reason why we cannot make stronger guarantees about the\ntimestamps that we use to generate UUIDs with our uuidv7() function.\nAnd then we can update the documentation for\nuuid_extract_time to something like this:\n\n> This function extracts a timestamptz from UUID versions 1, 6 and 7. For other\n> versions and variants this function returns NULL. The extracted timestamp\n> does not necessarily equate to the time of UUID generation. How close it is\n> to the actual time depends on the implementation that generated to UUID.\n> The uuidv7() function provided PostgreSQL will normally store the actual time of\n> generation to in the UUID, but if large batches of UUIDs are generated at the\n> same time it's possible that some UUIDs will store a time that is slightly later\n> than their actual generation time.\n\n\n",
"msg_date": "Mon, 29 Jan 2024 12:38:24 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Thu, 25 Jan 2024 at 13:31, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> PFA v14.\n\n+<function>uuidv4</function> () <returnvalue>uuid</returnvalue>\n+</synopsis>\n+ Both functions return a version 4 (random) UUID. This is the most commonly\n+ used type of UUID and is appropriate when random distribution of keys does\n+ not affect performance of an application.\n+<synopsis>\n+<function>uuidv7</function> () <returnvalue>uuid</returnvalue>\n+</synopsis>\n+ This function returns a version 7 (time-ordered + random) UUID. This UUID\n+ version should be used when application prefers locality of identifiers.\n+<synopsis>\n\nI think it would be good to explain the tradeoffs between uuidv4 and\nuuidv7 a bit better. How about changing the docs to something like\nthis:\n\n<function>uuidv4</function> () <returnvalue>uuid</returnvalue>\n</synopsis>\nBoth functions return a version 4 (random) UUID. UUIDv4 is one of the\nmost commonly used types of UUID. It is appropriate when random\ndistribution of keys does not affect performance of an application or\nwhen exposing the generation time of a UUID has unacceptable security\nor business intelligence implications.\n<synopsis>\n<function>uuidv7</function> () <returnvalue>uuid</returnvalue>\n</synopsis>\nThis function returns a version 7 (time-ordered + random) UUID. It\nprovides much better data locality than UUIDv4, which can greatly\nimprove performance when UUID is used in a BTREE index (the default\nindex type in PostgreSQL). To achieve this data locality, UUIDv7\nembeds its own generation time into the UUID. If exposing such a\ntimestamp has unacceptable security or business intelligence\nimplications, then uuidv4() should be used instead.\n<synopsis>\n\n\n",
"msg_date": "Mon, 29 Jan 2024 13:01:36 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Mon, Jan 29, 2024 at 7:38 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n>\n> tl;dr I believe we should remove the uuidv7(timestamp) function from\n> this patchset.\n>\n> On Thu, 25 Jan 2024 at 18:04, Sergey Prokhorenko\n> <sergeyprokhorenko@yahoo.com.au> wrote:\n> > In this case the documentation must state that the functions uuid_extract_time() and uuidv7(T) are against the RFC requirements, and that developers may use these functions with caution at their own risk, and these functions are not recommended for production environment.\n> >\n> > The function uuidv7(T) is not better than uuid_extract_time(). Careless developers may well pass any business date into this function: document date, registration date, payment date, reporting date, start date of the current month, data download date, and even a constant. This would be a profanation of UUIDv7 with very negative consequences.\n>\n> After re-reading the RFC more diligently, I'm inclined to agree with\n> Sergey that uuidv7(timestamp) is quite problematic. And I would even\n> say that we should not provide uuidv7(timestamp) at all, and instead\n> should only provide uuidv7(). Providing an explicit timestamp for\n> UUIDv7 is explicitly against the spec (in my reading):\n>\n> > Implementations acquire the current timestamp from a reliable\n> > source to provide values that are time-ordered and continually\n> > increasing. Care must be taken to ensure that timestamp changes\n> > from the environment or operating system are handled in a way that\n> > is consistent with implementation requirements. For example, if\n> > it is possible for the system clock to move backward due to either\n> > manual adjustment or corrections from a time synchronization\n> > protocol, implementations need to determine how to handle such\n> > cases. 
(See Altering, Fuzzing, or Smearing below.)\n> >\n> > ...\n> >\n> > UUID version 1 and 6 both utilize a Gregorian epoch timestamp\n> > while UUIDv7 utilizes a Unix Epoch timestamp. If other timestamp\n> > sources or a custom timestamp epoch are required, UUIDv8 MUST be\n> > used.\n> >\n> > ...\n> >\n> > Monotonicity (each subsequent value being greater than the last) is\n> > the backbone of time-based sortable UUIDs.\n>\n> By allowing users to provide a timestamp we're not using a continually\n> increasing timestamp for our UUIDv7 generation, and thus it would not\n> be a valid UUIDv7 implementation.\n>\n> I do agree with others however, that being able to pass in an\n> arbitrary timestamp for UUID generation would be very useful. For\n> example to be able to partition by the timestamp in the UUID and then\n> being able to later load data for an older timestamp and have it be\n> added to to the older partition. But it's possible to do that while\n> still following the spec, by using a UUIDv8 instead of UUIDv7. So for\n> this usecase we could make a helper function that generates a UUIDv8\n> using the same format as a UUIDv7, but allows storing arbitrary\n> timestamps. You might say, why not sligthly change UUIDv7 then? Well\n> mainly because of this critical sentence in the RFC:\n>\n> > UUIDv8's uniqueness will be implementation-specific and MUST NOT be assumed.\n>\n> That would allow us to say that using this UUIDv8 helper requires\n> careful usage and checks if uniqueness is required.\n>\n> So I believe we should remove the uuidv7(timestamp) function from this patchset.\n\nAgreed, the RFC section 6.1[1] has the following statements:\n\n```\nUUID version 1 and 6 both utilize a Gregorian epoch timestamp while\nUUIDv7 utilizes a Unix Epoch timestamp. 
If other timestamp sources or\na custom timestamp epoch are required, UUIDv8 MUST be used.\n```\n\nIn contrib/uuid-ossp, uuidv1 does not allow the user to supply a\ncustom timestamp,\nso I think it should be the same for uuidv6 and uuidv7.\n\nAnd I have the same feeling that we should not consider v6 and v8 in\nthis patch.\n\n\n[1]: https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis-14#section-6.1-2.4.1\n\n>\n> I don't see a problem with including uuid_extract_time though. Afaict\n> the only thing the RFC says about extracting timestamps is that the\n> RFC does not give a requirement or guarantee about how close the\n> stored timestamp is to the actual time:\n>\n> > Implementations MAY alter the actual timestamp. Some examples\n> > include security considerations around providing a real clock\n> > value within a UUID, to correct inaccurate clocks, to handle leap\n> > seconds, or instead of dividing a number of microseconds by 1000\n> > to obtain a millisecond value; dividing by 1024 (or some other\n> > value) for performance reasons. This specification makes no\n> > requirement or guarantee about how close the clock value needs to\n> > be to the actual time.\n>\n> I see no reason why we cannot make stronger guarantees about the\n> timestamps that we use to generate UUIDs with our uuidv7() function.\n> And then we can update the documentation for\n> uuid_extract_time to something like this:\n>\n> > This function extracts a timestamptz from UUID versions 1, 6 and 7. For other\n> > versions and variants this function returns NULL. The extracted timestamp\n> > does not necessarily equate to the time of UUID generation. 
How close it is\n> > to the actual time depends on the implementation that generated to UUID.\n> > The uuidv7() function provided PostgreSQL will normally store the actual time of\n> > generation to in the UUID, but if large batches of UUIDs are generated at the\n> > same time it's possible that some UUIDs will store a time that is slightly later\n> > than their actual generation time.\n>\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Mon, 29 Jan 2024 21:58:55 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 25 Jan 2024, at 22:04, Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au> wrote:\n> \n> Aleksander,\n> \n> In this case the documentation must state that the functions uuid_extract_time() and uuidv7(T) are against the RFC requirements, and that developers may use these functions with caution at their own risk, and these functions are not recommended for production environment.\n\nRefining documentation is good. However, saying that these functions are not recommended for production must be based on some real threats.\n\n> \n> The function uuidv7(T) is not better than uuid_extract_time(). Careless developers may well pass any business date into this function: document date, registration date, payment date, reporting date, start date of the current month, data download date, and even a constant. This would be a profanation of UUIDv7 with very negative consequences.\n\nEven if the developer pass constant time to uuidv7(T) they will get what they asked for - unique identifier. Moreover - it still will be keeping locality. There will be no negative consequences at all.\nOn the contrary, experienced developer can leverage parameter when data locality should be reduced. If you have serveral streams of data, you might want to introduce some shift in reduce contention.\nFor example, you can generate uuidv7(now() + '1 day' * random(0,10)). This will split 1 contention point to 10 and increase ingestion performance 10x-fold.\n\n> On 29 Jan 2024, at 18:58, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> \n> If other timestamp sources or\n> a custom timestamp epoch are required, UUIDv8 MUST be used.\n\nWell, yeah. RFC says this... in 4 capital letters :) I believe it's kind of a big deficiency that k-way sortable identifiers are not implementable on top of UUIDv7. Well, let's go without this function. 
UUIDv7 is still an improvement over previous versions.\n\n\nJelte, your documentation corrections looks good to me, I'll include them in next version.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 29 Jan 2024 23:32:38 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Mon, 29 Jan 2024 at 19:32, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> Even if the developer pass constant time to uuidv7(T) they will get what they asked for - unique identifier. Moreover - it still will be keeping locality. There will be no negative consequences at all.\n\nIt will be significantly \"less unique\" than if they wouldn't pass a\nconstant time. Basically it would become a UUIDv4, but with 74 bits of\nrandom data instead of 122. That might not be enough anymore to\n\"guarantee\" uniqueness. I guess that's why it is required to use\nUUIDv8 in these cases, because correct usage is now a requirement for\nassuming uniqueness. And for UUIDv8 the spec says this:\n\n> UUIDv8's uniqueness will be implementation-specific and MUST NOT be assumed.\n\n> > On 29 Jan 2024, at 18:58, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >\n> > If other timestamp sources or\n> > a custom timestamp epoch are required, UUIDv8 MUST be used.\n>\n> Well, yeah. RFC says this... in 4 capital letters :)\n\nAs an FYI, there is an RFC that defines these keywords that's why they\nare capital letters: https://www.ietf.org/rfc/rfc2119.txt\n\n> I believe it's kind of a big deficiency that k-way sortable identifiers are not implementable on top of UUIDv7. Well, let's go without this function. UUIDv7 is still an improvement over previous versions.\n\nYeah, I liked the feature to generate UUIDv7 based on timestamp too.\nBut following the spec seems more important than a nice feature to me.\n\n\n",
"msg_date": "Mon, 29 Jan 2024 21:38:27 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Andrey,\n I understand and agree with your goals. But instead of dangerous universal functions, it is better to develop safe highly specialized functions that implement only these goals.\nThere should not be a function uuidv7(T) from an arbitrary timestamp, but there should be a special function that implements your algorithm: uuidv8(now() + '1 century' * random(0,10)).\nI replaced 1 day with 1 century because the spread of 1 day is too small. Over time, records will be inserted between existing records, which is undesirable.\nSimilarly, if we need to calculate the partition id, then we do not need to use the uuid_extract_time() function to provide the extracted timestamp, the accuracy of which cannot be guaranteed. Instead, we need to give exactly the partition id, calculated using the uuidv7 timestamp. For example, partitions may have approximately a month interval between each other.\nAs for the documentation, it must be indicated that the UUIDv7 structure is not timestamp + random, but timestamp + randomly seeded counter + random, like in all advanced implementations.\n\nSergey Prokhorenko\nsergeyprokhorenko@yahoo.com.au\n______________________________________________________________ \n\n On Monday, 29 January 2024 at 09:32:54 pm GMT+3, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote: \n \n \n\n> On 25 Jan 2024, at 22:04, Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au> wrote:\n> \n> Aleksander,\n> \n> In this case the documentation must state that the functions uuid_extract_time() and uuidv7(T) are against the RFC requirements, and that developers may use these functions with caution at their own risk, and these functions are not recommended for production environment.\n\nRefining documentation is good. However, saying that these functions are not recommended for production must be based on some real threats.\n\n> \n> The function uuidv7(T) is not better than uuid_extract_time(). 
Careless developers may well pass any business date into this function: document date, registration date, payment date, reporting date, start date of the current month, data download date, and even a constant. This would be a profanation of UUIDv7 with very negative consequences.\n\nEven if the developer pass constant time to uuidv7(T) they will get what they asked for - unique identifier. Moreover - it still will be keeping locality. There will be no negative consequences at all.\nOn the contrary, experienced developer can leverage parameter when data locality should be reduced. If you have serveral streams of data, you might want to introduce some shift in reduce contention.\nFor example, you can generate uuidv7(now() + '1 day' * random(0,10)). This will split 1 contention point to 10 and increase ingestion performance 10x-fold.\n\n> On 29 Jan 2024, at 18:58, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> \n> If other timestamp sources or\n> a custom timestamp epoch are required, UUIDv8 MUST be used.\n\nWell, yeah. RFC says this... in 4 capital letters :) I believe it's kind of a big deficiency that k-way sortable identifiers are not implementable on top of UUIDv7. Well, let's go without this function. UUIDv7 is still an improvement over previous versions.\n\n\nJelte, your documentation corrections looks good to me, I'll include them in next version.\n\nThanks!\n\n\nBest regards, Andrey Borodin. \nAndrey, I understand and agree with your goals. But instead of dangerous universal functions, it is better to develop safe highly specialized functions that implement only these goals.There should not be a function uuidv7(T) from an arbitrary timestamp, but there should be a special function that implements your algorithm: uuidv8(now() + '1 century' * random(0,10)).I replaced 1 day with 1 century because the spread of 1 day is too small. 
Over time, records will be inserted between existing records, which is undesirable.\nSimilarly, if we need to calculate the partition id, then we do not need to use the uuid_extract_time() function to provide the extracted timestamp, the accuracy of which cannot be guaranteed. Instead, we need to give exactly the partition id, calculated using the uuidv7 timestamp. For example, partitions may have approximately a month interval between each other.\nAs for the documentation, it must be indicated that the UUIDv7 structure is not timestamp + random, but timestamp + randomly seeded counter + random, like in all advanced implementations.\n\nSergey Prokhorenko\nsergeyprokhorenko@yahoo.com.au",
"msg_date": "Tue, 30 Jan 2024 00:27:21 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 30 Jan 2024, at 01:38, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> \n> Yeah, I liked the feature to generate UUIDv7 based on timestamp too.\n> But following the spec seems more important than a nice feature to me.\n\nPFA v15. Changes: removed timestamp argument, incorporated Jelte’s documentation addons.\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 30 Jan 2024 11:54:48 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Andrey,\n\nI think this phrase is outdated: \"This function can optionally accept a timestamp used instead of current time.This allows implementation of k-way sotable identifiers.\"\nThis phrase is wrong: \"Both functions return a version 4 (random) UUID.\"\nFor this phrase the reason is unclear and the phrase is most likely incorrect:\nif large batches of UUIDs are generated at the+ same time it's possible that some UUIDs will store a time that is slightly later+ than their actual generation time\n\nSergey Prokhorenko\n\nsergeyprokhorenko@yahoo.com.au \n\n On Tuesday, 30 January 2024 at 09:55:04 am GMT+3, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote: \n\n> On 30 Jan 2024, at 01:38, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> \n> Yeah, I liked the feature to generate UUIDv7 based on timestamp too.\n> But following the spec seems more important than a nice feature to me.\n\nPFA v15. Changes: removed timestamp argument, incorporated Jelte’s documentation addons.\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 30 Jan 2024 07:28:28 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 30 Jan 2024, at 12:28, Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au> wrote:\n> \n> \n> I think this phrase is outdated: \"This function can optionally accept a timestamp used instead of current time.\n> This allows implementation of k-way sotable identifiers.”\nFixed.\n\n> This phrase is wrong: \"Both functions return a version 4 (random) UUID.”\nThis applies to functions gen_random_uuid() and uuidv4().\n> \n> For this phrase the reason is unclear and the phrase is most likely incorrect:\n> if large batches of UUIDs are generated at the\n> + same time it's possible that some UUIDs will store a time that is slightly later\n> + than their actual generation time\n\nI’ve rewritten this phrase, hope it’s more clear now.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 30 Jan 2024 14:56:10 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi Andrey,\n\nOn Tue, Jan 30, 2024 at 5:56 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>\n>\n>\n> > On 30 Jan 2024, at 12:28, Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au> wrote:\n> >\n> >\n> > I think this phrase is outdated: \"This function can optionally accept a timestamp used instead of current time.\n> > This allows implementation of k-way sotable identifiers.”\n> Fixed.\n>\n> > This phrase is wrong: \"Both functions return a version 4 (random) UUID.”\n> This applies to functions gen_random_uuid() and uuidv4().\n> >\n> > For this phrase the reason is unclear and the phrase is most likely incorrect:\n> > if large batches of UUIDs are generated at the\n> > + same time it's possible that some UUIDs will store a time that is slightly later\n> > + than their actual generation time\n>\n> I’ve rewritten this phrase, hope it’s more clear now.\n>\n>\n> Best regards, Andrey Borodin.\n\n+Datum\n+uuid_extract_var(PG_FUNCTION_ARGS)\n+{\n+ pg_uuid_t *uuid = PG_GETARG_UUID_P(0);\n+ uint16_t result;\n+ result = uuid->data[8] >> 6;\n+\n+ PG_RETURN_UINT16(result);\n+}\n\\ No newline at end of file\n\nIt's always good to add a newline at the end of a source file, though\nthis might be nitpicky.\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Tue, 30 Jan 2024 18:33:21 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 30 Jan 2024, at 15:33, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> \n> It's always good to add a newline at the end of a source file, though\n> this might be nitpicky.\n\nThanks, also fixed warning found by CFBot.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 30 Jan 2024 18:35:28 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "typo:\nbeing carried to time step\n\nshould be: being carried to timestamp\n\nSergey Prokhorenko sergeyprokhorenko@yahoo.com.au \n\n On Tuesday, 30 January 2024 at 04:35:45 pm GMT+3, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote: \n\n> On 30 Jan 2024, at 15:33, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> \n> It's always good to add a newline at the end of a source file, though\n> this might be nitpicky.\n\nThanks, also fixed warning found by CFBot.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 30 Jan 2024 18:37:56 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On 30.01.24 14:35, Andrey M. Borodin wrote:\n>> On 30 Jan 2024, at 15:33, Junwang Zhao <zhjwpku@gmail.com> wrote:\n>>\n>> It's always good to add a newline at the end of a source file, though\n>> this might be nitpicky.\n> \n> Thanks, also fixed warning found by CFBot.\n\nI have various comments on this patch:\n\n\n- doc/src/sgml/func.sgml\n\nThe documentation of the new functions should be broken up a bit.\nIt's all one paragraph now. At least make it several paragraphs, or\npossibly tables or something else.\n\nAvoid listing the functions twice: Once before the description and\nthen again in the description. That's just going to get out of date.\nThe first listing is not necessary, I think.\n\nThe return values in the documentation should use the public-facing\ntype names, like \"timestamp with time zone\" and \"smallint\".\n\nThe descriptions of the UUID generation functions use handwavy\nlanguage in their descriptions, like \"It provides much better data\nlocality\" or \"unacceptable security or business intelligence\nimplications\", which isn't useful. Either we cut that all out and\njust say, it creates a UUIDv7, done, look elsewhere for more\ninformation, or we provide some more concretely useful details.\n\nWe shouldn't label a link as \"IETF standard\" when it's actually a\ndraft.\n\n\n- src/include/catalog/pg_proc.dat\n\nThe description of uuidv4 should be \"generate UUID version 4\", so that\nit parallels uuidv7.\n\nThe description of uuid_extract_time says 'extract timestamp from UUID\nversion 7', the implementation is not limited to version 7.\n\nI think uuid_extract_time should be named uuid_extract_timestamp,\nbecause it extracts a timestamp, not a time.\n\nThe functions uuid_extract_ver and uuid_extract_var could be named\nuuid_extract_version and uuid_extract_variant. 
Otherwise, it's hard\nto tell them apart, with only one letter different.\n\n\n- src/test/regress/sql/uuid.sql\n\nWhy are the tests using the input format '{...}', which is not the\nstandard one?\n\n\n- src/backend/utils/adt/uuid.c\n\nAll this new code should have more comments. There is a lot of bit\ntwiddling going on, and I suppose one is expected to follow along in\nthe RFC? At least each function should have a header comment, so one\ndoesn't have to check in pg_proc.dat what it's supposed to do.\n\nI'm suspicious that these functions all appear to return null for\nerroneous input, rather than raising errors. I think at least some\nexplanation for this should be recorded somewhere.\n\nI think the behavior of uuid_extract_var(iant) is wrong. The code\ntakes just two bits to return, but the draft document is quite clear\nthat the variant is 4 bits (see Table 1).\n\nThe uuidv7 function could really use a header comment that explains\nthe choices that were made. The RFC draft provides various options\nthat implementations could use; we should describe which ones we\nchose.\n\nI would have expected that, since gettimeofday() provides microsecond\nprecision, we'd put the extra precision into \"rand_a\" as per Section 6.2 \nmethod 3.\n\nYou use some kind of counter, but could you explain which method that\ncounter implements?\n\nI don't see any acknowledgment of issues relating to concurrency or\nrestarts. Like, how do we prevent duplicates being generated by\nconcurrent sessions or between restarts? Maybe the counter or random\nstuff does that, but it's not explained.\n\n\n\n",
"msg_date": "Wed, 6 Mar 2024 08:13:02 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi Peter,\n\nthank you for such a thoughtful review.\n\n> On 6 Mar 2024, at 12:13, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n> I have various comments on this patch:\n> \n> \n> - doc/src/sgml/func.sgml\n> \n> The documentation of the new functions should be broken up a bit.\n> It's all one paragraph now. At least make it several paragraphs, or\n> possibly tables or something else.\nI've split functions to generate UUIDs from functions to extract stuff.\n\n> \n> Avoid listing the functions twice: Once before the description and\n> then again in the description. That's just going to get out of date.\n> The first listing is not necessary, I think.\n\nFixed.\n\n> The return values in the documentation should use the public-facing\n> type names, like \"timestamp with time zone\" and \"smallint\".\n\nFixed.\n\n> The descriptions of the UUID generation functions use handwavy\n> language in their descriptions, like \"It provides much better data\n> locality\" or \"unacceptable security or business intelligence\n> implications\", which isn't useful. Either we cut that all out and\n> just say, it creates a UUIDv7, done, look elsewhere for more\n> information, or we provide some more concretely useful details.\n\nI've removed all that stuff entirely.\n\n> We shouldn't label a link as \"IETF standard\" when it's actually a\n> draft.\n\nFixed.\n\nWell, all my modifications of documentation are kind of blind... I tried to \"make docs\", but it gives me a gazillion of errors... 
Is there an easy way to see resulting HTML?\n\n\n> - src/include/catalog/pg_proc.dat\n> \n> The description of uuidv4 should be \"generate UUID version 4\", so that\n> it parallels uuidv7.\n\nFixed.\n\n> The description of uuid_extract_time says 'extract timestamp from UUID\n> version 7', the implementation is not limited to version 7.\n\nFixed.\n\n> I think uuid_extract_time should be named uuid_extract_timestamp,\n> because it extracts a timestamp, not a time.\n\nRenamed.\n\n> The functions uuid_extract_ver and uuid_extract_var could be named\n> uuid_extract_version and uuid_extract_variant. Otherwise, it's hard\n> to tell them apart, with only one letter different.\n\nRenamed.\n\n> - src/test/regress/sql/uuid.sql\n> \n> Why are the tests using the input format '{...}', which is not the\n> standard one?\n\nFixed.\n\n> - src/backend/utils/adt/uuid.c\n> \n> All this new code should have more comments. There is a lot of bit\n> twiddling going on, and I suppose one is expected to follow along in\n> the RFC? At least each function should have a header comment, so one\n> doesn't have to check in pg_proc.dat what it's supposed to do.\n\nI've added some header comment. One big comment is attached to v7, I tried to take parts mostly from RFC. Yet there are a lot of my additions that now need review...\n\n> I'm suspicious that these functions all appear to return null for\n> erroneous input, rather than raising errors. I think at least some\n> explanation for this should be recorded somewhere.\n\nThe input is not erroneous per se.\nBut the fact that\n# select 1/0;\nERROR: division by zero\nmakes me consider throwing an error. There was some argumentation upthread for not throwing error though, but now I cannot find it... maybe I accepted this behaviour as more user-friendly.\n\n> I think the behavior of uuid_extract_var(iant) is wrong. 
The code\n> takes just two bits to return, but the draft document is quite clear\n> that the variant is 4 bits (see Table 1).\n\nWell, it was correct only for the implemented variant. I've made a version that implements full table 1 from section 4.1.\n\n> The uuidv7 function could really use a header comment that explains\n> the choices that were made. The RFC draft provides various options\n> that implementations could use; we should describe which ones we\n> chose.\n\nDone.\n\n> \n> I would have expected that, since gettimeofday() provides microsecond\n> precision, we'd put the extra precision into \"rand_a\" as per Section 6.2 method 3.\n\nI had chosen method 2 over method 3 as most portable. Can we be sure how many bits (after reading milliseconds) are there across different OSes? Even if we put extra 10 bits of timestamp, we cannot safely extract them.\nThese bits could promote inter-backend sortability. I.e. when many backends generate data fast - this data is still somewhat ordered even within 1ms. But I think benefits of this sortability are outweighed by portability (unknown real resolution) and simplicity (we don't store microseconds, thus do not try to extract them).\nAll these arguments are weak, but if one method were strictly better than another - there would be only one method.\n\n> \n> You use some kind of counter, but could you explain which method that\n> counter implements?\nI described the counter in the uuidv7() header.\n\n> \n> I don't see any acknowledgment of issues relating to concurrency or\n> restarts. Like, how do we prevent duplicates being generated by\n> concurrent sessions or between restarts? Maybe the counter or random\n> stuff does that, but it's not explained.\n\nI think restart takes more than 1ms, so this is covered with the time tick.\nI've added a paragraph about frequency of generation in the uuidv7() header.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 10 Mar 2024 17:59:10 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 10 Mar 2024, at 17:59, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> I tried to \"make docs\", but it gives me gazilion of errors... Is there an easy way to see resulting HTML?\n\nOops, CFbot expectedly found a problem...\nSorry for the noise, this version, I hope, will pass all the tests.\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 10 Mar 2024 21:08:24 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi,\n\n> Oops, CFbot expectedly found a problem...\n> Sorry for the noise, this version, I hope, will pass all the tests.\n> Thanks!\n>\n> Best regards, Andrey Borodin.\n\nI had some issues applying v19 against the current `master` branch.\nPFA the rebased and minorly tweaked v20.\n\nThe patch LGTM. I think it could be merged unless there are any open\nissues left. I don't think so, but maybe I missed something.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 11 Mar 2024 15:44:58 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Attached a few comment fixes/improvements and a pgindent run (patch 0002-0004)\n\nNow with the added comments, one thing pops out to me: The comments\nmention that we use \"Monotonic Random\", but when I read the spec that\nexplicitly recommends against using an increment of 1 when using\nmonotonic random. I feel like if we use an increment of 1, we're\nbetter off going for the \"Fixed-Length Dedicated Counter Bits\" method\n(i.e. change the code to start the counter at 0). See patch 0005 for\nan example of that change.\n\nI'm also wondering if we really want to use the extra rand_b bits for\nthis. The spec says we MAY, but it does remove the amount of\nrandomness in our UUIDs.",
"msg_date": "Mon, 11 Mar 2024 16:56:23 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 11 Mar 2024, at 20:56, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> \n> Attached a few comment fixes/improvements and a pgindent run (patch 0002-0004)\n\nThanks!\n\n> Now with the added comments, one thing pops out to me: The comments\n> mention that we use \"Monotonic Random\", but when I read the spec that\n> explicitly recommends against using an increment of 1 when using\n> monotonic random. I feel like if we use an increment of 1, we're\n> better off going for the \"Fixed-Length Dedicated Counter Bits\" method\n> (i.e. change the code to start the counter at 0). See patch 0005 for\n> an example of that change.\n> \n> I'm also wondering if we really want to use the extra rand_b bits for\n> this. The spec says we MAY, but it does remove the amount of\n> randomness in our UUIDs.\n\nMethod 1 is just Method 2 with specifically picked constants.\nBut I'll have to use some hand-wavy wordings...\n\nUUID consists of these 128 bits:\na. Mandatory 2 var and 4 ver bits.\nb. Flexible but strongly recommended 48-bit unix_ts_ms. These bits contribute to global sortability of values generated at frequency less than 1KHz.\nc. Counter bits:\nc1. Initialised with 0 on any time tick.\nc2. Initialised with randomness.\nc3*. bit width of a counter step (*not counted in 128 bit capacity, can be non-integral)\nd. Randomness bits.\n\nMethod 1 is when c2=0. My implementation of method 2 uses c1=1, c2=17\n\nConsider all UUIDs generated at any given millisecond. Probability of a collision of two UUIDs generated at frequency less than 1KHz is p = 2^-(c2+d)\nCapacity of a counter has expected value of c = 2^(c1)*2^(c2-1)/2^c3\nTo guess the next UUID, an attacker has to pick correctly among u = 2^(d+c3) possibilities.\n\nFirst, observe that c3 contributes to unguessability at exactly the same scale as it decreases counter capacity. There is no difference between using bits in d directly, or in c3. There is no point in non-zero c3. 
Every bit that could be given to c3 can equally be given to d.\n\nSecond, observe that c2 bits contribute to both collision protection and counter capacity! And when the time ticks, c2 also contributes to unguessability! So, technically, we should consider using all available bits as c2 bits.\n\nHow many c1 bits do we need? I've chosen one - to prevent occasional counter capacity reduction.\n\nIf c1 = 1, we can distribute 73 bits between c2 and d. I've chosen c2 = 17 and d = 56 as an arbitrary compromise between capacity of one backend per ms and prevention of global collision.\nThis compromise is mostly dictated by maximum frequency of UUID generation by one backend, I've chosen 200MHz as a sane value.\n\n\nThis compromise is much easier when you have 74 spare bits: this crazy amount of information forgives almost any mistake. Imagine you have to distribute 10 bits between c2 and d. And you try to prevent collision between 10 independent devices which need capacity to generate IDs with frequency of 10KHz each and keep sortability. You would have something like c1=1, c2=3, d=6.\n\nSorry for this long and vague explanation, if it still seems too uncertain we can have a chat or something like that. I don't think this number picking stuff deserves to be commented on, because it still is quite close to random. RFC gives us too much freedom of choice.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 11 Mar 2024 23:27:43 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 11:27:43PM +0500, Andrey M. Borodin wrote:\n> Sorry for this long and vague explanation, if it still seems too\n> uncertain we can have a chat or something like that. I don't think\n> this number picking stuff deserve to be commented, because it still\n> is quite close to random. RFC gives us too much freedom of choice.\n\nSpeaking about the RFC, I can see that there is a draft but nothing\nformal yet. The last one I can see is v14 from last November:\nhttps://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis-14\n\nIt does not strike me as a good idea to rush an implementation without\na specification officially approved because there is always a risk of\nshipping something that's non-compliant into core. But perhaps I am\nmissing something on the RFC side?\n--\nMichael",
"msg_date": "Tue, 12 Mar 2024 14:53:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 12 Mar 2024, at 10:53, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> It does not strike me as a good idea to rush an implementation without\n> a specification officially approved because there is always a risk of\n> shipping something that's non-compliant into core. But perhaps I am\n> missing something on the RFC side?\n\nUpthread one of document’s authors commented:\n\n> On 14 Feb 2023, at 19:13, Kyzer Davis (kydavis) <kydavis@cisco.com> wrote:\n> \n> The point is 99% of the work since adoption by the IETF has been ironing out \n> RFC4122's problems and nothing major related to UUIDv6/7/8 which are all in a \n> very good state.\n\nAnd also\n\n\n> On 22 Jan 2024, at 09:22, Nikolay Samokhvalov <nik@postgres.ai> wrote:\n> \n> And many libraries are already including implementation of UUIDv7 – here are some examples:\n> \n> - https://www.npmjs.com/package/uuidv7\n> - https://crates.io/crates/uuidv7\n> - https://github.com/google/uuid/pull/139\n\nSo at least reviewing patch and agreeing on chosen methods and constants makes sense.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 12 Mar 2024 11:10:37 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Tue, Mar 12, 2024 at 11:10:37AM +0500, Andrey M. Borodin wrote:\n> On 12 Mar 2024, at 10:53, Michael Paquier <michael@paquier.xyz> wrote:\n>> On 22 Jan 2024, at 09:22, Nikolay Samokhvalov <nik@postgres.ai> wrote:\n>> \n>> And many libraries are already including implementation of UUIDv7 – here are some examples:\n>> \n>> - https://www.npmjs.com/package/uuidv7\n>> - https://crates.io/crates/uuidv7\n>> - https://github.com/google/uuid/pull/139\n> \n> So at least reviewing patch and agreeing on chosen methods and constants makes sense.\n\nSure, there is no problem in discussing a patch to implement a\nbehavior. But I disagree about taking a risk in merging something\nthat could become non-compliant with the approved RFC, if the draft is\napproved at the end, of course. This just strikes me as a bad idea.\n--\nMichael",
"msg_date": "Tue, 12 Mar 2024 15:31:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Mon, 11 Mar 2024 at 19:27, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> Sorry for this long and vague explanation, if it still seems too uncertain we can have a chat or something like that. I don't think this number picking stuff deserve to be commented, because it still is quite close to random. RFC gives us too much freedom of choice.\n\nI thought your explanation was quite clear and I agree that this\napproach makes the most sense. I sent an email to the RFC authors to\nask for their feedback with you (Andrey) in the CC, because even\nthough it makes the most sense it does not comply with either of\nmethod 1 or 2 as described in the RFC.\n\n\n",
"msg_date": "Tue, 12 Mar 2024 16:35:59 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Tue, 12 Mar 2024 at 07:32, Michael Paquier <michael@paquier.xyz> wrote:\n> Sure, there is no problem in discussing a patch to implement a\n> behavior. But I disagree about taking a risk in merging something\n> that could become non-compliant with the approved RFC, if the draft is\n> approved at the end, of course. This just strikes me as a bad idea.\n\nI agree that we shouldn't release UUIDv7 support if the RFC describing\nthat is not yet approved. But I do think it would be a shame if e.g.\nthe RFC got approved 2 weeks after Postgres its feature freeze. Which\nwould then mean we'd have to wait another 1.5 years before actually\nusing uuidv7. Would it be a reasonable compromise to still merge the\npatch for PG17 (assuming the code is good to merge with regards to the\ncurrent draft RFC), but revert the commit if the RFC is not approved\nbefore some deadline before the release date (e.g. before the first\nrelease candidate)?\n\n\n",
"msg_date": "Tue, 12 Mar 2024 16:41:10 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Hi Jelte,\nI am one of the contributors to this RFC.\n\nAndrey's patch corresponds exactly to Fixed-Length Dedicated Counter Bits (Method 1).\n\nAndrey and you simply did not read the RFC a little further down in the text:\n__________________________________________________________________\n\nThe following sub-topics cover topics related solely with creating reliable fixed-length dedicated counters:\n\n - Fixed-Length Dedicated Counter Seeding:\nImplementations utilizing the fixed-length counter method randomly initialize the counter with each new timestamp tick. However, when the timestamp has not increased, the counter is instead incremented by the desired increment logic. When utilizing a randomly seeded counter alongside Method 1, the random value MAY be regenerated with each counter increment without impacting sortability. The downside is that Method 1 is prone to overflows if a counter of adequate length is not selected or the random data generated leaves little room for the required number of increments. Implementations utilizing fixed-length counter method MAY also choose to randomly initialize a portion of the counter rather than the entire counter. For example, a 24 bit counter could have the 23 bits in least-significant, right-most, position randomly initialized. The remaining most significant, left-most counter bit is initialized as zero for the sole purpose of guarding against counter rollovers.\n\n - Fixed-Length Dedicated Counter Length:\nSelect a counter bit-length that can properly handle the level of timestamp precision in use. For example, millisecond precision generally requires a larger counter than a timestamp with nanosecond precision. General guidance is that the counter SHOULD be at least 12 bits but no longer than 42 bits. Care must be taken to ensure that the counter length selected leaves room for sufficient entropy in the random portion of the UUID after the counter. This entropy helps improve the unguessability characteristics of UUIDs created within the batch.\n\nThe following sub-topics cover rollover handling with either type of counter method:\n\n - ...\n - Counter Rollover Handling:\nCounter rollovers MUST be handled by the application to avoid sorting issues. The general guidance is that applications that care about absolute monotonicity and sortability should freeze the counter and wait for the timestamp to advance which ensures monotonicity is not broken. Alternatively, implementations MAY increment the timestamp ahead of the actual time and reinitialize the counter.\n\n\nSergey Prokhorenko\nsergeyprokhorenko@yahoo.com.au \n\n On Tuesday, 12 March 2024 at 06:36:13 pm GMT+3, Jelte Fennema-Nio <postgres@jeltef.nl> wrote: \n \n On Mon, 11 Mar 2024 at 19:27, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> Sorry for this long and vague explanation, if it still seems too uncertain we can have a chat or something like that. I don't think this number picking stuff deserve to be commented, because it still is quite close to random. RFC gives us too much freedom of choice.\n\nI thought your explanation was quite clear and I agree that this\napproach makes the most sense. 
I sent an email to the RFC authors to\nask for their feedback with you (Andrey) in the CC, because even\nthough it makes the most sense it does not comply with the either of\nmethod 1 or 2 as described in the RFC.",
"msg_date": "Tue, 12 Mar 2024 17:18:17 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Tue, 12 Mar 2024 at 18:18, Sergey Prokhorenko\n<sergeyprokhorenko@yahoo.com.au> wrote:\n> Andrey and you simply did not read the RFC a little further down in the text:\n\nYou're totally right, sorry about that. Maybe it would be good to move\nthose subsections around a bit in the RFC though, so that anything\nrelated to only one method is included in the section for that method.\n\n\n",
"msg_date": "Tue, 12 Mar 2024 18:26:09 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On 10.03.24 13:59, Andrey M. Borodin wrote:\n>> The functions uuid_extract_ver and uuid_extract_var could be named\n>> uuid_extract_version and uuid_extract_variant. Otherwise, it's hard\n>> to tell them apart, with only one letter different.\n> \n> Renamed.\n\nAnother related comment: Throughout your patch, swap the order of \nuuid_extract_variant and uuid_extract_version. First, this makes more \nsense because version is subordinate to variant, and also it makes it \nalphabetical.\n\n>> I think the behavior of uuid_extract_var(iant) is wrong. The code\n>> takes just two bits to return, but the draft document is quite clear\n>> that the variant is 4 bits (see Table 1).\n> \n> Well, it was correct only for implemented variant. I've made version that implements full table 1 from section 4.1.\n\nI think we are still interpreting this differently. I think \nuuid_extract_variant should just return whatever is in those four bits. \nYour function comment says \"Can return only 0, 0b10, 0b110 and 0b111.\", \nwhich I don't think it is correct. It should return 0 through 15.\n\n>> I would have expected that, since gettimeofday() provides microsecond\n>> precision, we'd put the extra precision into \"rand_a\" as per Section 6.2 method 3.\n> \n> I had chosen method 2 over method 3 as most portable. Can we be sure how many bits (after reading milliseconds) are there across different OSes?\n\nI think this should have been researched. If we don't know how many \nbits we have, how do we know we have enough for milliseconds? I think \nwe should at least have some kind of idea, if we are going to have this \nconversation.\n\n\n\n",
"msg_date": "Thu, 14 Mar 2024 12:07:56 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
    "msg_contents": "\n\n> On 14 Mar 2024, at 16:07, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n> On 10.03.24 13:59, Andrey M. Borodin wrote:\n>>> The functions uuid_extract_ver and uuid_extract_var could be named\n>>> uuid_extract_version and uuid_extract_variant. Otherwise, it's hard\n>>> to tell them apart, with only one letter different.\n>> Renamed.\n> \n> Another related comment: Throughout your patch, swap the order of uuid_extract_variant and uuid_extract_version. First, this makes more sense because version is subordinate to variant, and also it makes it alphabetical.\nI will do it soon.\n> \n>>> I think the behavior of uuid_extract_var(iant) is wrong. The code\n>>> takes just two bits to return, but the draft document is quite clear\n>>> that the variant is 4 bits (see Table 1).\n>> Well, it was correct only for implemented variant. I've made version that implements full table 1 from section 4.1.\n> \n> I think we are still interpreting this differently. I think uuid_extract_variant should just return whatever is in those four bits. Your function comment says \"Can return only 0, 0b10, 0b110 and 0b111.\", which I don't think it is correct. It should return 0 through 15.\nWe will return \"do not care\" bits. These bits can confuse someone. E.g. for variant 0b10 we can return 8, 9, 10 and 11 randomly. Is that OK? BTW, for some reason the document lists numbers 1-15, but you are correct that the range is 0-15.\n\n> \n>>> I would have expected that, since gettimeofday() provides microsecond\n>>> precision, we'd put the extra precision into \"rand_a\" as per Section 6.2 method 3.\n>> I had chosen method 2 over method 3 as most portable. Can we be sure how many bits (after reading milliseconds) are there across different OSes?\n> \n> I think this should have been researched. If we don't know how many bits we have, how do we know we have enough for milliseconds? 
I think we should at least have some kind of idea, if we are going to have this conversation.\n\nBits for milliseconds are strictly defined by the document: there are always 48 bits, independently from clock resolution.\nBut I don't think it's main problem for Method 3. Method 1 actually guarantees strictly increasing order of UUIDs generated by single backend. Method 3 can generate a lot of unsorted data in case of time leaping backward.\n\nBTW Kyzer (in an off-list discussion) and Sergey confirmed that implemented method from the patch actually is Method 1.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 14 Mar 2024 16:25:56 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
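The Method 1 scheme discussed in this thread (a 48-bit millisecond timestamp, an 18-bit dedicated counter that is randomly reseeded on each new tick with one zeroed guard bit, and rollover carried into the timestamp) can be modeled with a short sketch. This is illustrative Python, not the patch's C code; the function name and packing details are assumptions based on the thread:

```python
import os
import time

_prev_ms = -1
_counter = 0  # 18-bit fixed-length dedicated counter (Method 1)

def uuidv7() -> bytes:
    """Illustrative UUIDv7: 48-bit ms timestamp + 18-bit counter + random tail."""
    global _prev_ms, _counter
    ms = time.time_ns() // 1_000_000
    if ms <= _prev_ms:
        # Same (or backward) tick: increment the counter instead.
        _counter += 1
        if _counter > 0x3FFFF:           # 18-bit counter rollover...
            _counter = 0
            _prev_ms += 1                # ...is carried into the timestamp
        ms = _prev_ms
    else:
        _prev_ms = ms
        # New tick: reseed the counter randomly, keeping the top (guard) bit zero.
        _counter = int.from_bytes(os.urandom(3), "big") & 0x1FFFF

    uuid = bytearray(16)
    uuid[0:6] = ms.to_bytes(6, "big")    # unix_ts_ms
    uuid[6] = (_counter >> 14) & 0x0F    # top 4 counter bits (rand_a)
    uuid[7] = (_counter >> 6) & 0xFF     # middle 8 counter bits
    uuid[8] = _counter & 0x3F            # low 6 counter bits (rand_b)
    uuid[9:16] = os.urandom(7)           # random tail
    uuid[6] = (uuid[6] & 0x0F) | 0x70    # version 7
    uuid[8] = (uuid[8] & 0x3F) | 0x80    # variant 0b10
    return bytes(uuid)
```

Within one process this yields strictly increasing values: same-tick calls bump the counter, and an 18-bit overflow advances the frozen timestamp by one millisecond, matching the rollover guidance quoted from the RFC.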
{
"msg_contents": "On 14.03.24 12:25, Andrey M. Borodin wrote:\n>>>> I think the behavior of uuid_extract_var(iant) is wrong. The code\n>>>> takes just two bits to return, but the draft document is quite clear\n>>>> that the variant is 4 bits (see Table 1).\n>>> Well, it was correct only for implemented variant. I've made version that implements full table 1 from section 4.1.\n>> I think we are still interpreting this differently. I think uuid_extract_variant should just return whatever is in those four bits. Your function comment says \"Can return only 0, 0b10, 0b110 and 0b111.\", which I don't think it is correct. It should return 0 through 15.\n> We will return \"do not care\" bits. This bits can confuse someone. E.g. for varaint 0b10 we can return 8, 9, 10 and 11 randomly. Is it OK? BTW for some reason document lists number 1-15, but your are correct that range is 0-15.\n\nI agree it's confusing. Before I studied the RFC 4122bis project, I \ndidn't even know about variant vs. version. I think overall people will \nfind this more confusing than useful. If you just want to know, \"is \nthis UUID of the kind specified in RFC 4122\", you can query it with \nuuid_extract_version(x) IS NOT NULL. So maybe we don't need the \n_extract_variant function?\n\n\n\n",
"msg_date": "Thu, 14 Mar 2024 16:10:29 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 14 Mar 2024, at 20:10, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n>>>>> I think the behavior of uuid_extract_var(iant) is wrong. The code\n>>>>> takes just two bits to return, but the draft document is quite clear\n>>>>> that the variant is 4 bits (see Table 1).\n>>>> Well, it was correct only for implemented variant. I've made version that implements full table 1 from section 4.1.\n>>> I think we are still interpreting this differently. I think uuid_extract_variant should just return whatever is in those four bits. Your function comment says \"Can return only 0, 0b10, 0b110 and 0b111.\", which I don't think it is correct. It should return 0 through 15.\n>> We will return \"do not care\" bits. This bits can confuse someone. E.g. for varaint 0b10 we can return 8, 9, 10 and 11 randomly. Is it OK? BTW for some reason document lists number 1-15, but your are correct that range is 0-15.\n> \n> I agree it's confusing. Before I studied the RFC 4122bis project, I didn't even know about variant vs. version. I think overall people will find this more confusing than useful. If you just want to know, \"is this UUID of the kind specified in RFC 4122\", you can query it with uuid_extract_version(x) IS NOT NULL. So maybe we don't need the _extract_variant function?\n\nI think it's the best possible solution. The variant has no value besides detecting if a version can be extracted.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 14 Mar 2024 23:06:04 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
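As a concrete illustration of the bit positions under discussion (version in the high nibble of octet 6, variant in the top bits of octet 8), here is a rough Python sketch. The function names are hypothetical, not the committed C functions; per Peter's remark, the committed uuid_extract_version() can be probed with IS NOT NULL for non-RFC UUIDs:

```python
def extract_version(u: bytes):
    """Return the version (high nibble of octet 6), or None when the
    variant is not the RFC 4122 one (top bits of octet 8 != 0b10)."""
    if u[8] >> 6 != 0b10:
        return None
    return u[6] >> 4

def extract_variant(u: bytes) -> int:
    """Return the raw top four bits of octet 8 (0..15), including the
    'do not care' bits that made the function confusing."""
    return u[8] >> 4
```

For a variant-0b10 UUID the raw four-bit read lands anywhere in 8..11 depending on the two "do not care" bits, which is exactly the confusion that led to dropping the variant function.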
{
"msg_contents": "Hi,\n\n> > So maybe we don't need the _extract_variant function?\n>\n> I think it's the best possible solution. The variant has no value besides detecting if a version can be extracted.\n\n+1 to the idea. I doubt that anyone will miss it.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 15 Mar 2024 12:47:42 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 15 Mar 2024, at 14:47, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> \n> +1 to the idea. I doubt that anyone will miss it.\n\nPFA v22.\n\nChanges:\n1. Squashed all editorialisation by Jelte\n2. Fixed my erroneous comments on using Method 2 (we are using method 1 instead)\n3. Remove all traces of uuid_extract_variant()\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Sat, 16 Mar 2024 22:43:54 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On 16.03.24 18:43, Andrey M. Borodin wrote:\n>> On 15 Mar 2024, at 14:47, Aleksander Alekseev <aleksander@timescale.com> wrote:\n>>\n>> +1 to the idea. I doubt that anyone will miss it.\n> \n> PFA v22.\n> \n> Changes:\n> 1. Squashed all editorialisation by Jelte\n> 2. Fixed my erroneous comments on using Method 2 (we are using method 1 instead)\n> 3. Remove all traces of uuid_extract_variant()\n\nI have committed a subset of this for now, namely the additions of \nuuid_extract_timestamp() and uuid_extract_version(). These seemed \nmature and agreed upon. You can rebase the rest of your patch on top of \nthat.\n\nI have started a separate discussion to learn about the precision we can \nexpect from gettimeofday().\n\n\n\n",
"msg_date": "Tue, 19 Mar 2024 09:55:51 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
    "msg_contents": "> On 19 Mar 2024, at 13:55, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n> On 16.03.24 18:43, Andrey M. Borodin wrote:\n>>> On 15 Mar 2024, at 14:47, Aleksander Alekseev <aleksander@timescale.com> wrote:\n>>> \n>>> +1 to the idea. I doubt that anyone will miss it.\n>> PFA v22.\n>> Changes:\n>> 1. Squashed all editorialisation by Jelte\n>> 2. Fixed my erroneous comments on using Method 2 (we are using method 1 instead)\n>> 3. Remove all traces of uuid_extract_variant()\n> \n> I have committed a subset of this for now, namely the additions of uuid_extract_timestamp() and uuid_extract_version(). These seemed mature and agreed upon. You can rebase the rest of your patch on top of that.\n\nGreat! Thank you! PFA v23 with rebase on HEAD.\n\n> I have started a separate discussion to learn about the precision we can expect from gettimeofday().\n\nEven in the presence of a real microsecond-enabled and portable timer, using microseconds does not seem to me an optimal way of utilising UUID bits.\n\nTimer-based bits contribute to global sortability. But the real timers we have are not even millisecond adjusted. We can hope for ~few ms variation in one datacenter or in presence of atomic clocks.\n\nTime-based bits contribute to global uniqueness, but certainly they are not as effective as counter bits.\n\nTime-based bits do not provide local sortability guarantees: some UUIDs might get the same microseconds or be affected by backward leaps.\n\nI think that microseconds are good only for hardware-specific solutions, not for something that runs on a variety of platforms, OSes, and devices.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 20 Mar 2024 23:08:09 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Wed, 20 Mar 2024 at 19:08, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> Timer-based bits contribute to global sortability. But the real timers we have are not even millisecond adjusted. We can hope for ~few ms variation in one datacenter or in presence of atomic clocks.\n\nI think the main benefit of using microseconds would not be\nsortability between servers, but sortability between backends. With\nthe current counter approach between backends we only have sortability\nat the millisecond level.\n\nHowever, I don't really think it is incredibly important to get the\n\"perfect\" approach to filling in rand_a/rand_b right now. As long as\nwe don't document what we do, we can choose to change the method\nwithout breaking backwards compatibility. Because either approach\nresults in valid UUIDv7s.\n\n\n",
"msg_date": "Thu, 21 Mar 2024 16:21:15 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
    "msg_contents": "\n\n> On 21 Mar 2024, at 20:21, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> \n> On Wed, 20 Mar 2024 at 19:08, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> Timer-based bits contribute to global sortability. But the real timers we have are not even millisecond adjusted. We can hope for ~few ms variation in one datacenter or in presence of atomic clocks.\n> \n> I think the main benefit of using microseconds would not be\n> sortability between servers, but sortability between backends. \n\nOh, that’s an interesting practical feature!\nSo, essentially the counter is a theoretical guarantee of sortability in one backend, while microseconds give practical sortability between backends.\n\n> However, I don't really think it is incredibly important to get the\n> \"perfect\" approach to filling in rand_a/rand_b right now. As long as\n> we don't document what we do, we can choose to change the method\n> without breaking backwards compatibility. Because either approach\n> results in valid UUIDv7s.\n\nMakes sense to me. I think both methods would be much better than UUIDv4 for practical reasons. And even not using extra bits at all (filling them with random numbers) would work in 99.9% of cases.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 22 Mar 2024 11:53:51 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
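For contrast, Method 3 (replacing the leftmost random bits with increased clock precision), which Peter raised earlier via gettimeofday(), would amount to scaling the sub-millisecond fraction into the 12-bit rand_a field. A sketch, illustrative only since the patch uses Method 1, with a hypothetical helper name:

```python
def sub_ms_rand_a(ns: int) -> int:
    """Encode the sub-millisecond fraction of a nanosecond timestamp
    into a 12-bit rand_a value (Method 3: extra clock precision)."""
    sub_ms_ns = ns % 1_000_000             # nanoseconds within the millisecond
    return (sub_ms_ns * 4096) // 1_000_000 # scale the fraction into 2^12 steps
```

This is where the portability concern bites: if the OS clock only ticks every few milliseconds, these 12 bits add no ordering information, whereas the counter works regardless of clock resolution.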
{
    "msg_contents": "I think it's better to leave Andrey's patch as is, and add another function in the future with a customizable UUIDv7 structure for special use cases. The structure description can be in JSON format. See this discussion.\n\n\nSergey Prokhorenko sergeyprokhorenko@yahoo.com.au \n\n On Friday, 22 March 2024 at 09:54:07 am GMT+3, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote: \n \n \n\n> On 21 Mar 2024, at 20:21, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> \n> On Wed, 20 Mar 2024 at 19:08, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> Timer-based bits contribute to global sortability. But the real timers we have are not even millisecond adjusted. We can hope for ~few ms variation in one datacenter or in presence of atomic clocks.\n> \n> I think the main benefit of using microseconds would not be\n> sortability between servers, but sortability between backends. \n\nOh, that’s an interesting practical feature!\nSe, essentially counter is a theoretical guaranty of sortability in one backend, while microseconds are practical sortability between backends.\n\n> However, I don't really think it is incredibly important to get the\n> \"perfect\" approach to filling in rand_a/rand_b right now. As long as\n> we don't document what we do, we can choose to change the method\n> without breaking backwards compatibility. Because either approach\n> results in valid UUIDv7s.\n\nMakes sense to me. I think both methods would be much better than UUIDv4 for practical reasons. And even not using extra bits at all (fill them with random numbers) would work for 0.999 cases.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 22 Mar 2024 11:43:58 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On 21.03.24 16:21, Jelte Fennema-Nio wrote:\n> On Wed, 20 Mar 2024 at 19:08, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> Timer-based bits contribute to global sortability. But the real timers we have are not even millisecond adjusted. We can hope for ~few ms variation in one datacenter or in presence of atomic clocks.\n> \n> I think the main benefit of using microseconds would not be\n> sortability between servers, but sortability between backends.\n\nThere is that, and there are also multiple backend workers for one session.\n\n\n\n",
"msg_date": "Fri, 22 Mar 2024 13:51:14 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
    "msg_contents": "Why not use a single UUID generator for the database table in this case, similar to autoincrement?\n\n\nSergey Prokhorenko\nsergeyprokhorenko@yahoo.com.au \n\n On Friday, 22 March 2024 at 03:51:20 pm GMT+3, Peter Eisentraut <peter@eisentraut.org> wrote: \n \n On 21.03.24 16:21, Jelte Fennema-Nio wrote:\n> On Wed, 20 Mar 2024 at 19:08, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> Timer-based bits contribute to global sortability. But the real timers we have are not even millisecond adjusted. We can hope for ~few ms variation in one datacenter or in presence of atomic clocks.\n> \n> I think the main benefit of using microseconds would not be\n> sortability between servers, but sortability between backends.\n\nThere is that, and there are also multiple backend workers for one session.",
"msg_date": "Fri, 22 Mar 2024 13:42:20 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
    "msg_contents": "BTW: Each microservice should have its own database to ensure data isolation and independence, enabling better scalability and fault tolerance\nSource: Microservices Pattern: Shared database\n\n\nSergey Prokhorenko sergeyprokhorenko@yahoo.com.au \n\n On Friday, 22 March 2024 at 04:42:20 pm GMT+3, Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au> wrote: \n \n Why not use a single UUID generator for the database table in this case, similar to autoincrement?\n\n\nSergey Prokhorenko\nsergeyprokhorenko@yahoo.com.au \n\n On Friday, 22 March 2024 at 03:51:20 pm GMT+3, Peter Eisentraut <peter@eisentraut.org> wrote: \n \n On 21.03.24 16:21, Jelte Fennema-Nio wrote:\n> On Wed, 20 Mar 2024 at 19:08, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> Timer-based bits contribute to global sortability. But the real timers we have are not even millisecond adjusted. We can hope for ~few ms variation in one datacenter or in presence of atomic clocks.\n> \n> I think the main benefit of using microseconds would not be\n> sortability between servers, but sortability between backends.\n\nThere is that, and there are also multiple backend workers for one session.",
"msg_date": "Fri, 22 Mar 2024 13:58:59 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
    "msg_contents": "Another source: Microservices Pattern: Database per service\nA service's database is private to that service\n\n\nSergey Prokhorenko sergeyprokhorenko@yahoo.com.au \n\n On Friday, 22 March 2024 at 04:58:59 pm GMT+3, Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au> wrote: \n \n BTW: Each microservice should have its own database to ensure data isolation and independence, enabling better scalability and fault tolerance\nSource: Microservices Pattern: Shared database\n\n\nSergey Prokhorenko sergeyprokhorenko@yahoo.com.au \n\n On Friday, 22 March 2024 at 04:42:20 pm GMT+3, Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au> wrote: \n \n Why not use a single UUID generator for the database table in this case, similar to autoincrement?\n\n\nSergey Prokhorenko\nsergeyprokhorenko@yahoo.com.au \n\n On Friday, 22 March 2024 at 03:51:20 pm GMT+3, Peter Eisentraut <peter@eisentraut.org> wrote: \n \n On 21.03.24 16:21, Jelte Fennema-Nio wrote:\n> On Wed, 20 Mar 2024 at 19:08, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> Timer-based bits contribute to global sortability. But the real timers we have are not even millisecond adjusted. We can hope for ~few ms variation in one datacenter or in presence of atomic clocks.\n> \n> I think the main benefit of using microseconds would not be\n> sortability between servers, but sortability between backends.\n\nThere is that, and there are also multiple backend workers for one session.",
"msg_date": "Fri, 22 Mar 2024 14:06:57 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
    "msg_contents": "On 20.03.24 19:08, Andrey M. Borodin wrote:\n>> On 19 Mar 2024, at 13:55, Peter Eisentraut <peter@eisentraut.org> wrote:\n>>\n>> On 16.03.24 18:43, Andrey M. Borodin wrote:\n>>>> On 15 Mar 2024, at 14:47, Aleksander Alekseev <aleksander@timescale.com> wrote:\n>>>>\n>>>> +1 to the idea. I doubt that anyone will miss it.\n>>> PFA v22.\n>>> Changes:\n>>> 1. Squashed all editorialisation by Jelte\n>>> 2. Fixed my erroneous comments on using Method 2 (we are using method 1 instead)\n>>> 3. Remove all traces of uuid_extract_variant()\n>>\n>> I have committed a subset of this for now, namely the additions of uuid_extract_timestamp() and uuid_extract_version(). These seemed mature and agreed upon. You can rebase the rest of your patch on top of that.\n> \n> Great! Thank you! PFA v23 with rebase on HEAD.\n\nI have been studying the uuidv() function.\n\nI find this code extremely hard to follow.\n\nWe don't need to copy all that documentation from the RFC 4122bis \ndocument. People can read that themselves. What I would like to see is \neasy-to-find information on what from there we are implementing. Like,\n\n- UUID version 7\n- fixed-length dedicated counter\n- counter is 18 bits\n- 4 bits are initialized as zero\n\nThat's more or less all I would need to know what is going on.\n\nThat said, I don't understand why you say it's an 18 bit counter, when \nyou overwrite 6 bits with variant and version. Then it's just a 12 bit \ncounter? Which is the size of the rand_a field, so that kind of makes \nsense. But 12 bits is the recommended minimum, and (in this patch) we \ndon't use sub-millisecond timestamp precision, so we should probably use \nmore than the minimum?\n\nAlso, you are initializing 4 bits (I think?) to zero to guard against \ncounter rollovers (so it's really just an 8 bit counter?). But nothing \nchecks against such rollovers, so I don't understand the use of that.\n\nThe code could be organized better. 
In the not-increment_counter case, \nyou could use two separate pg_strong_random calls: One to initialize \nrand_b, starting at &uuid->data[8], and one to initialize the counter. \nThen the former could be shared between the two branches, and the code \nto assign the sequence_counter to the uuid fields could also be shared.\n\nI would also prefer if the normal case (not-increment_counter) were the \nfirst if branch.\n\n\nSome other notes on your patch:\n\n- Your rebase duplicated the documentation of uuid_extract_timestamp and \nuuid_extract_version.\n\n- PostgreSQL code uses uint64 etc. instead of uint64_t etc.\n\n- It seems the added includes\n\n#include \"access/xlog.h\"\n#include \"utils/builtins.h\"\n#include \"utils/datetime.h\"\n\nare not needed.\n\n- The static variables sequence_counter and previous_timestamp could be \nkept inside the uuidv7() function.\n\n\n\n",
"msg_date": "Fri, 22 Mar 2024 15:15:05 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
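Peter's 18-bit vs. 12-bit question comes down to how the counter interleaves with the version and variant bits; Andrey's follow-up clarifies it is 4 bits in octet 6, 8 bits in octet 7, and 6 bits in octet 8. A round-trip sketch (illustrative Python with hypothetical helper names, not the patch's C code):

```python
def pack_counter(counter: int) -> bytearray:
    """Place an 18-bit counter into UUID octets 6..8, around the version
    (high nibble of octet 6) and variant (top two bits of octet 8)."""
    assert 0 <= counter <= 0x3FFFF
    u = bytearray(16)
    u[6] = 0x70 | ((counter >> 14) & 0x0F)  # version 7 | top 4 counter bits
    u[7] = (counter >> 6) & 0xFF            # middle 8 counter bits
    u[8] = 0x80 | (counter & 0x3F)          # variant 10 | low 6 counter bits
    return u

def unpack_counter(u: bytes) -> int:
    """Recover the counter: 4 + 8 + 6 = 18 bits, not 12."""
    return ((u[6] & 0x0F) << 14) | (u[7] << 6) | (u[8] & 0x3F)
```

The version and variant bits sit in the same octets as the counter but in disjoint bit positions, so nothing is overwritten; only the top one of the 18 counter bits is the zero-initialized rollover guard.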
{
"msg_contents": "Sorry for this long reply. I was looking on refactoring around pg_strong_random() and could not decide what to do. Finally, I decided to post at least something.\n\n> On 22 Mar 2024, at 19:15, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n> I have been studying the uuidv() function.\n> \n> I find this code extremely hard to follow.\n> \n> We don't need to copy all that documentation from the RFC 4122bis document. People can read that themselves. What I would like to see is easy to find information what from there we are implementing. Like,\n> \n> - UUID version 7\n> - fixed-length dedicated counter\n> - counter is 18 bits\n> - 4 bits are initialized as zero\n\nI've removed table taken from RFC.\n\n> That's more or less all I would need to know what is going on.\n> \n> That said, I don't understand why you say it's an 18 bit counter, when you overwrite 6 bits with variant and version. Then it's just a 12 bit counter? Which is the size of the rand_a field, so that kind of makes sense. But 12 bits is the recommended minimum, and (in this patch) we don't use sub-millisecond timestamp precision, so we should probably use more than the minimum?\n\n\nNo, we use 4 bits in data[6], 8 bits in data[7], and 6 bits data[8]. It's 18 total. Essentially, we use both partial bytes and one whole byte between.\nThere was a bug - we used 1 extra byte of random numbers that was not necessary, I think that's what lead you to think that we use 12-bit counter.\n\n> Also, you are initializing 4 bits (I think?) to zero to guard against counter rollovers (so it's really just an 8 bit counter?). 
But nothing checks against such rollovers, so I don't understand the use of that.\n\nNo, there's only one guard rollover bit.\nHere: uuid->data[6] = (uuid->data[6] & 0xf7);\nBits that are called \"guard bits\" do not guard anything; they just ensure counter capacity when it is initialized.\nRollover is carried into the time tick here: \n\t++sequence_counter;\n if (sequence_counter > 0x3ffff)\n {\n /* We only have 18-bit counter */\n sequence_counter = 0;\n previous_timestamp++;\n }\n\nI think we might use 10 bits of microseconds and have 8 bits of a counter. The effect of the counter won't change much. But I'm not sure if this is allowed per the RFC.\nIf the time source is coarse-grained, it still acts like a random initializer. And when it is precise, time is a \"natural\" source of entropy.\n\n\n> The code could be organized better. In the not-increment_counter case, you could use two separate pg_strong_random calls: One to initialize rand_b, starting at &uuid->data[8], and one to initialize the counter. Then the former could be shared between the two branches, and the code to assign the sequence_counter to the uuid fields could also be shared.\n\nA call to pg_strong_random() is very expensive in builds without SSL (and even with SSL too). If we could amortize random number generation with small buffers, that would save a lot of time (see v8-0002-Buffer-random-numbers.patch upthread). Or, perhaps, we can ignore the cost of two pg_strong_random() calls.\n\n> \n> I would also prefer if the normal case (not-increment_counter) were the first if branch.\n\nDone.\n\n> Some other notes on your patch:\n> \n> - Your rebase duplicated the documentation of uuid_extract_timestamp and uuid_extract_version.\n> \n> - PostgreSQL code uses uint64 etc. 
instead of uint64_t etc.\n> \n> - It seems the added includes\n> \n> #include \"access/xlog.h\"\n> #include \"utils/builtins.h\"\n> #include \"utils/datetime.h\"\n> \n> are not needed.\n> \n> - The static variables sequence_counter and previous_timestamp could be kept inside the uuidv7() function.\n\nFixed.\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 26 Mar 2024 22:26:14 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On 26.03.24 18:26, Andrey M. Borodin wrote:\n>> Also, you are initializing 4 bits (I think?) to zero to guard against counter rollovers (so it's really just an 8 bit counter?). But nothing checks against such rollovers, so I don't understand the use of that.\n> No, there's only one guard rollover bit.\n> Here: uuid->data[6] = (uuid->data[6] & 0xf7);\n> Bits that are called \"guard bits\" do not guard anything, they just ensure counter capacity when it is initialized.\n\nUh, I guess I don't understand this at all. I tried to dig up some \ninformation about this, but didn't find anything. What exactly is the \nmechanism of these \"counter rollover guards\"? If they don't guard \nanything, what are they supposed to accomplish?\n\n\n\n",
"msg_date": "Thu, 4 Apr 2024 15:45:21 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 4 Apr 2024, at 18:45, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n> On 26.03.24 18:26, Andrey M. Borodin wrote:\n>>> Also, you are initializing 4 bits (I think?) to zero to guard against counter rollovers (so it's really just an 8 bit counter?). But nothing checks against such rollovers, so I don't understand the use of that.\n>> No, there's only one guard rollover bit.\n>> Here: uuid->data[6] = (uuid->data[6] & 0xf7);\n>> Bits that are called \"guard bits\" do not guard anything, they just ensure counter capacity when it is initialized.\n> \n> Uh, I guess I don't understand this at all. I tried to dig up some information about this, but didn't find anything. What exactly is the mechanism of these \"counter rollover guards\"? If they don't guard anything, what are they supposed to accomplish?\n> \n\nMy understanding of guard bits is the following: on every UUID generation, when time is advancing, counter bits are initialized with random numbers, except the guard bits. Guard bits are always initialized with zeroes.\n\nLet's consider we have a 1-byte counter with 4 guard bits and 4 normal bits.\nIf we generate some UUIDs at the very same millisecond we might have the following counter values:\n\n0C <--- lower nibble is initialized with the random 4-bit value C.\n0D\n0E\n0F\n10\n11\n12\n\nIf we had no guard bits, we might get a random number that is immediately at the end of the range of allowed values:\n\nFE <--- first UUID at a given millisecond\nFF\n00 <--- rollover to the next millisecond\n01\n\n\nIf we have 1 guard bit and 7 normal bits we get at worst 128 values before rollover to the next millisecond.\nIf we have 2 guard bits and 6 normal bits this guarantee is extended to 192.\n3 guard bits and 5 normal bits guarantee a capacity of 224.\nBut the usefulness of each additional guard bit decreases, so I think there is a point in having only one.\n\nThat's my understanding of guard bits in the counter. 
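In C, the scheme could be sketched like this (an illustrative simulation with invented names, not the patch's actual code):

```c
#include <stdint.h>

#define COUNTER_BITS 18
#define COUNTER_MAX  ((1u << COUNTER_BITS) - 1)   /* 0x3ffff */
#define GUARD_MASK   (1u << (COUNTER_BITS - 1))   /* most significant counter bit */

typedef struct
{
	uint32_t	counter;
	uint64_t	timestamp_ms;
} uuidv7_state;

/*
 * New millisecond tick: seed the counter from 18 random bits, but clear
 * the guard bit, leaving at least 2^17 increments of headroom before a
 * rollover can occur.
 */
void
tick(uuidv7_state *s, uint64_t now_ms, uint32_t random18)
{
	s->timestamp_ms = now_ms;
	s->counter = (random18 & COUNTER_MAX) & ~GUARD_MASK;
}

/* Same millisecond: increment; on overflow, borrow one millisecond. */
void
increment(uuidv7_state *s)
{
	if (++s->counter > COUNTER_MAX)
	{
		s->counter = 0;
		s->timestamp_ms++;		/* rollover carried into the time field */
	}
}
```

Note that the single guard bit only bounds the worst case; it does not prevent rollover, which is instead carried into the timestamp, as in the snippet quoted earlier.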
Please correct me if I'm wrong.\n\n\nAt this point we can skip the counter/microseconds entirely and just fill everything after unix_ts_ms with randomness. It's still a valid UUIDv7, exhibiting much more data locality than UUIDv4. We can adjust these sortability measures later.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 4 Apr 2024 23:12:10 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "For every complex problem there is an answer that is clear, simple, and wrong. Since the RFC allows microsecond timestamp granularity, the first thing that comes to everyone's mind is to insert microsecond granularity into UUIDv7. And if the RFC allowed nanosecond timestamp granularity, then they would try to insert nanosecond granularity into UUIDv7.\nBut I am categorically against abandoning the counter under pressure from the unfounded proposal to replace the counter with microsecond granularity.\n1) The RFC specifies millisecond timestamp granularity by default.\n2) All advanced UUIDv7 implementations include a counter:\n• for JavaScript https://www.npmjs.com/package/uuidv7\n• for Rust https://crates.io/crates/uuid7\n• for Go (Golang) https://pkg.go.dev/github.com/gofrs/uuid#NewV7\n• for Python https://github.com/oittaa/uuid6-python\n3) The theoretical performance of generating UUIDv7 without loss of monotonicity for microsecond granularity is only 1000 UUIDv7 per millisecond. This is very low and insufficient generation performance! But the actual generation performance is even worse, since the generation demand is unevenly distributed within a millisecond. Therefore, a UUIDv7 will not be generated every microsecond.\nFor a counter 18 bits long, with the most significant bit initialized to zero and the remaining bits initialized to a random number, the actual performance of generating UUIDv7 without loss of monotonicity is between 2^17 = 131072 UUIDv7 per millisecond (if the random number happens to be all ones) and 2^18 = 262144 UUIDv7 per millisecond (if the random number happens to be all zeros). This is more than enough.\n4) A microsecond timestamp fraction subtracts 10 bits from random data, which increases the risk of collision. 
In the counter, almost all bits are initialized with a random number, which reduces the risk of collision.\n\n\nThe only reasonable use of microsecond granularity is when writing to a database table in parallel. However, monotonicity in this case can be ensured in another way, namely a single UUIDv7 generator per database table, similar to SERIAL (https://postgrespro.com/docs/postgresql/16/datatype-numeric#DATATYPE-SERIAL) in PostgreSQL.\n\nBest regards,\nSergey Prokhorenko\nsergeyprokhorenko@yahoo.com.au \n\n On Thursday, 4 April 2024 at 09:12:17 pm GMT+3, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote: \n \n \n...\n\nAt this point we can skip the counter/microseconds entirely and just fill everything after unix_ts_ms with randomness. It's still a valid UUIDv7, exhibiting much more data locality than UUIDv4. We can adjust these sortability measures later.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Sat, 6 Apr 2024 21:59:38 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 12 Mar 2024, at 20:41, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> \n> if e.g.\n> the RFC got approved 2 weeks after Postgres its feature freeze\n\nJelte, you seem to be the visionary! I would consider participating in lotteries or betting.\nThe new UUID spec has been assigned RFC number 9562; it was approved by the RFC editors and is now in the AUTH48 state. This means that after final approval by the authors the RFC will be published imminently. Most probably, this will happen circa 2 weeks after feature freeze :)\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.rfc-editor.org/auth48/rfc9562\n\n",
"msg_date": "Sat, 13 Apr 2024 11:58:07 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "I think that for the sake of such an epoch-making thing as UUIDv7 it would be worth slightly unfreezing this feature freeze.\n\nBest regards,\n\nSergey Prokhorenko\nsergeyprokhorenko@yahoo.com.au \n\n On Saturday, 13 April 2024 at 09:58:29 am GMT+3, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote: \n \n \n\n> On 12 Mar 2024, at 20:41, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> \n> if e.g.\n> the RFC got approved 2 weeks after Postgres its feature freeze\n\nJelte, you seem to be the visionary! I would consider participating in lotteries or betting.\nThe new UUID spec has been assigned RFC number 9562; it was approved by the RFC editors and is now in the AUTH48 state. This means that after final approval by the authors the RFC will be published imminently. Most probably, this will happen circa 2 weeks after feature freeze :)\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.rfc-editor.org/auth48/rfc9562",
"msg_date": "Sat, 13 Apr 2024 19:07:34 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "On Sat, Apr 13, 2024 at 07:07:34PM +0000, Sergey Prokhorenko wrote:\n> I think that for the sake of such an epoch-making thing as UUIDv7 it\n> would be worth slightly unfreezing this feature freeze.\n\nA feature freeze is here to freeze things in place. This comes up\nevery year, and that won't happen.\n\n> New UUID is assigned RFC number 9562, it was aproved by RFC editors\n> and is now in AUTH48 state. This means after final approval by\n> authors RFC will be imminently publicised. Most probably, this will\n> happen circa 2 weeks after feature freeze :)\n> \n> [0] https://www.rfc-editor.org/auth48/rfc9562 \n\nWell, that's life. It looks like this is waiting for some final\napproval, which may take some more time. I have no idea how long this\nusually takes. \n--\nMichael",
"msg_date": "Mon, 15 Apr 2024 15:12:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n> On 13 Apr 2024, at 11:58, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> New UUID is assigned RFC number 9562, it was approved by RFC editors and is now in AUTH48 state.\n\nRFC 9562 is now in the AUTH48-Done state: it was approved by the authors and editor, and should now be published.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 3 May 2024 11:18:19 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 3 May 2024, at 11:18, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n\nHere's the documentation from ClickHouse [0] for their implementation. It's identical to the patch provided in this thread, with a few notable exceptions:\n\n1. The counter is 42 bits, not 18. The counter has no guard bits; every bit is initialized with a random number on time ticks.\n2. By default the counter is shared between threads. The alternative function generateUUIDv7ThreadMonotonic() provides a thread-local counter.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://clickhouse.com/docs/en/sql-reference/functions/uuid-functions#generateUUIDv7\n\n",
"msg_date": "Sat, 4 May 2024 11:09:04 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "\n\n> On 3 May 2024, at 11:18, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> RFC 9562 is not in AUTH48-Done state, it was approved by authors and editor, and now should be published.\n\nIt's RFC now.\nhttps://datatracker.ietf.org/doc/rfc9562/\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 8 May 2024 20:37:11 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 8 May 2024, at 18:37, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> It's RFC now.\n\nPFA a version with references to the RFC instead of drafts.\nIn a nearby thread [0] we found out that most systems have enough precision to fill an additional 12 bits of sub-millisecond information. So I switched the implementation to this method.\nWe have a portable gettimeofday(), but unfortunately it gives only 10 bits of sub-millisecond information. So I created a portable get_real_time_ns() for this purpose: it reads clock_gettime() on non-Windows platforms and GetSystemTimePreciseAsFileTime() on Windows.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/be0339cc-1ae1-4892-9445-8e6d8995a44d%40eisentraut.org",
"msg_date": "Sat, 20 Jul 2024 14:46:23 +0300",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "Dear Colleagues,\n\nAlthough the uuidv7(timestamp) function clearly contradicts RFC 9562, the uuidv7(timestamp_offset) function is fully compliant with RFC 9562 and is absolutely necessary.\nHere is a quote from RFC 9562 (Universally Unique IDentifiers (UUIDs)) to support this statement:\n\n\"Altering, Fuzzing, or Smearing:\n\nImplementations MAY alter the actual timestamp. Some examples include security considerations around providing a real-clock value within a UUID to 1) correct inaccurate clocks, 2) handle leap seconds, or 3) obtain a millisecond value by dividing by 1024 (or some other value) for performance reasons (instead of dividing a number of microseconds by 1000). This specification makes no requirement or guarantee about how close the clock value needs to be to the actual time.\"\n\nIt’s written clumsily, of course, but the intention of the authors of RFC 9562 is completely clear: the current timestamp can be changed by any amount and for any reason, including security or performance reasons. The wording provides only a few examples, the list of which is certainly not exhaustive.\n\nThe motives of the authors of RFC 9562 are also clear. The timestamp is needed only to generate monotonically increasing UUIDv7. The timestamp should not be used as a source of data about the time the record was created (this is explicitly stated in section 6.12, Opacity). Therefore, the actual timestamp can and should be changed if necessary.\n\nWhy then does RFC 9562 contain wording about the need to use a \"Unix Epoch timestamp\"? First, the authors of RFC 9562 wanted to get away from using the Gregorian calendar, which required a timestamp that was too long. Second, RFC 9562 prohibits inserting into UUIDv7 a completely arbitrary date and time value that does not increase with the passage of real time. And this is correct, since in this case the generated UUIDv7 would not be monotonically increasing. Thirdly, on almost all computing platforms there is a convenient source of a \"Unix Epoch timestamp\".\n\nWhy does the uuidv7() function need the optional formal parameter timestamp_offset? This question is best answered by a quote from https://lu.sagebl.eu/notes/maybe-we-dont-need-uuidv7 :\n\n\"Leaking information\n\nUUIDv4 does not leak information assuming a proper implementation. But, UUIDv7 in fact does: the timestamp of the server is embedded into the ID. From a business point of view it discloses information about resource creation time. It may not be a problem depending on the context. Current RFC draft allows implementation to tweak timestamps a little to enforce a strict increasing order between two generations and to alleviate some security concerns.\"\n\nThere is a lot of hate on the internet about \"UUIDv7 should not be used because it discloses the date and time the record was created.\" If there were a ban on changing the actual timestamp, this would prevent the use of UUIDv7 in mission-critical databases, and would generally lead to a decrease in the popularity of UUIDv7.\n\nThe implementation details of timestamp_offset are, of course, up to the developer. But I would suggest two features:\n\n1. If applying timestamp_offset takes the timestamp beyond the permissible interval, the timestamp_offset value must be reset to zero.\n2. The data type for timestamp_offset should be the developer-friendly interval type (https://postgrespro.ru/docs/postgresql/16/datatype-datetime?lang=en#DATATYPE-INTERVAL-INPUT), which allows you to enter the argument value using the words microsecond, millisecond, second, minute, hour, day, week, month, year, decade, century, millennium.\n\nI really hope that timestamp_offset will be used in the uuidv7() function for PostgreSQL.\n\nSergey Prokhorenko\nsergeyprokhorenko@yahoo.com.au",
"msg_date": "Tue, 23 Jul 2024 23:09:48 +0000 (UTC)",
"msg_from": "Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 24 Jul 2024, at 04:09, Sergey Prokhorenko <sergeyprokhorenko@yahoo.com.au> wrote:\n> \n> Implementations MAY alter the actual timestamp.\n\nHmm… looks like we slightly misinterpreted the words about the clock source.\nWell, that’s great; let’s get the offset back.\nPFA a version accepting an offset interval.\nIt works like this:\npostgres=# select uuidv7(interval '-2 months');\n 018fc02f-0996-7136-aeb4-8936b5a516a1\n\n\npostgres=# select uuid_extract_timestamp(uuidv7(interval '-2 months'));\n 2024-05-28 22:11:15.71+05\n\nWhat do you think?\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 28 Jul 2024 23:44:22 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
},
{
"msg_contents": "> On 28 Jul 2024, at 23:44, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> PFA version accepting offset interval.\n\nThere was a bug: when time was not moving forward, I was advancing the used time by a nanosecond instead of by 1/4096 of a millisecond.\nV27 fixes that.\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 4 Aug 2024 15:50:37 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: UUID v7"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile working on sort optimization for window functions, it was seen that performance of sort where all tuples are in memory was bad when the number of tuples was very large [1].\n\nE.g.: work_mem = 4 GB, sort on 4 int columns on a table having 10 million tuples.\n\nIssues we saw were as follows:\n\n1. The comparetup function re-compares the first key again in case of a tie-break.\n\n2. Frequent cache misses\n\nIssue #1 is being looked at in a separate patch. I am currently looking at #2.\n\nA possible solution was to batch tuples into groups (which can fit into L3 cache) before pushing them to the sort function.\n\nAfter looking at different papers on this (multi-Quicksort, memory-tuned quicksort, Samplesort and various distributed sorts), although they look promising (especially samplesort), I would like to get more input, as the changes look a bit too steep and may or may not be in scope for solving the actual problem at hand.\n\nPlease let me know your opinions: do we really need to re-look at quicksort for this use-case, or can we perform optimization without major changes in the core sorting algorithm? Are we open to trying new algorithms for sort?\n\nAny suggestions to narrow down the search space for this problem are welcome.\n\n[1] https://www.postgresql.org/message-id/CAApHDvqh+qOHk4sbvvy=Qr2NjPqAAVYf82oXY0g=Z2hRpC2Vmg@mail.gmail.com\n\nThanks,\n\nAnkit\n\n",
"msg_date": "Sat, 11 Feb 2023 17:49:02 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Sort optimizations: Making in-memory sort cache-aware"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-11 17:49:02 +0530, Ankit Kumar Pandey wrote:\n> 2. Frequent cache misses\n>\n> Issue #1 is being looked in separate patch. I am currently looking at #2.\n>\n> Possible solution was to batch tuples into groups (which can fit into L3\n> cache) before pushing them to sort function.\n>\n> After looking at different papers on this (multi-Quicksort, memory-tuned\n> quicksort, Samplesort and various distributed sorts), although they look\n> promising (especially samplesort), I would like to get more inputs as\n> changes look bit too steep and may or may not be in of scope of solving\n> actual problem in hand.\n>\n> Please let me know your opinions, do we really need to re-look at quicksort\n> for this use-case or we can perform optimization without major change in\n> core sorting algorithm? Are we are open for trying new algorithms for sort?\n\nI think it'll require some experimentation to know what we can and should\ndo. Clearly we're not going to do anything fundamental if the gains are a few\npercent. But it's not hard to imagine that the gains will be substantially\nlarger.\n\n\nI believe that a significant part of the reason we have low cache hit ratios\nonce the input gets larger, is that we kind of break the fundamental benefit\nof qsort:\n\nThe reason quicksort is a good sorting algorithm, despite plenty of downsides,\nis that it has pretty decent locality, due to its divide and conquer\napproach. However, tuplesort.c completely breaks that for > 1 column\nsorts. While spatial locality for accesses to the ->memtuples array is decent\nduring sorting, due to qsort's subdividing of the problem, the locality for\naccess to the tuples is *awful*.\n\nThe memtuples array is reordered while sorting, but the tuples themselves\naren't. 
Unless the input data is vaguely presorted, the access pattern for the\ntuples has practically zero locality.\n\nThe only reason that doesn't completely kill us is that SortTuple contains\ndatum1 inline and that abbreviated keys reduce the cost of by-reference datums\nin the first column.\n\n\nThere are things we could do to improve upon this that don't require swapping\nout our sorting implementation wholesale.\n\n\nOne idea is to keep track of the distinctness of the first column sorted and\nto behave differently if it's significantly lower than the number of to be\nsorted tuples. E.g. by doing a first sort solely on the first column, then\nreordering the MinimalTuples in memory, and then continuing normally.\n\nThere are two main problems with that idea:\n1) It's hard to re-order the tuples in memory, without needing substantial\n amounts of additional memory\n2) If the second column also is not very distinct, it doesn't buy you much, if\n anything.\n\nBut it might provide sufficient benefits regardless. And a naive version,\nrequiring additional memory, should be quick to hack up.\n\n\nI have *not* looked at a whole lot of papers on cache-optimized sorts, and the\nlittle I did was not recently. Partially because I am not sure that they are\nthat applicable to our scenarios: Most sorting papers don't discuss\nvariable-width data, nor a substantial amount of cache-polluting work while\ngathering the data that needs to be sorted.\n\nI think:\n\n> Possible solution was to batch tuples into groups (which can fit into L3\n> cache) before pushing them to sort function.\n\nis the most general solution to the issue outlined above. 
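As a rough, self-contained illustration of that batch-then-merge idea (a toy simulation with invented names, not tuplesort.c code or its API):

```c
#include <stdlib.h>

/*
 * Toy illustration: sort fixed-size blocks independently -- each small
 * enough to stay cache-resident -- then merge the presorted blocks by
 * repeatedly taking the smallest head element among them.
 */
#define BLOCK 4					/* stand-in for an L3-sized batch */

static int
cmp_int(const void *a, const void *b)
{
	int			x = *(const int *) a;
	int			y = *(const int *) b;

	return (x > y) - (x < y);
}

void
blockwise_sort(int *a, int n, int *out)
{
	int			nblocks = (n + BLOCK - 1) / BLOCK;
	int		   *pos = calloc(nblocks, sizeof(int));
	int			i,
				k;

	/* Phase 1: sort each block in place. */
	for (i = 0; i < n; i += BLOCK)
		qsort(a + i, (n - i < BLOCK) ? n - i : BLOCK, sizeof(int), cmp_int);

	/*
	 * Phase 2: take the smallest head among blocks. A linear scan keeps the
	 * sketch short; with many blocks a real implementation would keep the
	 * block heads in a binary heap.
	 */
	for (k = 0; k < n; k++)
	{
		int			best = -1,
					b;

		for (b = 0; b < nblocks; b++)
		{
			int			start = b * BLOCK;
			int			len = (n - start < BLOCK) ? n - start : BLOCK;

			if (pos[b] < len &&
				(best < 0 || a[start + pos[b]] < a[best * BLOCK + pos[best]]))
				best = b;
		}
		out[k] = a[best * BLOCK + pos[best]++];
	}
	free(pos);
}
```

Phase 1 is where the cache benefit would come from; phase 2 corresponds to the heapsort-between-presorted-blocks idea discussed in this thread.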
I wouldn't try to\nimplement this via a full new sorting algorithm though.\n\nMy suggestion would be to collect a roughly ~L3 sized amount of tuples, sort\njust those using the existing code, allocate new memory for all the\ncorresponding MinimalTuples in one allocation, and copy the MinimalTuples into\nthat, obviously in ->memtuples order.\n\nEven if we just use the existing code for the overall sort after that, I'd\nexpect that to yield noticeable benefits.\n\nIt's very likely we can do better than just doing a plain sort of everything\nafter that.\n\nYou effectively end up with a bounded number of pre-sorted blocks, so the most\nobvious thing to try is to build a heap of those blocks and effectively do a\nheapsort between the presorted blocks.\n\n\n\nA related, but separate, improvement is to reduce / remove the memory\nallocation overhead. The switch to GenerationContext helped some, but still\nleaves a bunch of overhead. And it's not used for bounded sorts right now.\n\nWe don't palloc/pfree individual tuples during a normal sort, but we do have\nsome, for bounded sorts. I think with a reasonable amount of work we could\navoid that for all tuples in ->tuplecontext. And switch to a trivial bump\nallocator, getting rid of all allocator overhead.\n\nThe biggest source of individual pfree()s in the bounded case is that we\nunconditionally copy the tuple into base->tuplecontext during puttuple. Even\nthough quite likely we'll immediately free it in the \"case TSS_BOUNDED\" block.\n\nWe could instead pre-check that the tuple won't immediately be discarded,\nbefore copying it into tuplecontext. Only in the TSS_BOUNDED case, of\ncourse.\n\nI think we also can replace the individual freeing of tuples in\ntuplesort_heap_replace_top(), by allowing a number of dead tuples to\naccumulate (up to work_mem/2 maybe), and then copying the still living tuples\ninto a new memory context, freeing the old one.\n\nWhile that doesn't sound cheap, in bounded sorts the number of tuples commonly\nis quite limited, the pre-check before copying the tuple will prevent this\nfrom occurring too often, the copy will result in higher locality, and, most\nimportantly, the reduced palloc() overhead (~25% or so with aset.c) will\nresult in a considerably higher cache hit ratio / lower memory usage.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 11 Feb 2023 12:29:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Sort optimizations: Making in-memory sort cache-aware"
},
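The run-then-merge outline from the message above maps onto a compact experiment outside PostgreSQL. The sketch below is illustrative only: the fixed `block_size` stands in for an L3-sized tuple budget, Python lists stand in for the memtuples array, and `heapq.merge` plays the role of the heap built over the presorted blocks.

```python
import heapq

def blocked_sort(items, block_size=4096):
    """Sort cache-sized runs independently, then heap-merge the sorted runs.

    Each run is sorted while its data is still (hypothetically) cache-resident;
    heapq.merge then keeps a small heap over the run heads, which is the
    "heapsort between the presorted blocks" idea from the message above.
    """
    runs = [sorted(items[i:i + block_size])
            for i in range(0, len(items), block_size)]
    return list(heapq.merge(*runs))
```

Merging k presorted runs touches each run strictly front to back, so the per-run access pattern stays sequential even though the overall input order was arbitrary.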
{
"msg_contents": "\n> On 12/02/23 01:59, Andres Freund wrote:\n\n> However, tuplesort.c completely breaks that for > 1 column\n> sorts. While spatial locality for accesses to the ->memtuples array is decent\n> during sorting, due to qsort's subdividing of the problem, the locality for\n> access to the tuples is *awful*.\n\n> The memtuples array is reordered while sorting, but the tuples themselves\n> aren't. Unless the input data is vaguely presorted, the access pattern for the\n> tuples has practically zero locality.\n\nThis probably explains the issue.\n\n\n> One idea is to keep track of the distinctness of the first column sorted and\n> to behave differently if it's significantly lower than the number of to be\n> sorted tuples. E.g. by doing a first sort solely on the first column, then\n> reorder the MinimalTuples in memory, and then continue normally.\n\n> There's two main problems with that idea:\n> 1) It's hard to re-order the tuples in memory, without needing substantial\n> amounts of additional memory\n> 2) If the second column also is not very distinct, it doesn't buy you much, if\n> anything.\n\n> But it might provide sufficient benefits regardless. 
And a naive version,\n> requiring additional memory, should be quick to hack up.\n\nI get the second point (to reorder MinimalTuples by sorting on the first column) but\nI am not sure how we can keep `track of the distinctness of the first column`?\n\n\n> Most sorting papers don't discuss\n> variable-width data, nor a substantial amount of cache-polluting work while\n> gathering the data that needs to be sorted.\n\nI definitely agree with this.\n\n\n> My suggestion would be to collect a roughly ~L3 sized amount of tuples, sort\n> just those using the existing code, allocate new memory for all the\n> corresponding MinimalTuples in one allocation, and copy the MinimalTuples into\n> that, obviously in ->memtuples order.\n\nThis should be an easy hack, and we can easily profile the benefits from this.\n\n> It's very likely we can do better than just doing a plain sort of everything\n> after that.\n> You effectively end up with a bounded number of pre-sorted blocks, so the most\n> obvious thing to try is to build a heap of those blocks and effectively do a \n> heapsort between the presorted blocks.\n\nThis is very interesting. It is actually what a few papers had suggested: to\nsort in blocks and then merge (while sorting) the presorted blocks.\nI am a bit fuzzy on the implementation of this (if it is the same as what I am thinking)\nbut this is definitely what I was looking for.\n\n\n> A related, but separate, improvement is to reduce / remove the memory\n> allocation overhead. The switch to GenerationContext helped some, but still\n> leaves a bunch of overhead. And it's not used for bounded sorts right now.\n> We don't palloc/pfree individual tuples during a normal sorts, but we do have\n> some, for bounded sorts. I think with a reasonable amount of work we could\n> avoid that for all tuples in ->tuplecontext. 
And switch to a trivial bump\n> allocator, getting rid of all allocator overhead.\n\n> The biggest source of individual pfree()s in the bounded case is that we\n> unconditionally copy the tuple into base->tuplecontext during puttuple. Even\n> though quite likely we'll immediately free it in the \"case TSS_BOUNDED\" block.\n\n> We could instead pre-check that the tuple won't immediately be discarded,\n> before copying it into tuplecontext. Only in the TSS_BOUNDED, case, of\n> course.\n\nThis looks doable; I will try this.\n\n> I think we also can replace the individual freeing of tuples in\n> tuplesort_heap_replace_top(), by allowing a number of dead tuples to\n> accumulate (up to work_mem/2 maybe), and then copying the still living tuples\n> into new memory context, freeing the old one.\n\n> While that doesn't sound cheap, in bounded sorts the number of tuples commonly\n> is quite limited, the pre-check before copying the tuple will prevent this\n> from occurring too often, the copy will result in higher locality, and, most\n> importantly, the reduced palloc() overhead (~25% or so with aset.c) will\n> result in a considerably higher cache hit ratio / lower memory usage.\n\nI would try this as well.\n\n> There are things we could do to improve upon this that don't require swapping\n> out our sorting implementation wholesale.\n\nThanks a lot Andres, these are a lot of pointers to work on (without a major overhaul of the sort\nimplementation and with a potentially good amount of improvements). I will give these a try\nand see if I can get some performance gains.\n\nThanks,\nAnkit\n\n\n\n",
"msg_date": "Sun, 12 Feb 2023 13:43:27 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Sort optimizations: Making in-memory sort cache-aware"
},
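The TSS_BOUNDED pre-check discussed in this exchange (don't copy a tuple that cannot possibly enter the bounded heap) can be modeled in a few lines. This is a hedged sketch, not tuplesort code: `copies` stands in for tuples materialized in tuplecontext, and a negated min-heap stands in for the bounded max-heap.

```python
import heapq

def bounded_top_n(stream, n):
    """Track the n smallest values, skipping the 'copy' for obvious losers.

    Once the heap is full, a value no smaller than the current worst keeper
    is discarded before being materialized, mirroring the proposed pre-check
    ahead of the copy into base->tuplecontext.
    """
    heap = []    # negated values: -heap[0] is the largest kept value
    copies = 0   # how many values were actually materialized
    for v in stream:
        if len(heap) < n:
            heapq.heappush(heap, -v)
            copies += 1
        elif v < -heap[0]:
            heapq.heapreplace(heap, -v)  # displaces the current worst keeper
            copies += 1
        # else: rejected up front; no copy, and no later pfree() needed
    return sorted(-x for x in heap), copies
```

On a stream where most inputs lose to the current heap top, the pre-check avoids the vast majority of copies, which is the palloc/pfree traffic the message proposes to eliminate.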
{
"msg_contents": "Hi Andres,\n\n\nI took a stab at naive version of this but been stuck for sometime now.\n\nI have added logic to sort on first column at first pass,\n\nrealloc all tuples and do full sort at second pass, but I am not seeing\n\nany benefit (it is actually regressing) at all.\n\nTried doing above both at bulk and at chunks of data.\n\n > You effectively end up with a bounded number of pre-sorted blocks, so \nthe most\n > obvious thing to try is to build a heap of those blocks and \neffectively do a\n > heapsort between the presorted blocks.\n\nI am not very clear about implementation for this. How can we do \nheapsort between\n\n the presorted blocks? Do we keep changing state->bound=i, i+n, i+2n \nsomething like\n\nthis and keep calling make_bounded_heap/sort_bounded_heap?\n\n > A related, but separate, improvement is to reduce / remove the memory\n > allocation overhead.\n\nThis is still pending from my side.\n\n\nI have attached some benchmarking results with script and POC\n\npatches (which includes GUC switch to enable optimization for ease of \ntesting) for the same.\n\nTested on WORK_MEM=3 GB for 1 and 10 Million rows data.\n\nPlease let me know things which I can fix and re-attempt.\n\n\nThanks,\n\nAnkit",
"msg_date": "Fri, 3 Mar 2023 00:02:21 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Sort optimizations: Making in-memory sort cache-aware"
}
] |
[
{
"msg_contents": "Hi,\n\nA common annoyance when writing ad-hoc analytics queries is column naming once\naggregates are used.\n\nUseful column names:\nSELECT reads, writes FROM pg_stat_io;\ncolumn names: reads, writes\n\nNot useful column names:\nSELECT SUM(reads), SUM(writes) FROM pg_stat_io;\ncolumn names: sum, sum\n\nSo i often end up manually writing:\nSELECT SUM(reads) AS sum_reads, SUM(writes) AS sum_writes, ... FROM pg_stat_io;\n\n\nOf course we can't infer useful column names for everything, but for something\nlike this, it should't be too hard to do better. E.g. by combining the\nfunction name with the column name in the argument, if a single plain column\nis the argument.\n\nI think on a green field it'd be clearly better to do something like the\nabove. What does give me pause is that it seems quite likely to break\nexisting queries, and to a lesser degree, might break applications relying on\ninferred column names\n\nCan anybody think of a good way out of that? It's not like that problem is\ngoing to go away at some point...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 11 Feb 2023 11:24:20 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Improving inferred query column names"
},
{
"msg_contents": "That is a good idea for simple cases, I'm just curious how it would look\nlike for more complex cases (you can have all kinds of expressions as\nparameters for aggregate function calls).\nIf it works only for simple cases, I think it would be confusing and not\nvery helpful.\nWouldn't it make more sense to just deduplicate the names by adding\nnumerical postfixes, like sum_1, sum_2?\nFor backwards compatibility I guess you can have a GUC flag controlling\nthat behavior that can be set into backwards compatibility mode if required.\nThe previous functionality can be declared deprecated and removed (with the\nflag) once the current version becomes unsupported.\n(or with a different deprecation policy, I'm not sure what is the general\nrule for breaking changes and deprecation currently).\nIf there is a clearly defined deprecation policy and a backwards\ncompatibility option, it should be good, no? Just my 2 cents.\n\n-Vladimir Churyukin\n\nOn Sat, Feb 11, 2023 at 11:24 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> A common annoyance when writing ad-hoc analytics queries is column naming\n> once\n> aggregates are used.\n>\n> Useful column names:\n> SELECT reads, writes FROM pg_stat_io;\n> column names: reads, writes\n>\n> Not useful column names:\n> SELECT SUM(reads), SUM(writes) FROM pg_stat_io;\n> column names: sum, sum\n>\n> So i often end up manually writing:\n> SELECT SUM(reads) AS sum_reads, SUM(writes) AS sum_writes, ... FROM\n> pg_stat_io;\n>\n>\n> Of course we can't infer useful column names for everything, but for\n> something\n> like this, it should't be too hard to do better. E.g. by combining the\n> function name with the column name in the argument, if a single plain\n> column\n> is the argument.\n>\n> I think on a green field it'd be clearly better to do something like the\n> above. 
What does give me pause is that it seems quite likely to break\n> existing queries, and to a lesser degree, might break applications relying\n> on\n> inferred column names\n>\n> Can anybody think of a good way out of that? It's not like that problem is\n> going to go away at some point...\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n>",
"msg_date": "Sat, 11 Feb 2023 12:47:04 -0800",
"msg_from": "Vladimir Churyukin <vladimir@churyukin.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving inferred query column names"
},
{
"msg_contents": "On Sat, Feb 11, 2023 at 3:47 PM Vladimir Churyukin <vladimir@churyukin.com>\nwrote:\n\n> For backwards compatibility I guess you can have a GUC flag controlling\n> that behavior that can be set into backwards compatibility mode if required.\n> The previous functionality can be declared deprecated and removed (with\n> the flag) once the current version becomes unsupported.\n>\n\nSeems more like a per-session setting than a GUC.\n\nHere's a suggestion off the top of my head.\n\nWe create a session setting inferred_column_name_template.\n\nThe template takes a formatting directive %N which is just a counter\n\nSET inferred_column_name_template = 'col_%N'\n\n\nwhich would give you col_1, col_2, regardless of what kind of expression\nthe columns were\n\nWe could introduce another directive, %T\n\nSET inferred_column_name_template = '%T_%N'\n\n\nwhich prints the datatype short name of the column. In this case, %N would\nincrement per datatype, so text_1, integer_1, text_2, timestamptz_1, text_3\n\nGetting fancier, we could introduce something less datatype centric, %F\n\nSET inferred_column_name_template = '%F_%N'\n\n\nWhich would walk the following waterfall and stop on the first match\n\n 1. The datatype short name if the expression is explicitly casted\n(either CAST or ::)\n 2. the name of the function if the outermost expression was a function\n(aggregate, window, or scalar), so sum_1, substr_1\n 3. 'case' if the outermost expression was case\n 4. 'expr' if the expression was effectively an operator ( SELECT 3+4,\n'a' || 'b' etc)\n 5. 
the datatype short name for anything that doesn't match any of the\nprevious, and for explicit casts\n\n\nKeeping track of all the %N counters could get silly, so maybe a %P which\nis simply the numeric column position of the column, so your result set\nwould go like: id, name, col_3, last_login, col_5.\n\nWe would have to account for the case where the user left either %N or %P\nout of the template, so one of them would be an implied suffix if both were\nabsent, or we maybe go with\n\nSET inferred_column_name_prefix = '%F_';\nSET inferred_column_name_counter = 'position'; /* position, counter,\nper_type_counter */\n\nOr we just cook up a few predefined naming schemes, and let the user pick\nfrom those.\n\nOne caution I have is that I have seen several enterprise app database\ndesigns that have lots of user-customizable columns with names like\nvarchar1, numeric4, etc. Presumably the user would know their environment\nand not pick a confusing template.",
"msg_date": "Sat, 11 Feb 2023 16:51:21 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving inferred query column names"
},
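To make the template idea in the message above concrete, here is a toy expander for the hypothetical `inferred_column_name_template` setting; the directive names (%T, %P, %N) and the per-type counting rule for %N are assumptions lifted from that proposal, not an existing PostgreSQL feature.

```python
def apply_name_template(template, coltypes):
    """Expand a naming template once per output column.

    %T: the column's type name; %P: the 1-based column position;
    %N: a counter, incrementing per datatype when %T is used (as proposed),
    otherwise just the column position.
    """
    names = []
    per_type = {}  # running count of columns seen per type name
    for pos, typ in enumerate(coltypes, start=1):
        per_type[typ] = per_type.get(typ, 0) + 1
        counter = per_type[typ] if '%T' in template else pos
        names.append(template.replace('%T', typ)
                             .replace('%P', str(pos))
                             .replace('%N', str(counter)))
    return names
```

With this sketch, `'col_%N'` yields col_1, col_2, ..., while `'%T_%N'` yields text_1, integer_1, text_2 for a (text, integer, text) row, matching the behavior described above.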
{
"msg_contents": "On 11.02.23 20:24, Andres Freund wrote:\n> Not useful column names:\n> SELECT SUM(reads), SUM(writes) FROM pg_stat_io;\n> column names: sum, sum\n> \n> So i often end up manually writing:\n> SELECT SUM(reads) AS sum_reads, SUM(writes) AS sum_writes, ... FROM pg_stat_io;\n> \n> Of course we can't infer useful column names for everything, but for something\n> like this, it should't be too hard to do better. E.g. by combining the\n> function name with the column name in the argument, if a single plain column\n> is the argument.\n> \n> I think on a green field it'd be clearly better to do something like the\n> above. What does give me pause is that it seems quite likely to break\n> existing queries, and to a lesser degree, might break applications relying on\n> inferred column names\n> \n> Can anybody think of a good way out of that? It's not like that problem is\n> going to go away at some point...\n\nI think we should just do it and not care about what breaks. There has \nnever been any guarantee about these.\n\nFWIW, \"most\" other SQL implementations appear to generate column names like\n\nSELECT SUM(reads), SUM(writes) FROM pg_stat_io;\ncolumn names: \"SUM(reads)\", \"SUM(writes)\"\n\n(various capitalization of course).\n\nWe had a colleague look into this a little while ago, and it got pretty \ntedious to implement this for all the expression types. And, you know, \nthe bikeshedding ...\n\nBut I'm all in favor of improving this.\n\n\n\n",
"msg_date": "Mon, 20 Feb 2023 16:08:00 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving inferred query column names"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 8:08 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 11.02.23 20:24, Andres Freund wrote:\n> >\n> > I think on a green field it'd be clearly better to do something like the\n> > above. What does give me pause is that it seems quite likely to break\n> > existing queries, and to a lesser degree, might break applications\n> relying on\n> > inferred column names\n> >\n> > Can anybody think of a good way out of that? It's not like that problem\n> is\n> > going to go away at some point...\n>\n> I think we should just do it and not care about what breaks. There has\n> never been any guarantee about these.\n>\n>\nI'm going to toss a -1 into the ring but if this does go through a strong\nrequest that it be disabled via a GUC. The ugliness of that option is why\nwe shouldn't do this.\n\nDefacto reality is still a reality we are on the hook for.\n\nI too find the legacy design choice to be annoying but not so much that\nchanging it seems like a good idea.\n\nDavid J.\n\nOn Mon, Feb 20, 2023 at 8:08 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:On 11.02.23 20:24, Andres Freund wrote:> \n> I think on a green field it'd be clearly better to do something like the\n> above. What does give me pause is that it seems quite likely to break\n> existing queries, and to a lesser degree, might break applications relying on\n> inferred column names\n> \n> Can anybody think of a good way out of that? It's not like that problem is\n> going to go away at some point...\n\nI think we should just do it and not care about what breaks. There has \nnever been any guarantee about these.I'm going to toss a -1 into the ring but if this does go through a strong request that it be disabled via a GUC. 
The ugliness of that option is why we shouldn't do this.Defacto reality is still a reality we are on the hook for.I too find the legacy design choice to be annoying but not so much that changing it seems like a good idea.David J.",
"msg_date": "Mon, 20 Feb 2023 08:17:02 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving inferred query column names"
},
{
"msg_contents": "On 20.02.23 16:17, David G. Johnston wrote:\n> I think we should just do it and not care about what breaks. There has\n> never been any guarantee about these.\n> \n> \n> I'm going to toss a -1 into the ring but if this does go through a \n> strong request that it be disabled via a GUC. The ugliness of that \n> option is why we shouldn't do this.\n> \n> Defacto reality is still a reality we are on the hook for.\n> \n> I too find the legacy design choice to be annoying but not so much that \n> changing it seems like a good idea.\n\nWell, a small backward compatibility GUC might not be too cumbersome.\n\n\n\n",
"msg_date": "Wed, 22 Feb 2023 14:23:51 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving inferred query column names"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-20 16:08:00 +0100, Peter Eisentraut wrote:\n> On 11.02.23 20:24, Andres Freund wrote:\n> I think we should just do it and not care about what breaks. There has\n> never been any guarantee about these.\n> \n> FWIW, \"most\" other SQL implementations appear to generate column names like\n> \n> SELECT SUM(reads), SUM(writes) FROM pg_stat_io;\n> column names: \"SUM(reads)\", \"SUM(writes)\"\n\nHm, personally I don't like leaving in parens in the names, that makes it\nunnecessarily hard to reference the columns. sum_reads imo is more usable\nthan than \"SUM(reads)\".\n\n\n> We had a colleague look into this a little while ago, and it got pretty\n> tedious to implement this for all the expression types.\n\nHm, any chance that colleague could be pointed at this discussion and chime\nin? It doesn't immediately look that hard to do substantially better than\ntoday. Of course there's an approximately endless amount of effort that could\nbe poured into this, but even some fairly basic improvements seem like a big\nwin.\n\n\n> And, you know, the bikeshedding ...\n\nIndeed. I already started above :)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Feb 2023 12:38:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Improving inferred query column names"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-11 12:47:04 -0800, Vladimir Churyukin wrote:\n> That is a good idea for simple cases, I'm just curious how it would look\n> like for more complex cases (you can have all kinds of expressions as\n> parameters for aggregate function calls).\n> If it works only for simple cases, I think it would be confusing and not\n> very helpful.\n\nI don't think it needs to be perfect to be helpful.\n\n\n> Wouldn't it make more sense to just deduplicate the names by adding\n> numerical postfixes, like sum_1, sum_2?\n\nThat'd be considerably worse than what we do today imo, because any reordering\n/ added aggregate would lead to everything else changing as well.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Feb 2023 12:40:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Improving inferred query column names"
},
{
"msg_contents": "On Wed, Feb 22, 2023, 12:40 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-02-11 12:47:04 -0800, Vladimir Churyukin wrote:\n> > That is a good idea for simple cases, I'm just curious how it would look\n> > like for more complex cases (you can have all kinds of expressions as\n> > parameters for aggregate function calls).\n> > If it works only for simple cases, I think it would be confusing and not\n> > very helpful.\n>\n> I don't think it needs to be perfect to be helpful.\n>\n\n\nIt doesn't need to be perfect, but it needs to be consistent. So far you\nproposed a rule to replace () with _. What is the plan for expressions, how\nto convert them to names (with deduplication I guess?, because there could\nbe 2 similar expressions mapped to the same name potentially).\n\n\n> > Wouldn't it make more sense to just deduplicate the names by adding\n> > numerical postfixes, like sum_1, sum_2?\n>\n> That'd be considerably worse than what we do today imo, because any\n> reordering\n> / added aggregate would lead to everything else changing as well.\n>\n\n\nOk, that I kinda agree with. Not necessarily worse overall, but worse for\nsome cases. Well, the proposal above about keeping the names exactly the\nsame as the full expressions is probably the best we can do then. It will\ntake care of possible duplications and won't be position-sensitive. And\nwill be consistent. The only issue is somewhat unusual column names that\nyou will have to use quotes to refer to. 
But is that a real issue?\n\n-Vladimir Churyukin",
"msg_date": "Wed, 22 Feb 2023 13:30:45 -0800",
"msg_from": "Vladimir Churyukin <vladimir@churyukin.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving inferred query column names"
},
{
"msg_contents": "Vladimir Churyukin <vladimir@churyukin.com> writes:\n> It doesn't need to be perfect, but it needs to be consistent. So far you\n> proposed a rule to replace () with _. What is the plan for expressions, how\n> to convert them to names (with deduplication I guess?, because there could\n> be 2 similar expressions mapped to the same name potentially).\n\nI do not think we need to do anything for arbitrary expressions.\nThe proposal so far was just to handle a function call wrapped\naround something else by converting to the function name followed\nby whatever we'd emit for the something else. You cannot realistically\nhandle, say, operator expressions without emitting names that will\nrequire quoting, which doesn't seem attractive.\n\nAnd no, deduplication isn't on the table at all here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Feb 2023 16:38:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improving inferred query column names"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-22 16:38:51 -0500, Tom Lane wrote:\n> Vladimir Churyukin <vladimir@churyukin.com> writes:\n> > It doesn't need to be perfect, but it needs to be consistent. So far you\n> > proposed a rule to replace () with _. What is the plan for expressions, how\n> > to convert them to names (with deduplication I guess?, because there could\n> > be 2 similar expressions mapped to the same name potentially).\n> \n> I do not think we need to do anything for arbitrary expressions.\n\nExactly. It's not like they have a useful name today. Nor are they unique.\n\n\n> The proposal so far was just to handle a function call wrapped\n> around something else by converting to the function name followed\n> by whatever we'd emit for the something else.\n\nJust to showcase that better, what I think we're discussing is changing:\n\nSELECT sum(relpages), sum(reltuples), 1+1 FROM pg_class;\n┌──────┬────────┬──────────┐\n│ sum │ sum │ ?column? │\n├──────┼────────┼──────────┤\n│ 2774 │ 257896 │ 2 │\n└──────┴────────┴──────────┘\n(1 row)\n\nto\n\nSELECT sum(relpages), sum(reltuples), 1+1 FROM pg_class;\n┌──────────────┬───────────────┬──────────┐\n│ sum_relpages │ sum_reltuples │ ?column? │\n├──────────────┼───────────────┼──────────┤\n│ 2774 │ 257896 │ 2 │\n└──────────────┴───────────────┴──────────┘\n(1 row)\n\n\n> You cannot realistically\n> handle, say, operator expressions without emitting names that will\n> require quoting, which doesn't seem attractive.\n\nWell, it doesn't require much to be better than \"?column?\", which already\nrequires quoting...\n\nWe could just do something like printing <left>_<funcname>_<right>. So\nsomething like avg(reltuples / relpages) would end up as\navg_reltuples_float48div_relpages.\n\nWhether that's worth it, or whether column name lengths would be too painful,\nIDK.\n\n\n> And no, deduplication isn't on the table at all here.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Feb 2023 14:19:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Improving inferred query column names"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-22 16:38:51 -0500, Tom Lane wrote:\n>> The proposal so far was just to handle a function call wrapped\n>> around something else by converting to the function name followed\n>> by whatever we'd emit for the something else.\n\n> SELECT sum(relpages), sum(reltuples), 1+1 FROM pg_class;\n> ┌──────────────┬───────────────┬──────────┐\n> │ sum_relpages │ sum_reltuples │ ?column? │\n> ├──────────────┼───────────────┼──────────┤\n\nSo far so good, but what about multi-argument functions?\nDo we do \"f_x_y_z\", and truncate wherever? How well will this\nwork with nested function calls?\n\n>> You cannot realistically\n>> handle, say, operator expressions without emitting names that will\n>> require quoting, which doesn't seem attractive.\n\n> Well, it doesn't require much to be better than \"?column?\", which already\n> requires quoting...\n\nI think the point of \"?column?\" is to use something that nobody's going\nto want to reference that way, quoted or otherwise. The SQL spec says\n(in SQL:2021, it's 7.16 <query specification> syntax rule 18) that if the\ncolumn expression is anything more complex than a simple column reference\n(or SQL parameter reference, which I think we don't support) then the\ncolumn name is implementation-dependent, which is standards-ese for\n\"here be dragons\".\n\nBTW, SQL92 and SQL99 had a further constraint:\n\n c) Otherwise, the <column name> of the i-th column of the <query\n specification> is implementation-dependent and different\n from the <column name> of any column, other than itself, of\n a table referenced by any <table reference> contained in the\n SQL-statement.\n\nWe never tried to implement that literally, and now I'm glad we didn't\nbother, because recent spec versions only say \"implementation-dependent\",\nfull stop. 
In any case, the spec is clearly in the camp of \"don't depend\non these column names\".\n\n> We could just do something like printing <left>_<funcname>_<right>. So\n> something like avg(reltuples / relpages) would end up as\n> avg_reltuples_float48div_relpages.\n> Whether that's worth it, or whether column name lengths would be too painful,\n> IDK.\n\nI think you'd soon be hitting NAMEDATALEN limits ...\n\n>> And no, deduplication isn't on the table at all here.\n\n> +1\n\nI remembered while looking at the spec that duplicate column names\nin SELECT output are not only allowed but *required* by the spec.\nIf you write, say, \"SELECT 1 AS x, 2 AS x, ...\" then the column\nnames of those two columns are both \"x\", no wiggle room at all.\nSo I see little point in trying to deduplicate generated names,\neven aside from the points you made.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Feb 2023 23:03:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improving inferred query column names"
},
{
"msg_contents": "On 2/22/23 23:03, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> We could just do something like printing <left>_<funcname>_<right>. So\n>> something like avg(reltuples / relpages) would end up as\n>> avg_reltuples_float48div_relpages.\n>> Whether that's worth it, or whether column name lengths would be too painful,\n>> IDK.\n> \n> I think you'd soon be hitting NAMEDATALEN limits ...\n\n<flameproof_suit>\n\nProbably an unpalatable idea, but if we did something like \nmd5('avg(reltuples / relpages)') for the column name, it would be \n(reasonably) unique and deterministic. Not pretty, but possibly useful \nin some cases.\n\n</flameproof_suit>\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Thu, 23 Feb 2023 08:15:54 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving inferred query column names"
},
{
"msg_contents": "On 22.02.23 21:38, Andres Freund wrote:\n> On 2023-02-20 16:08:00 +0100, Peter Eisentraut wrote:\n>> On 11.02.23 20:24, Andres Freund wrote:\n>> I think we should just do it and not care about what breaks. There has\n>> never been any guarantee about these.\n>>\n>> FWIW, \"most\" other SQL implementations appear to generate column names like\n>>\n>> SELECT SUM(reads), SUM(writes) FROM pg_stat_io;\n>> column names: \"SUM(reads)\", \"SUM(writes)\"\n> Hm, personally I don't like leaving in parens in the names, that makes it\n> unnecessarily hard to reference the columns. sum_reads imo is more usable\n> than than \"SUM(reads)\".\n\nIf you want something without special characters, the example you gave \nis manageable, but what are you going to do with\n\nSELECT a, b, a * b, a / b FROM ...\n\nor\n\nSELECT a, b, SUM(a * b) FROM ...\n\nand so on. What would be the actual rule to produce the output you want?\n\nI think a question here is what \"usable\" means in this context.\n\nIf you want a name that you can refer to (in a client API, for example), \nyou should give it a name explicitly.\n\nI think the uses for the automatic names are that they look pretty and \nmeaningful in visual output (psql, pgadmin, etc.). In that context, I \nthink it is ok to use special characters without limitation, since you \nare just going to look at the thing, not type it back in.\n\n\n\n",
"msg_date": "Thu, 2 Mar 2023 15:05:51 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving inferred query column names"
}
] |
[
{
"msg_contents": "While working on 16fd03e95, I noticed that in each aggregate\ndeserialization function, in order to \"receive\" the bytea value that\nis the serialized aggregate state, appendBinaryStringInfo is used to\nappend the bytes of the bytea value onto a temporary StringInfoData.\nUsing appendBinaryStringInfo seems a bit wasteful here. We could\nreally just fake up a StringInfoData and point directly to the bytes\nof the bytea value.\n\nThe best way I could think of to do this was to invent\ninitStringInfoFromString() which initialises a StringInfoData and has\nthe ->data field point directly at the specified buffer. This will\nmean that it would be unsafe to do any appendStringInfo* operations on\nthe resulting StringInfoData as enlargeStringInfo would try to\nrepalloc the data buffer, which might not even point to a palloc'd\nstring. I thought it might be fine just to mention that in the\ncomments for the function, but we could probably do a bit better and\nset maxlen to something like -1 and Assert() we never see -1 in the\nvarious append functions. I wasn't sure it was worth it, so didn't do\nthat.\n\nI had a look around for other places that might be following the same\npattern. I only found range_recv() and XLogWalRcvProcessMsg(). I\ndidn't adjust the range_recv() one as I couldn't see how to do that\nwithout casting away a const. I did adjust the XLogWalRcvProcessMsg()\none and got rid of a global variable in the process.\n\nI've attached the benchmark results I got after testing how the\nmodification changed the performance of string_agg_deserialize().\n\nI was hoping this would have a slightly more impressive performance\nimpact, especially for string_agg() and array_agg() as the aggregate\nstates of those can be large. However, in the test I ran, there's\nonly a very slight performance gain. I may just not have found the\nbest case, however.\n\nDavid",
"msg_date": "Sun, 12 Feb 2023 18:38:36 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Making aggregate deserialization (and WAL receive) functions slightly\n faster"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> While working on 16fd03e95, I noticed that in each aggregate\n> deserialization function, in order to \"receive\" the bytea value that\n> is the serialized aggregate state, appendBinaryStringInfo is used to\n> append the bytes of the bytea value onto a temporary StringInfoData.\n> Using appendBinaryStringInfo seems a bit wasteful here. We could\n> really just fake up a StringInfoData and point directly to the bytes\n> of the bytea value.\n\nPerhaps, but ...\n\n> The best way I could think of to do this was to invent\n> initStringInfoFromString() which initialises a StringInfoData and has\n> the ->data field point directly at the specified buffer. This will\n> mean that it would be unsafe to do any appendStringInfo* operations on\n> the resulting StringInfoData as enlargeStringInfo would try to\n> repalloc the data buffer, which might not even point to a palloc'd\n> string.\n\nI find this patch horribly dangerous.\n\nIt could maybe be okay if we added the capability for StringInfoData\nto understand (and enforce) that its \"data\" buffer is read-only.\nHowever, that'd add overhead to every existing use-case.\n\n> I've attached the benchmark results I got after testing how the\n> modification changed the performance of string_agg_deserialize().\n> I was hoping this would have a slightly more impressive performance\n> impact, especially for string_agg() and array_agg() as the aggregate\n> states of those can be large. However, in the test I ran, there's\n> only a very slight performance gain. I may just not have found the\n> best case, however.\n\nI do not think we should even consider this without solid evidence\nfor *major* performance improvements. As it stands, it's a\nquintessential example of a loaded foot-gun, and it seems clear\nthat making it safe enough to use would add more overhead than\nit saves.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 Feb 2023 01:39:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Sun, 12 Feb 2023 at 19:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I find this patch horribly dangerous.\n\nI see LogicalRepApplyLoop() does something similar with a\nStringInfoData. Maybe it's just scarier having an external function in\nstringinfo.c which does this as having it increases the chances of\nsomeone using it for the wrong thing.\n\n> It could maybe be okay if we added the capability for StringInfoData\n> to understand (and enforce) that its \"data\" buffer is read-only.\n> However, that'd add overhead to every existing use-case.\n\nI'm not very excited by that. I considered just setting maxlen = -1\nin the new function and adding Asserts to check for that in each of\nthe appendStringInfo* functions. However, since the performance gains\nare not so great, I'll probably just drop the whole thing given\nthere's resistance.\n\nDavid\n\n\n",
"msg_date": "Sun, 12 Feb 2023 23:43:38 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Sun, 12 Feb 2023 at 23:43, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 12 Feb 2023 at 19:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > It could maybe be okay if we added the capability for StringInfoData\n> > to understand (and enforce) that its \"data\" buffer is read-only.\n> > However, that'd add overhead to every existing use-case.\n>\n> I'm not very excited by that. I considered just setting maxlen = -1\n> in the new function and adding Asserts to check for that in each of\n> the appendStringInfo* functions. However, since the performance gains\n> are not so great, I'll probably just drop the whole thing given\n> there's resistance.\n\nI know I said I'd drop this, but I was reminded of it again today. I\nended up adjusting the patch so that it no longer adds a helper\nfunction to stringinfo.c and instead just manually assigns the\nStringInfo.data field to point to the bytea's buffer. This follows\nwhat's done in some existing places such as\nLogicalParallelApplyLoop(), ReadArrayBinary() and record_recv() to\nname a few.\n\nI ran a fresh set of benchmarks on today's master with and without the\npatch applied. I used the same benchmark as I did in [1]. The average\nperformance increase from between 0 and 12 workers is about 6.6%.\n\nThis seems worthwhile to me. Any objections?\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvr%3De-YOigriSHHm324a40HPqcUhSp6pWWgjz5WwegR%3DcQ%40mail.gmail.com",
"msg_date": "Tue, 3 Oct 2023 18:02:10 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Tue, Oct 03, 2023 at 06:02:10PM +1300, David Rowley wrote:\n> I know I said I'd drop this, but I was reminded of it again today. I\n> ended up adjusting the patch so that it no longer adds a helper\n> function to stringinfo.c and instead just manually assigns the\n> StringInfo.data field to point to the bytea's buffer. This follows\n> what's done in some existing places such as\n> LogicalParallelApplyLoop(), ReadArrayBinary() and record_recv() to\n> name a few.\n> \n> I ran a fresh set of benchmarks on today's master with and without the\n> patch applied. I used the same benchmark as I did in [1]. The average\n> performance increase from between 0 and 12 workers is about 6.6%.\n> \n> This seems worthwhile to me. Any objections?\n\nInteresting.\n\n+ buf.len = VARSIZE_ANY_EXHDR(sstate);\n+ buf.maxlen = 0;\n+ buf.cursor = 0;\n\nPerhaps it would be worth hiding that in a macro defined in\nstringinfo.h?\n--\nMichael",
"msg_date": "Wed, 4 Oct 2023 12:57:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "Thanks for taking a look at this.\n\nOn Wed, 4 Oct 2023 at 16:57, Michael Paquier <michael@paquier.xyz> wrote:\n> + buf.len = VARSIZE_ANY_EXHDR(sstate);\n> + buf.maxlen = 0;\n> + buf.cursor = 0;\n>\n> Perhaps it would be worth hiding that in a macro defined in\n> stringinfo.h?\n\nThe original patch had a new function in stringinfo.c which allowed a\nStringInfoData to be initialised from an existing string with some\ngiven length. Tom wasn't a fan of that because there wasn't any\nprotection against someone trying to use the given StringInfoData and\nthen calling appendStringInfo to append another string. That can't be\ndone in this case as we can't repalloc the VARDATA_ANY(state) pointer\ndue to it not pointing directly to a palloc'd chunk. Tom's complaint\nseemed to be about having a reusable function which could be abused,\nso I modified the patch to remove the reusable code. I think your\nmacro idea in stringinfo.h would put the patch in the same position as\nit was initially.\n\nIt would be possible to do something like have maxlen == -1 mean that\nthe StringInfoData.data field isn't being managed internally in\nstringinfo.c and then have all the appendStringInfo functions check\nfor that, but I really don't want to add overhead to everything that\nuses appendStringInfo just for this.\n\nDavid\n\n\n",
"msg_date": "Wed, 4 Oct 2023 19:47:11 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Wed, Oct 04, 2023 at 07:47:11PM +1300, David Rowley wrote:\n> The original patch had a new function in stringinfo.c which allowed a\n> StringInfoData to be initialised from an existing string with some\n> given length. Tom wasn't a fan of that because there wasn't any\n> protection against someone trying to use the given StringInfoData and\n> then calling appendStringInfo to append another string. That can't be\n> done in this case as we can't repalloc the VARDATA_ANY(state) pointer\n> due to it not pointing directly to a palloc'd chunk. Tom's complaint\n> seemed to be about having a reusable function which could be abused,\n> so I modified the patch to remove the reusable code. I think your\n> macro idea in stringinfo.h would put the patch in the same position as\n> it was initially.\n\nAhem, well. Based on this argument my own argument does not hold\nmuch. Perhaps I'd still use a macro at the top of array_userfuncs.c\nand numeric.c, to avoid repeating the same pattern respectively two\nand four times, documenting once on top of both macros that this is a\nfake StringInfo because of the reasons documented in these code paths.\n--\nMichael",
"msg_date": "Thu, 5 Oct 2023 14:23:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Thu, 5 Oct 2023 at 18:23, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Oct 04, 2023 at 07:47:11PM +1300, David Rowley wrote:\n> > The original patch had a new function in stringinfo.c which allowed a\n> > StringInfoData to be initialised from an existing string with some\n> > given length. Tom wasn't a fan of that because there wasn't any\n> > protection against someone trying to use the given StringInfoData and\n> > then calling appendStringInfo to append another string. That can't be\n> > done in this case as we can't repalloc the VARDATA_ANY(state) pointer\n> > due to it not pointing directly to a palloc'd chunk. Tom's complaint\n> > seemed to be about having a reusable function which could be abused,\n> > so I modified the patch to remove the reusable code. I think your\n> > macro idea in stringinfo.h would put the patch in the same position as\n> > it was initially.\n>\n> Ahem, well. Based on this argument my own argument does not hold\n> much. Perhaps I'd still use a macro at the top of array_userfuncs.c\n> and numeric.c, to avoid repeating the same pattern respectively two\n> and four times, documenting once on top of both macros that this is a\n> fake StringInfo because of the reasons documented in these code paths.\n\nI looked at the patch again and I just couldn't bring myself to change\nit to that. If it were a macro going into stringinfo.h then I'd agree\nwith having a macro or inline function as it would allow the reader to\nconceptualise what's happening after learning what the function does.\nHaving multiple macros defined in various C files means that much\nharder as there are more macros to learn. 
Since we're only talking 4\nlines of code, I think I'd rather reduce the number of hops the reader\nmust do to find out what's going on and just leave the patch as is.\n\nI considered if it might be better to reduce the 4 lines down to 3 by\nchaining the assignments like:\n\nbuf.maxlen = buf.cursor = 0;\n\nbut I think I might instead change it so that maxlen gets set to -1 to\nfollow what's done in LogicalParallelApplyLoop() and\nLogicalRepApplyLoop(). In the absence of having a function/macro in\nstringinfo.h, it might make grepping for this type of thing easier.\n\nIf anyone else has a good argument for having multiple macros for this\npurpose then I could reconsider.\n\nDavid\n\n\n",
"msg_date": "Thu, 5 Oct 2023 21:24:28 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Thu, 5 Oct 2023 at 21:24, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 5 Oct 2023 at 18:23, Michael Paquier <michael@paquier.xyz> wrote:\n> > Ahem, well. Based on this argument my own argument does not hold\n> > much. Perhaps I'd still use a macro at the top of array_userfuncs.c\n> > and numeric.c, to avoid repeating the same pattern respectively two\n> > and four times, documenting once on top of both macros that this is a\n> > fake StringInfo because of the reasons documented in these code paths.\n>\n> I looked at the patch again and I just couldn't bring myself to change\n> it to that. If it were a macro going into stringinfo.h then I'd agree\n> with having a macro or inline function as it would allow the reader to\n> conceptualise what's happening after learning what the function does.\n\nI've pushed this patch. I didn't go with the macros in the end. I\njust felt it wasn't an improvement and none of the existing code which\ndoes the same thing bothers with a macro. I got the idea you were not\nparticularly for the macro given that you used the word \"Perhaps\".\n\nAnyway, thank you for having a look at this.\n\nDavid\n\n\n",
"msg_date": "Mon, 9 Oct 2023 17:28:47 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Thu, 5 Oct 2023 at 21:24, David Rowley <dgrowleyml@gmail.com> wrote:\n>> I looked at the patch again and I just couldn't bring myself to change\n>> it to that. If it were a macro going into stringinfo.h then I'd agree\n>> with having a macro or inline function as it would allow the reader to\n>> conceptualise what's happening after learning what the function does.\n\n> I've pushed this patch. I didn't go with the macros in the end. I\n> just felt it wasn't an improvement and none of the existing code which\n> does the same thing bothers with a macro. I got the idea you were not\n> particularly for the macro given that you used the word \"Perhaps\".\n\nSorry for not having paid more attention to this thread ... but\nI'm pretty desperately unhappy with the patch as-pushed. I agree\nwith the criticism that this is a very repetitive coding pattern\nthat could have used a macro. But my real problem with this:\n\n+ buf.data = VARDATA_ANY(sstate);\n+ buf.len = VARSIZE_ANY_EXHDR(sstate);\n+ buf.maxlen = 0;\n+ buf.cursor = 0;\n\nis that it totally breaks the StringInfo API without even\nattempting to fix the API specs that it falsifies,\nparticularly this in stringinfo.h:\n\n * maxlen is the allocated size in bytes of 'data', i.e. the maximum\n * string size (including the terminating '\\0' char) that we can\n * currently store in 'data' without having to reallocate\n * more space. We must always have maxlen > len.\n\nI could see inventing a notion of a \"read-only StringInfo\"\nto legitimize what you've done here, but you didn't bother\nto try. I do not like this one bit. This is a fairly\nfundamental API and we shouldn't be so cavalier about\nbreaking it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Oct 2023 00:37:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Mon, 9 Oct 2023 at 17:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Sorry for not having paid more attention to this thread ... but\n> I'm pretty desperately unhappy with the patch as-pushed. I agree\n> with the criticism that this is a very repetitive coding pattern\n> that could have used a macro. But my real problem with this:\n>\n> + buf.data = VARDATA_ANY(sstate);\n> + buf.len = VARSIZE_ANY_EXHDR(sstate);\n> + buf.maxlen = 0;\n> + buf.cursor = 0;\n>\n> is that it totally breaks the StringInfo API without even\n> attempting to fix the API specs that it falsifies,\n> particularly this in stringinfo.h:\n>\n> * maxlen is the allocated size in bytes of 'data', i.e. the maximum\n> * string size (including the terminating '\\0' char) that we can\n> * currently store in 'data' without having to reallocate\n> * more space. We must always have maxlen > len.\n>\n> I could see inventing a notion of a \"read-only StringInfo\"\n> to legitimize what you've done here, but you didn't bother\n> to try. I do not like this one bit. This is a fairly\n> fundamental API and we shouldn't be so cavalier about\n> breaking it.\n\nYou originally called the centralised logic a \"loaded foot-gun\" [1],\nbut now you're complaining about a lack of loaded foot-gun and want a\nmacro? Which part did I misunderstand? Enlighten me, please.\n\nHere are some more thoughts on how we could improve this:\n\n1. Adjust the definition of StringInfoData.maxlen to define that -1\nmeans the StringInfoData's buffer is externally managed.\n2. Adjust enlargeStringInfo() to add a check for maxlen = -1 and have\nit palloc, say, pg_next_pow2(str->len * 2) bytes and memcpy the\nexisting (externally managed string) into the newly palloc'd buffer.\n3. Add a new function along the lines of what I originally proposed to\nallow init of a StringInfoData using an existing allocated string\nwhich sets maxlen = -1.\n4. 
Update all the existing places, including the ones I just committed\n(plus the ones you committed in ba1e066e4) to make use of the function\nadded in #3.\n\nBetter ideas?\n\nDavid\n\n[1] https://postgr.es/m/770055.1676183953@sss.pgh.pa.us\n\n\n",
"msg_date": "Mon, 9 Oct 2023 21:17:00 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Mon, 9 Oct 2023 at 21:17, David Rowley <dgrowleyml@gmail.com> wrote:\n> Here are some more thoughts on how we could improve this:\n>\n> 1. Adjust the definition of StringInfoData.maxlen to define that -1\n> means the StringInfoData's buffer is externally managed.\n> 2. Adjust enlargeStringInfo() to add a check for maxlen = -1 and have\n> it palloc, say, pg_next_pow2(str->len * 2) bytes and memcpy the\n> existing (externally managed string) into the newly palloc'd buffer.\n> 3. Add a new function along the lines of what I originally proposed to\n> allow init of a StringInfoData using an existing allocated string\n> which sets maxlen = -1.\n> 4. Update all the existing places, including the ones I just committed\n> (plus the ones you committed in ba1e066e4) to make use of the function\n> added in #3.\n\nI just spent the past few hours playing around with the attached WIP\npatch to try to clean up the various places where we manually build\nStringInfoDatas around the tree.\n\nWhile working on this, I added an Assert in the new\ninitStringInfoFromStringWithLen function to ensure that data[len] ==\n'\\0' per the \"There is guaranteed to be a terminating '\\0' at\ndata[len]\" comment in stringinfo.h. 
It looks like we have some\nexisting breakers of this rule.\n\nIf you apply the attached patch to 608fd198de~1 and ignore the\nrejected hunks from the deserial functions, you'll see an Assert\nfailure during 023_twophase_stream.pl\n\n023_twophase_stream_subscriber.log indicates:\nTRAP: failed Assert(\"data[len] == '\\0'\"), File:\n\"../../../../src/include/lib/stringinfo.h\", Line: 97, PID: 1073141\npostgres: subscriber: logical replication parallel apply worker for\nsubscription 16396 (ExceptionalCondition+0x70)[0x56160451e9d0]\npostgres: subscriber: logical replication parallel apply worker for\nsubscription 16396 (ParallelApplyWorkerMain+0x53c)[0x5616043618cc]\npostgres: subscriber: logical replication parallel apply worker for\nsubscription 16396 (StartBackgroundWorker+0x20b)[0x56160434452b]\n\nSo it seems like we have some existing issues with\nLogicalParallelApplyLoop(). The code there does not properly NUL\nterminate the StringInfoData.data field. There are some examples in\nexec_bind_message() of how that could be fixed. I've CC'd Amit to let\nhim know about this.\n\nI'll also need to revert 608fd198 as this also highlights that setting\nthe StringInfoData.data to point to a bytea Datum can't be done either\nas those aren't NUL terminated strings.\n\nIf people think it's worthwhile having something like the attached to\ntry to eliminate our need to manually build StringInfoDatas then I can\nspend more time on it once LogicalParallelApplyLoop() is fixed.\n\nDavid",
"msg_date": "Mon, 9 Oct 2023 23:47:31 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Mon, 9 Oct 2023 at 21:17, David Rowley <dgrowleyml@gmail.com> wrote:\n>> Here are some more thoughts on how we could improve this:\n>> \n>> 1. Adjust the definition of StringInfoData.maxlen to define that -1\n>> means the StringInfoData's buffer is externally managed.\n>> 2. Adjust enlargeStringInfo() to add a check for maxlen = -1 and have\n>> it palloc, say, pg_next_pow2(str->len * 2) bytes and memcpy the\n>> existing (externally managed string) into the newly palloc'd buffer.\n>> 3. Add a new function along the lines of what I originally proposed to\n>> allow init of a StringInfoData using an existing allocated string\n>> which sets maxlen = -1.\n>> 4. Update all the existing places, including the ones I just committed\n>> (plus the ones you committed in ba1e066e4) to make use of the function\n>> added in #3.\n\nHm. I'd be inclined to use maxlen == 0 as the indicator of a read-only\nbuffer, just because that would not create a problem if we ever want\nto change it to an unsigned type. Other than that, I agree with the\nidea of using a special maxlen value to indicate that the buffer is\nread-only and not owned by the StringInfo. We need to nail down the\nexact semantics though.\n\n> While working on this, I added an Assert in the new\n> initStringInfoFromStringWithLen function to ensure that data[len] ==\n> '\\0' per the \"There is guaranteed to be a terminating '\\0' at\n> data[len]\" comment in stringinfo.h. It looks like we have some\n> existing breakers of this rule.\n\nUgh. The point that 608fd198d also broke the terminating-nul\nconvention was something that occurred to me after sending\nmy previous message. 
That's something we can't readily accommodate\nwithin the concept of a read-only buffer, but I think we can't\ngive it up without risking a lot of obscure bugs.\n\n> I'll also need to revert 608fd198 as this also highlights that setting\n> the StringInfoData.data to point to a bytea Datum can't be done either\n> as those aren't NUL terminated strings.\n\nYeah. I would revert that as a separate commit and then think about\nhow we want to proceed, but I generally agree that there could be\nvalue in the idea of a setup function that accepts a caller-supplied\nbuffer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Oct 2023 13:38:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Tue, 10 Oct 2023 at 06:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hm. I'd be inclined to use maxlen == 0 as the indicator of a read-only\n> buffer, just because that would not create a problem if we ever want\n> to change it to an unsigned type. Other than that, I agree with the\n> idea of using a special maxlen value to indicate that the buffer is\n> read-only and not owned by the StringInfo. We need to nail down the\n> exact semantics though.\n\nI've attached a slightly more worked on patch that makes maxlen == 0\nmean read-only. Unsure if a macro is worthwhile there or not.\n\nThe patch still fails during 023_twophase_stream.pl for the reasons\nmentioned upthread. Getting rid of the Assert in\ninitStringInfoFromStringWithLen() allows it to pass.\n\nOne thought I had about this is that the memory context behaviour\nmight catch someone out at some point. Right now if you do\ninitStringInfo() the memory context of the \"data\" field will be\nCurrentMemoryContext, but if someone does\ninitStringInfoFromStringWithLen() and then changes to some other\nmemory context before doing an appendStringInfo on that string, then\nwe'll allocate \"data\" in whatever that memory context is. Maybe that's\nok if we document it. Fixing it would mean adding a MemoryContext\nfield to StringInfoData which would be set to CurrentMemoryContext\nduring initStringInfo() and initStringInfoFromStringWithLen().\n\nI'm not fully happy with the extra code added in enlargeStringInfo().\nIt's a little repetitive. Fixing it up would mean having to have a\nboolean variable to mark if the string was readonly so at the end we'd\nknow to repalloc or palloc/memcpy. For now, I just marked that code\nas unlikely() since there's no place in the code base that uses it.\n\nDavid",
"msg_date": "Tue, 10 Oct 2023 15:59:06 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Mon, 9 Oct 2023 at 16:20, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Mon, 9 Oct 2023 at 21:17, David Rowley <dgrowleyml@gmail.com> wrote:\n> > Here are some more thoughts on how we could improve this:\n> >\n> > 1. Adjust the definition of StringInfoData.maxlen to define that -1\n> > means the StringInfoData's buffer is externally managed.\n> > 2. Adjust enlargeStringInfo() to add a check for maxlen = -1 and have\n> > it palloc, say, pg_next_pow2(str->len * 2) bytes and memcpy the\n> > existing (externally managed string) into the newly palloc'd buffer.\n> > 3. Add a new function along the lines of what I originally proposed to\n> > allow init of a StringInfoData using an existing allocated string\n> > which sets maxlen = -1.\n> > 4. Update all the existing places, including the ones I just committed\n> > (plus the ones you committed in ba1e066e4) to make use of the function\n> > added in #3.\n>\n> I just spent the past few hours playing around with the attached WIP\n> patch to try to clean up the various places where we manually build\n> StringInfoDatas around the tree.\n>\n> While working on this, I added an Assert in the new\n> initStringInfoFromStringWithLen function to ensure that data[len] ==\n> '\\0' per the \"There is guaranteed to be a terminating '\\0' at\n> data[len]\" comment in stringinfo.h. 
It looks like we have some\n> existing breakers of this rule.\n>\n> If you apply the attached patch to 608fd198de~1 and ignore the\n> rejected hunks from the deserial functions, you'll see an Assert\n> failure during 023_twophase_stream.pl\n>\n> 023_twophase_stream_subscriber.log indicates:\n> TRAP: failed Assert(\"data[len] == '\\0'\"), File:\n> \"../../../../src/include/lib/stringinfo.h\", Line: 97, PID: 1073141\n> postgres: subscriber: logical replication parallel apply worker for\n> subscription 16396 (ExceptionalCondition+0x70)[0x56160451e9d0]\n> postgres: subscriber: logical replication parallel apply worker for\n> subscription 16396 (ParallelApplyWorkerMain+0x53c)[0x5616043618cc]\n> postgres: subscriber: logical replication parallel apply worker for\n> subscription 16396 (StartBackgroundWorker+0x20b)[0x56160434452b]\n>\n> So it seems like we have some existing issues with\n> LogicalParallelApplyLoop(). The code there does not properly NUL\n> terminate the StringInfoData.data field. There are some examples in\n> exec_bind_message() of how that could be fixed. I've CC'd Amit to let\n> him know about this.\n\nThanks for reporting this issue, I was able to reproduce this issue\nwith the steps provided. I will analyze further and start a new thread\nto provide the details of the same.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 10 Oct 2023 16:04:32 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I've attached a slightly more worked on patch that makes maxlen == 0\n> mean read-only. Unsure if a macro is worthwhile there or not.\n\nA few thoughts:\n\n* initStringInfoFromStringWithLen() is kind of a mouthful.\nHow about \"initStringInfoWithBuf\", or something like that?\n\n* logicalrep_read_tuple is doing something different from these\nother callers: it's creating a *fully valid* StringInfo that\ncould be enlarged via repalloc. (Whether anything downstream\ndepends on that, I dunno.) Is it worth having two new init\nfunctions, one that has that spec and initializes maxlen\nappropriately, and the other that sets maxlen to 0?\n\n* I think that this bit in the new enlargeStringInfo code path\nis wrong:\n\n+\t\tnewlen = pg_nextpower2_32(str->len) * 2;\n+\t\twhile (needed > newlen)\n+\t\t\tnewlen = 2 * newlen;\n\nIn the admittedly-unlikely case that str->len is more than half a GB\nto start with, pg_nextpower2_32() will round up to 1GB and then the *2\noverflows. I think you should make this just\n\n+\t\tnewlen = pg_nextpower2_32(str->len);\n+\t\twhile (needed > newlen)\n+\t\t\tnewlen = 2 * newlen;\n\nIt's fairly likely that this path will never be taken at all,\nso trying to shave a cycle or two seems unnecessary.\n\n> One thought I had about this is that the memory context behaviour\n> might catch someone out at some point. Right now if you do\n> initStringInfo() the memory context of the \"data\" field will be\n> CurrentMemoryContext, but if someone does\n> initStringInfoFromStringWithLen() and then changes to some other\n> memory context before doing an appendStringInfo on that string, then\n> we'll allocate \"data\" in whatever that memory context is. Maybe that's\n> ok if we document it. 
Fixing it would mean adding a MemoryContext\n> field to StringInfoData which would be set to CurrentMemoryContext\n> during initStringInfo() and initStringInfoFromStringWithLen().\n\nI think documenting it is sufficient. I don't really foresee use-cases\nwhere the string would get enlarged, anyway.\n\nOn the whole, I wonder about the value of allowing such a StringInfo to be\nenlarged at all. If we are defining the case as being a \"read only\"\nbuffer, under what circumstances would it be useful to enlarge it?\nI'm tempted to suggest that we should just Assert(maxlen > 0) in\nenlargeStringInfo, and anywhere else in stringinfo.c that modifies\nthe buffer. That also removes the concern about which context the\nenlargement would happen in.\n\nI'm not really happy with what you did documentation-wise in\nstringinfo.h. I suggest more like\n\n * StringInfoData holds information about an extensible string.\n * data is the current buffer for the string (allocated with palloc).\n * len is the current string length. There is guaranteed to be\n * a terminating '\\0' at data[len], although this is not very\n * useful when the string holds binary data rather than text.\n * maxlen is the allocated size in bytes of 'data', i.e. the maximum\n * string size (including the terminating '\\0' char) that we can\n * currently store in 'data' without having to reallocate\n-* more space. We must always have maxlen > len, except\n+* in the read-only case described below.\n * cursor is initialized to zero by makeStringInfo or initStringInfo,\n * but is not otherwise touched by the stringinfo.c routines.\n * Some routines use it to scan through a StringInfo.\n+*\n+* As a special case, a StringInfoData can be initialized with a read-only\n+* string buffer. In this case \"data\" does not necessarily point at a\n+* palloc'd chunk, and management of the buffer storage is the caller's\n+* responsibility. 
maxlen is set to zero to indicate that this is the case.\n\nAlso, the following comment block asserting that there are \"two ways\"\nto initialize a StringInfo needs work, and I guess so does the above-\ncited comment about the cursor field.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Oct 2023 15:52:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Wed, 11 Oct 2023 at 08:52, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I've attached a slightly more worked on patch that makes maxlen == 0\n> > mean read-only. Unsure if a macro is worthwhile there or not.\n>\n> A few thoughts:\n\nThank you for the review.\n\nI spent more time on this and did end up with 2 new init functions as\nyou mentioned. One for strictly read-only (initReadOnlyStringInfo),\nwhich cannot be appended to, and as you mentioned, another\n(initStringInfoFromString) which can accept a palloc'd buffer which\nbecomes managed by the stringinfo code. I know these names aren't\nexactly as you mentioned. I'm open to adjusting still.\n\nThis means I got rid of the read-only conversion code in\nenlargeStringInfo(). I didn't do anything to try to handle buffer\nenlargement more efficiently in enlargeStringInfo() for the case where\ninitStringInfoFromString sets maxlen to some non-power-of-2. The\ndoubling code seems like it'll work ok without power-of-2 values,\nit'll just end up calling repalloc() with non-power-of-2 values.\n\nI did also wonder if resetStringInfo() would have any business\ntouching the existing buffer in a read-only StringInfo and came to the\nconclusion that it wouldn't be very read-only if we allowed\nresetStringInfo() to do its thing on it. I added an Assert to fail if\nresetStringInfo() receives a read-only StringInfo.\n\nAlso, since it's still being discussed, I left out the adjustment to\nLogicalParallelApplyLoop(). That also allows the tests to pass\nwithout the failing Assert that was checking for the NUL terminator.\n\nDavid",
"msg_date": "Fri, 13 Oct 2023 12:22:54 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I spent more time on this and did end up with 2 new init functions as\n> you mentioned. One for strictly read-only (initReadOnlyStringInfo),\n> which cannot be appended to, and as you mentioned, another\n> (initStringInfoFromString) which can accept a palloc'd buffer which\n> becomes managed by the stringinfo code. I know these names aren't\n> exactly as you mentioned. I'm open to adjusting still.\n\nThis v3 looks pretty decent, although I noted one significant error\nand a few minor issues:\n\n* in initStringInfoFromString, str->maxlen must be set to len+1 not len\n\n* comment in exec_bind_message doesn't look like pgindent will like it\n\n* same in record_recv, plus it has a misspelling \"Initalize\"\n\n* in stringinfo.c, inclusion of pg_bitutils.h seems no longer needed\n\nI guess the next question is whether we want to stop here or\ntry to relax the requirement about NUL-termination. I'd be inclined\nto call that a separate issue deserving a separate commit, so maybe\nwe should go ahead and commit this much anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 15 Oct 2023 12:56:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Mon, 16 Oct 2023 at 05:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> * in initStringInfoFromString, str->maxlen must be set to len+1 not len\n>\n> * comment in exec_bind_message doesn't look like pgindent will like it\n>\n> * same in record_recv, plus it has a misspelling \"Initalize\"\n>\n> * in stringinfo.c, inclusion of pg_bitutils.h seems no longer needed\n\nThank you for looking again. I've addressed all of these in the attached.\n\n> I guess the next question is whether we want to stop here or\n> try to relax the requirement about NUL-termination. I'd be inclined\n> to call that a separate issue deserving a separate commit, so maybe\n> we should go ahead and commit this much anyway.\n\nI am keen to see this relaxed. I agree that a separate effort is best.\n\nDavid",
"msg_date": "Tue, 17 Oct 2023 20:39:52 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Tue, 17 Oct 2023 at 20:39, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Mon, 16 Oct 2023 at 05:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I guess the next question is whether we want to stop here or\n> > try to relax the requirement about NUL-termination. I'd be inclined\n> > to call that a separate issue deserving a separate commit, so maybe\n> > we should go ahead and commit this much anyway.\n>\n> I am keen to see this relaxed. I agree that a separate effort is best.\n\nI looked at the latest posted patch again today with thoughts about\npushing it but there's something I'm a bit unhappy with that makes me\nthink we should maybe do the NUL-termination relaxation in the same\ncommit.\n\nThe problem is in LogicalRepApplyLoop() the current patch adjusts the\nmanual building of the StringInfoData to make use of\ninitReadOnlyStringInfo() instead. The problem I have with that is that\nthe string that's given to initReadOnlyStringInfo() comes from\nwalrcv_receive() and on looking at the API spec for walrcv_receive_fn\nI see:\n\n/*\n * walrcv_receive_fn\n *\n * Receive a message available from the WAL stream. 'buffer' is a pointer\n * to a buffer holding the message received. Returns the length of the data,\n * 0 if no data is available yet ('wait_fd' is a socket descriptor which can\n * be waited on before a retry), and -1 if the cluster ended the COPY.\n */\n\ni.e, no mention that the buffer will be NUL terminated upon return.\n\nLooking at pqGetCopyData3(), is see the buffer does get NUL\nterminated, but without the API spec mentioning this I'm not feeling\ngood about going ahead with wrapping that up in\ninitReadOnlyStringInfo() which Asserts the buffer will be NUL\nterminated.\n\nI've attached a patch which builds on the previous patch and relaxes\nthe rule that the StringInfo must be NUL-terminated. The rule is\nonly relaxed for StringInfos that are initialized with\ninitReadOnlyStringInfo. 
On working on this I went over the locations\nwhere we've added code to add a '\\0' char to the buffer. If you look\nat, for example, record_recv() and array_agg_deserialize() in master,\nwe modify the StringInfo's data to set a \\0 at the end of the string.\nI've removed that code as I *believe* this isn't required for the\ntype's receive function.\n\nThere's also an existing confusing comment in logicalrep_read_tuple()\nwhich seems to think we're just setting the NUL terminator to conform\nto StringInfo's practises. This is misleading as the NUL is required\nfor LOGICALREP_COLUMN_TEXT mode as we use the type's input function\ninstead of the receive function. You don't have to look very hard to\nfind an input function that needs a NUL terminator.\n\nI'm a bit less confident that the type's receive function will never\nneed to be NUL terminated. cstring_recv() came to mind as one I should\nlook at, but on looking I see it's not required as it just reads the\nremaining bytes from the input StringInfo. Is it safe to assume this?\nor could there be some UDF receive function which requires this?\n\nDavid",
"msg_date": "Wed, 25 Oct 2023 14:03:17 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I've attached a patch which builds on the previous patch and relaxes\n> the rule that the StringInfo must be NUL-terminated. The rule is\n> only relaxed for StringInfos that are initialized with\n> initReadOnlyStringInfo.\n\nYeah, that's probably a reasonable way to frame it.\n\n> There's also an existing confusing comment in logicalrep_read_tuple()\n> which seems to think we're just setting the NUL terminator to conform\n> to StringInfo's practises. This is misleading as the NUL is required\n> for LOGICALREP_COLUMN_TEXT mode as we use the type's input function\n> instead of the receive function. You don't have to look very hard to\n> find an input function that needs a NUL terminator.\n\nRight, input functions are likely to expect this.\n\n> I'm a bit less confident that the type's receive function will never\n> need to be NUL terminated. cstring_recv() came to mind as one I should\n> look at, but on looking I see it's not required as it just reads the\n> remaining bytes from the input StringInfo. Is it safe to assume this?\n\nI think that we can make that assumption starting with v17.\nBack-patching it would be hazardous perhaps; but if there's some\nfunction out there that depends on NUL termination, testing should\nexpose it before too long. Wouldn't hurt to mention this explicitly\nas a possible incompatibility in the commit message.\n\nLooking over the v5 patch, I have some nits:\n\n* In logicalrep_read_tuple,\ns/input function require that/input functions require that/\n(or fix the grammatical disagreement some other way)\n\n* In exec_bind_message, you removed the comment pointing out that\nwe are scribbling directly on the message buffer, even though\nwe still are. 
This patch does nothing to make that any safer,\nso I object to removing the comment.\n\n* In stringinfo.h, I'd suggest adding text more or less like this\nwithin or at the end of the \"As a special case, ...\" para in\nthe first large comment block:\n\n * Also, it is caller's option whether a read-only string buffer has\n * a terminating '\\0' or not. This depends on the intended usage.\n\nThat's partially redundant with some other comments, but this para\nis defining the API for read-only buffers, so I think it would\nbe good to include it here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 Oct 2023 15:43:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Thu, 26 Oct 2023 at 08:43, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think that we can make that assumption starting with v17.\n> Back-patching it would be hazardous perhaps; but if there's some\n> function out there that depends on NUL termination, testing should\n> expose it before too long. Wouldn't hurt to mention this explicitly\n> as a possible incompatibility in the commit message.\n>\n> Looking over the v5 patch, I have some nits:\n\nThanks for looking at this again. I fixed up each of those and pushed\nthe result, mentioning the incompatibility in the commit message.\n\nNow that that's done, I've attached a patch which makes use of the new\ninitReadOnlyStringInfo initializer function for the original case\nmentioned when I opened this thread. I don't think there are any\nremaining objections to this, but I'll let it sit for a bit to see.\n\nDavid",
"msg_date": "Thu, 26 Oct 2023 17:00:29 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Thu, 26 Oct 2023 at 17:00, David Rowley <dgrowleyml@gmail.com> wrote:\n> Thanks for looking at this again. I fixed up each of those and pushed\n> the result, mentioning the incompatibility in the commit message.\n>\n> Now that that's done, I've attached a patch which makes use of the new\n> initReadOnlyStringInfo initializer function for the original case\n> mentioned when I opened this thread. I don't think there are any\n> remaining objections to this, but I'll let it sit for a bit to see.\n\nI've just pushed the deserial function optimisation patch.\n\nI was just looking at a few other places where we might want to make\nuse of initReadOnlyStringInfo.\n\n* parallel.c in HandleParallelMessages():\n\nDrilling into HandleParallelMessage(), I see the PqMsg_BackendKeyData\ncase just reads a fixed number of bytes. In some of the other\n\"switch\" cases, I see calls pq_getmsgrawstring() either directly or\nindirectly. I see the counterpart to pq_getmsgrawstring() is\npq_sendstring() which always appends the NUL char to the StringInfo,\nso I don't think not NUL terminating the received bytes is a problem\nas cstrings seem to be sent with the NUL terminator.\n\nThis case just seems to handle ERROR/NOTICE messages coming from\nparallel workers. Not tuples themselves. It may not be that\ninteresting a case to speed up.\n\n* applyparallelworker.c in HandleParallelApplyMessages():\n\nDrilling into HandleParallelApplyMessage(), I don't see anything there\nthat needs the input StringInfo to be NUL terminated.\n\n* worker.c in apply_spooled_messages():\n\nDrilling into apply_dispatch() and going through each of the cases, I\nsee logicalrep_read_tuple() pallocs a new buffer and ensures it's\nalways NUL terminated which will be required in LOGICALREP_COLUMN_TEXT\nmode. 
(There seems to be further optimisation opportunities there\nwhere we could not do the palloc when in LOGICALREP_COLUMN_BINARY mode\nand just point value's buffer directly to the correct portion of the\ninput StringInfo's buffer).\n\n* walreceiver.c in XLogWalRcvProcessMsg():\n\nNothing there seems to require the incoming_message StringInfo to have\na NUL terminator. I imagine this one is the most worthwhile to do out\nof the 4. I've not tested to see if there are any performance\nimprovements.\n\nDoes anyone see any reason why we can't do the attached?\n\nDavid",
"msg_date": "Fri, 27 Oct 2023 10:53:29 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 3:23 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 26 Oct 2023 at 17:00, David Rowley <dgrowleyml@gmail.com> wrote:\n> > Thanks for looking at this again. I fixed up each of those and pushed\n> > the result, mentioning the incompatibility in the commit message.\n> >\n> > Now that that's done, I've attached a patch which makes use of the new\n> > initReadOnlyStringInfo initializer function for the original case\n> > mentioned when I opened this thread. I don't think there are any\n> > remaining objections to this, but I'll let it sit for a bit to see.\n>\n> I've just pushed the deserial function optimisation patch.\n>\n> I was just looking at a few other places where we might want to make\n> use of initReadOnlyStringInfo.\n>\n> * parallel.c in HandleParallelMessages():\n>\n> Drilling into HandleParallelMessage(), I see the PqMsg_BackendKeyData\n> case just reads a fixed number of bytes. In some of the other\n> \"switch\" cases, I see calls pq_getmsgrawstring() either directly or\n> indirectly. I see the counterpart to pq_getmsgrawstring() is\n> pq_sendstring() which always appends the NUL char to the StringInfo,\n> so I don't think not NUL terminating the received bytes is a problem\n> as cstrings seem to be sent with the NUL terminator.\n>\n> This case just seems to handle ERROR/NOTICE messages coming from\n> parallel workers. Not tuples themselves. It may not be that\n> interesting a case to speed up.\n>\n> * applyparallelworker.c in HandleParallelApplyMessages():\n>\n> Drilling into HandleParallelApplyMessage(), I don't see anything there\n> that needs the input StringInfo to be NUL terminated.\n>\n\nBoth the above calls are used to handle ERROR/NOTICE messages from\nparallel workers as you have also noticed. The comment atop\ninitReadOnlyStringInfo() clearly states that it is used in the\nperformance-critical path. So, is it worth changing these places? 
In\nthe future, this may pose the risk of this API being used\ninconsistently.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 30 Oct 2023 16:18:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Mon, 30 Oct 2023 at 23:48, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 27, 2023 at 3:23 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > * parallel.c in HandleParallelMessages():\n> > * applyparallelworker.c in HandleParallelApplyMessages():\n>\n> Both the above calls are used to handle ERROR/NOTICE messages from\n> parallel workers as you have also noticed. The comment atop\n> initReadOnlyStringInfo() clearly states that it is used in the\n> performance-critical path. So, is it worth changing these places? In\n> the future, this may pose the risk of this API being used\n> inconsistently.\n\nI'm ok to leave those ones out. But just a note on the performance\nside, if we go around needlessly doing palloc/memcpy then we'll be\nflushing possibly useful cachelines out and cause slowdowns elsewhere.\nThat's a pretty hard thing to quantify, but something to keep in mind.\n\nDavid\n\n\n",
"msg_date": "Tue, 31 Oct 2023 09:55:21 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 2:25 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Mon, 30 Oct 2023 at 23:48, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Oct 27, 2023 at 3:23 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > * parallel.c in HandleParallelMessages():\n> > > * applyparallelworker.c in HandleParallelApplyMessages():\n> >\n> > Both the above calls are used to handle ERROR/NOTICE messages from\n> > parallel workers as you have also noticed. The comment atop\n> > initReadOnlyStringInfo() clearly states that it is used in the\n> > performance-critical path. So, is it worth changing these places? In\n> > the future, this may pose the risk of this API being used\n> > inconsistently.\n>\n> I'm ok to leave those ones out.\n>\n\nThe other two look good to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Nov 2023 15:11:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Thu, 2 Nov 2023 at 22:42, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> The other two look good to me.\n\nThanks for looking.\n\nI spent some time trying to see if the performance changes much with\neither of these cases. For the XLogWalRcvProcessMsg() I was unable to\nmeasure any difference even when replaying inserts into a table with a\nsingle int4 column and no indexes. I think that change is worthwhile\nregardless as it allows us to get rid of a global variable. I was\ntempted to shorten the name of that variable a bit since it's now\nlocal, but didn't as it causes a bit more churn.\n\nFor the apply_spooled_messages() change, I tried logical decoding but\nquickly saw apply_spooled_messages() isn't the normal case. I didn't\nquite find a test case that caused the changes to be serialized to a\nfile, but I do see that the number of bytes can be large so thought\nthat it's worthwhile saving the memcpy for that case.\n\nI pushed those two changes.\n\nDavid\n\n\n",
"msg_date": "Tue, 7 Nov 2023 11:26:05 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
},
{
"msg_contents": "On Tue, Nov 7, 2023 at 3:56 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 2 Nov 2023 at 22:42, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > The other two look good to me.\n>\n> Thanks for looking.\n>\n> I spent some time trying to see if the performance changes much with\n> either of these cases. For the XLogWalRcvProcessMsg() I was unable to\n> measure any difference even when replaying inserts into a table with a\n> single int4 column and no indexes. I think that change is worthwhile\n> regardless as it allows us to get rid of a global variable. I was\n> tempted to shorten the name of that variable a bit since it's now\n> local, but didn't as it causes a bit more churn.\n>\n> For the apply_spooled_messages() change, I tried logical decoding but\n> quickly saw apply_spooled_messages() isn't the normal case. I didn't\n> quite find a test case that caused the changes to be serialized to a\n> file, but I do see that the number of bytes can be large so thought\n> that it's worthwhile saving the memcpy for that case.\n>\n\nYeah, and another reason is that the usage of StringInfo becomes\nconsistent with LogicalRepApplyLoop(). One can always configure the\nlower value of logical_decoding_work_mem or use\ndebug_logical_replication_streaming for a smaller number of changes to\nfollow that code path. But I am not sure how much practically it will\nhelp because we are anyway reading file to apply the changes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 7 Nov 2023 07:45:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making aggregate deserialization (and WAL receive) functions\n slightly faster"
}
] |
[
{
"msg_contents": "Hi all,\n\nAs mentioned two times on this thread, there is not much coverage for\nthe query jumbling code, even if it is in core:\nhttps://www.postgresql.org/message-id/Y5BHOUhX3zTH/ig6@paquier.xyz\n\nThs issue is that we have the options to enable it, but only\npg_stat_statements is able to enable and stress it. This causes\ncoverage to be missed for all query patterns that are not covered\ndirectly by pg_stat_statements, like XML expressions, various DML\npatterns, etc. More aggressive testing would also ensure that no\nnodes are marked as no_query_jumble while they should be included in a\ncomputation.\n\nAttached is a patch to improve that. The main regression database is\nable to cover everything, basically, so I'd like to propose the\naddition of some extra configuration in 027_stream_regress.pl to\nenable pg_stat_statements. This could be added in the pg_upgrade\ntests, but that felt a bit less adapted here. Or can people think\nabout cases where checking pg_stat_statements makes more sense after\nan upgrade or on a standby? One thing that makes sense for a standby\nis to check that the contents of pg_stat_statements are empty?\n\nWith this addition, the query jumbling gets covered at 95%~, while\nhttps://coverage.postgresql.org/src/backend/nodes/index.html reports\ncurrently 35%.\n\nThoughts or comments?\n--\nMichael",
"msg_date": "Mon, 13 Feb 2023 14:00:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Force testing of query jumbling code in TAP tests"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-13 14:00:36 +0900, Michael Paquier wrote:\n> With this addition, the query jumbling gets covered at 95%~, while\n> https://coverage.postgresql.org/src/backend/nodes/index.html reports\n> currently 35%.\n> \n> Thoughts or comments?\n\nShouldn't there at least be some basic verification of pg_stat_statements\noutput being sane after running the test? Even if that's perhaps just actually\nprinting the statements.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 09:45:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Force testing of query jumbling code in TAP tests"
},
{
"msg_contents": "On Mon, Feb 13, 2023 at 09:45:12AM -0800, Andres Freund wrote:\n> Shouldn't there at least be some basic verification of pg_stat_statements\n> output being sane after running the test? Even if that's perhaps just actually\n> printing the statements.\n\nThere is a total of 20k entries in pg_stat_statements if the max is\nhigh enough to store everything. Only dumping the first 100\ncharacters of each query generates at least 1MB worth of logs, which\nwould bloat a lot of the buildfarm in each run. So I would not do\nthat. One thing may be perhaps to show a count of the queries in five\ncategories: select, insert, delete, update and the rest?\n--\nMichael",
"msg_date": "Tue, 14 Feb 2023 16:04:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Force testing of query jumbling code in TAP tests"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-14 16:04:16 +0900, Michael Paquier wrote:\n> On Mon, Feb 13, 2023 at 09:45:12AM -0800, Andres Freund wrote:\n> > Shouldn't there at least be some basic verification of pg_stat_statements\n> > output being sane after running the test? Even if that's perhaps just actually\n> > printing the statements.\n> \n> There is a total of 20k entries in pg_stat_statements if the max is\n> high enough to store everything. Only dumping the first 100\n> characters of each query generates at least 1MB worth of logs, which\n> would bloat a lot of the buildfarm in each run. So I would not do\n> that. One thing may be perhaps to show a count of the queries in five\n> categories: select, insert, delete, update and the rest?\n\nI didn't mean printing in the sense of outputting the statements to the tap\nlog. Maybe creating a temp table or such for all the queries. And yes, then\ndoing some top-level analysis on it like you describe sounds like a good idea.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Feb 2023 10:11:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Force testing of query jumbling code in TAP tests"
},
{
"msg_contents": "On Tue, Feb 14, 2023 at 10:11:21AM -0800, Andres Freund wrote:\n> I didn't mean printing in the sense of outputting the statements to the tap\n> log. Maybe creating a temp table or such for all the queries. And yes, then\n> doing some top-level analysis on it like you describe sounds like a good idea.\n\nOne idea would be something like that, that makes sure that reports\nare generated for the most common query patterns:\nWITH select_stats AS\n (SELECT upper(substr(query, 1, 6)) AS select_query\n FROM pg_stat_statements\n WHERE upper(substr(query, 1, 6)) IN ('SELECT', 'UPDATE',\n 'INSERT', 'DELETE',\n 'CREATE'))\n SELECT select_query, count(select_query) > 1 AS some_rows\n FROM select_stats\n GROUP BY select_query ORDER BY select_query;\n\nOther ideas are welcome. At least this would be a start.\n--\nMichael",
"msg_date": "Thu, 16 Feb 2023 14:08:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Force testing of query jumbling code in TAP tests"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 02:08:42PM +0900, Michael Paquier wrote:\n> Other ideas are welcome. At least this would be a start.\n\nThe main idea of the patch is here:\n\n> +# Check some data from pg_stat_statements.\n> +$node_primary->safe_psql('postgres', 'CREATE EXTENSION pg_stat_statements');\n> +# This gathers data based on the first characters for some common query types,\n> +# providing coverage for SELECT, DMLs, and some DDLs.\n> +my $result = $node_primary->safe_psql(\n> +\t'postgres',\n> +\tqq{WITH select_stats AS\n> + (SELECT upper(substr(query, 1, 6)) AS select_query\n> + FROM pg_stat_statements\n> + WHERE upper(substr(query, 1, 6)) IN ('SELECT', 'UPDATE',\n> + 'INSERT', 'DELETE',\n> + 'CREATE'))\n> + SELECT select_query, count(select_query) > 1 AS some_rows\n> + FROM select_stats\n> + GROUP BY select_query ORDER BY select_query;});\n> +is( $result, qq(CREATE|t\n> +DELETE|t\n> +INSERT|t\n> +SELECT|t\n> +UPDATE|t), 'check contents of pg_stat_statements on regression database');\n\nAre there any objections to do what's proposed in the patch and\nimprove the testing coverage of query jumbling by default?\n--\nMichael",
"msg_date": "Tue, 28 Feb 2023 14:06:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Force testing of query jumbling code in TAP tests"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 02:06:05PM +0900, Michael Paquier wrote:\n> Are there any objections to do what's proposed in the patch and\n> improve the testing coverage of query jumbling by default?\n\nWell, done this one as of d28a449. More validation tests could always\nbe added later if there are better ideas. Coverage of this code has\ngone up to 94.4% at the end:\nhttps://coverage.postgresql.org/src/backend/nodes/queryjumblefuncs.funcs.c.gcov.html\n--\nMichael",
"msg_date": "Fri, 3 Mar 2023 15:28:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Force testing of query jumbling code in TAP tests"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nPlease find attached a patch proposal to $SUBJECT: it would allow to simplify\neven more the work done to generate pg_stat_get_xact*() functions with Macros.\n\nThis is a follow up for [1] where it has been suggested\nto get rid of PgStat_BackendFunctionEntry in a separate patch.\n\nLooking forward to your feedback,\n\nRegards\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n[1]: https://www.postgresql.org/message-id/20230210214619.bdpbd5wvxcpx27rw%40awork3.anarazel.de",
"msg_date": "Mon, 13 Feb 2023 10:06:27 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Get rid of PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Mon, Feb 13, 2023 at 10:06:27AM +0100, Drouvot, Bertrand wrote:\n> Please find attached a patch proposal to $SUBJECT: it would allow to simplify\n> even more the work done to generate pg_stat_get_xact*() functions with Macros.\n> \n> This is a follow up for [1] where it has been suggested\n> to get rid of PgStat_BackendFunctionEntry in a separate patch.\n\nLooks reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 8 Mar 2023 13:38:38 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Get rid of PgStat_BackendFunctionEntry"
},
{
"msg_contents": "On Wed, Mar 08, 2023 at 01:38:38PM -0800, Nathan Bossart wrote:\n> Looks reasonable to me.\n\nI have been catching up with this thread and the other thread, and\nindeed it looks like this is going to help in refactoring\npgstatfuncs.c to have more macros for all these mostly-duplicated\nfunctions. So, I have applied this bit.\n--\nMichael",
"msg_date": "Thu, 16 Mar 2023 14:25:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Get rid of PgStat_BackendFunctionEntry"
},
{
"msg_contents": "Hi,\n\nOn 3/16/23 6:25 AM, Michael Paquier wrote:\n> On Wed, Mar 08, 2023 at 01:38:38PM -0800, Nathan Bossart wrote:\n>> Looks reasonable to me.\n> \n> I have been catching up with this thread and the other thread, and\n> indeed it looks like this is going to help in refactoring\n> pgstatfuncs.c to have more macros for all these mostly-duplicated\n> functions. So, I have applied this bit.\n\nThanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Mar 2023 10:25:59 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Get rid of PgStat_BackendFunctionEntry"
}
] |
[
{
"msg_contents": "For Citus we'd like to hook into the way that GENERATED AS IDENTITY\ngenerates the next value. The way we had in mind was to replace\nT_NextValueExpr with a function call node. But doing that same for\nCOPY seems only possible by manually changing the defexprs array of\nthe CopyFromState. Sadly CopyFromStateData is in an internal header so\nthat seems dangerous to do, since the struct definition might change\nacross minor versions.\nHowever, it seems that the only time this was actually done in the\nlast 5 years was in 8dc49a8934de023c08890035d96916994bd9b297\n\nWhat do you think about making CopyFromStateData its definition public?\n\n\n",
"msg_date": "Mon, 13 Feb 2023 12:27:09 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": true,
"msg_subject": "Making CopyFromStateData not internal anymore"
}
] |
[
{
"msg_contents": "Hi hackers,\n After a61b1f74823c commit, below query reports error:\n\n create table perm_test1(a int);\n create table perm_test2(b int);\n select subq.c0\nfrom (select (select a from perm_test1 order by a limit 1) as c0, b as c1\nfrom perm_test2 where false order by c0, c1) as subq where false;\nERROR: permission info at index 1 (with relid=16457) does not match\nprovided RTE (with relid=16460)\n\nBelow codes can fix this:\n\n--- a/src/backend/optimizer/plan/setrefs.c\n+++ b/src/backend/optimizer/plan/setrefs.c\n@@ -512,11 +512,16 @@ flatten_rtes_walker(Node *node,\nflatten_rtes_walker_context *cxt)\n * Recurse into subselects. Must update cxt->query to this\nquery so\n * that the rtable and rteperminfos correspond with each\nother.\n */\n+ Query *current_query = cxt->query;\n+ bool result;\n+\n cxt->query = (Query *) node;\n- return query_tree_walker((Query *) node,\n+ result = query_tree_walker((Query *) node,\n\n flatten_rtes_walker,\n (void *)\ncxt,\n\n QTW_EXAMINE_RTES_BEFORE);\n+ cxt->query = current_query;\n+ return result;\n }\n\n\n regards, tender wang",
"msg_date": "Mon, 13 Feb 2023 22:32:32 +0800",
"msg_from": "tender wang <tndrwang@gmail.com>",
"msg_from_op": true,
"msg_subject": "ERROR: permission info at index 1 ...."
},
{
"msg_contents": "tender wang <tndrwang@gmail.com> writes:\n> After a61b1f74823c commit, below query reports error:\n> create table perm_test1(a int);\n> create table perm_test2(b int);\n> select subq.c0\n> from (select (select a from perm_test1 order by a limit 1) as c0, b as c1\n> from perm_test2 where false order by c0, c1) as subq where false;\n> ERROR: permission info at index 1 (with relid=16457) does not match\n> provided RTE (with relid=16460)\n\nYeah, this was also reported by Justin Pryzby [1].\n\n> Below codes can fix this:\n\nRight you are. Pushed, thanks!\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20230212233711.GA1316%40telsasoft.com\n\n\n",
"msg_date": "Mon, 13 Feb 2023 12:21:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: permission info at index 1 ...."
},
{
"msg_contents": "On 2023-Feb-13, Tom Lane wrote:\n\n> tender wang <tndrwang@gmail.com> writes:\n> > After a61b1f74823c commit, below query reports error:\n> > create table perm_test1(a int);\n> > create table perm_test2(b int);\n> > select subq.c0\n> > from (select (select a from perm_test1 order by a limit 1) as c0, b as c1\n> > from perm_test2 where false order by c0, c1) as subq where false;\n> > ERROR: permission info at index 1 (with relid=16457) does not match\n> > provided RTE (with relid=16460)\n> \n> Yeah, this was also reported by Justin Pryzby [1].\n> \n> > Below codes can fix this:\n> \n> Right you are. Pushed, thanks!\n\nThank you both!\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 13 Feb 2023 19:33:07 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: permission info at index 1 ...."
}
] |
[
{
"msg_contents": "Hi hackers,\n\nA colleague of mine wanted to use a ScanKey with SK_SEARCHNULL flag\nfor a heap-only scan (besides other ScanKeys) and discovered that the\nresult differs from what he would expect. Turned out that this is\ncurrently not supported as it is explicitly stated in skey.h.\n\nAlthough several workarounds come to mind this limitation may be\nreally of inconvenience for the extension authors, and implementing\ncorresponding support seems to be pretty straightforward.\n\nThe attached patch does this.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 13 Feb 2023 17:59:13 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Support SK_SEARCHNULL / SK_SEARCHNOTNULL for heap-only scans"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-13 17:59:13 +0300, Aleksander Alekseev wrote:\n> @@ -36,20 +36,36 @@ HeapKeyTest(HeapTuple tuple, TupleDesc tupdesc, int nkeys, ScanKey keys)\n> \t\tbool\t\tisnull;\n> \t\tDatum\t\ttest;\n> \n> -\t\tif (cur_key->sk_flags & SK_ISNULL)\n> -\t\t\treturn false;\n> +\t\tif (cur_key->sk_flags & (SK_SEARCHNULL | SK_SEARCHNOTNULL))\n> +\t\t{\n> +\t\t\t/* special case: looking for NULL / NOT NULL values */\n> +\t\t\tAssert(cur_key->sk_flags & SK_ISNULL);\n> \n> -\t\tatp = heap_getattr(tuple, cur_key->sk_attno, tupdesc, &isnull);\n> +\t\t\tatp = heap_getattr(tuple, cur_key->sk_attno, tupdesc, &isnull);\n> \n> -\t\tif (isnull)\n> -\t\t\treturn false;\n> +\t\t\tif (isnull && (cur_key->sk_flags & SK_SEARCHNOTNULL))\n> +\t\t\t\treturn false;\n> \n> -\t\ttest = FunctionCall2Coll(&cur_key->sk_func,\n> -\t\t\t\t\t\t\t\t cur_key->sk_collation,\n> -\t\t\t\t\t\t\t\t atp, cur_key->sk_argument);\n> +\t\t\tif (!isnull && (cur_key->sk_flags & SK_SEARCHNULL))\n> +\t\t\t\treturn false;\n\nShouldn't need to extract the column if we just want to know if it's NULL (see\nheap_attisnull()). Afaics the value isn't accessed after this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 08:36:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support SK_SEARCHNULL / SK_SEARCHNOTNULL for heap-only\n scans"
},
{
"msg_contents": "Hi Andres,\n\n> Shouldn't need to extract the column if we just want to know if it's NULL (see\n> heap_attisnull()). Afaics the value isn't accessed after this.\n\nMany thanks. Fixed.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 14 Feb 2023 12:10:21 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support SK_SEARCHNULL / SK_SEARCHNOTNULL for heap-only\n scans"
},
{
"msg_contents": "On 14/02/2023 11:10, Aleksander Alekseev wrote:\n> Hi Andres,\n> \n>> Shouldn't need to extract the column if we just want to know if it's NULL (see\n>> heap_attisnull()). Afaics the value isn't accessed after this.\n> \n> Many thanks. Fixed.\n\nI'm confused, what exactly is the benefit of this? What extension \nperforms a direct table scan bypassing the executor, searching for NULLs \nor not-NULLs?\n\nIf heapam can check for NULL/not-NULL more efficiently than the code \nthat calls it, sure let's do this, and let's also see the performance \ntest results to show the benefit. But then let's also modify the caller \nin nodeSeqScan.c to actually make use of it.\n\nFor tableam extensions, which may or may not support checking for NULLs, \nwe need to add an 'amsearchnulls' field to the table AM API.\n\n- Heikki\n\n\n\n",
"msg_date": "Wed, 22 Feb 2023 11:29:38 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support SK_SEARCHNULL / SK_SEARCHNOTNULL for heap-only\n scans"
},
{
"msg_contents": "Hi,\n\n> I'm confused, what exactly is the benefit of this? What extension\n> performs a direct table scan bypassing the executor, searching for NULLs\n> or not-NULLs?\n\nBasically any extension that accesses the tables without SPI in order\nto avoid parsing and planning overhead for relatively simple cases.\nOne can specify *several* ScanKeys for a single scan which will be an\nequivalent of WHERE condition(a) AND b IS NOT NULL /* AND ... */;\n\n> If heapam can check for NULL/not-NULL more efficiently than the code\n> that calls it [...]\n\nThis is done not for efficiency but rather for convenience.\nAdditionally, practice shows that for an extension author it's very\neasy to miss a comment in skey.h:\n\n\"\"\"\n * SK_SEARCHARRAY, SK_SEARCHNULL and SK_SEARCHNOTNULL are supported only\n * for index scans, not heap scans;\n\"\"\"\n\n... which results in many hours of debugging. The current interface is\nmisleading and counterintuitive.\n\nI did my best in order to add as few new assembly instructions as\npossible, and only one extra if/else branching. I don't expect any\nmeasurable performance difference since the bottleneck for SeqScans is\nunlikely to be CPU in the affected piece of code but rather\ndisk/locks/network/etc. On top of that the scenario when somebody is\nreally worried about the performance AND is using seqscans (not index\nscans) AND this particular seqscan is a bottleneck (not JOINs, etc)\nseems rare, to me at least.\n\n> For tableam extensions, which may or may not support checking for NULLs,\n> we need to add an 'amsearchnulls' field to the table AM API.\n\nThis will result in an unnecessary complication of the code and\nexpensive extra checks that for the default heapam will always return\ntrue. I would argue that what we actually want is to force any TAM to\nsupport checking for NULLs. 
At least until somebody working on a real\nTAM will complain about this limitation.\n\n> But then let's also modify the caller in nodeSeqScan.c to actually make use of it.\n\nThat could actually be a good point.\n\nIf memory serves I noticed that WHERE ... IS NULL queries don't even\nhit HeapKeyTest() and I was curious where the check for NULLs is\nactually made. As I understand, SeqNext() in nodeSeqscan.c simply\niterates over all the tuples it can find and pushes them to the parent\nnode. We could get a slightly better performance for certain queries\nif SeqNext() did the check internally.\n\nUnfortunately I'm not very experienced with plan nodes in order to go\ndown this rabbit hole straight away. I suggest we make one change at a\ntime and keep the patchset small as it was previously requested by\nmany people on several occasions (the 64-bit XIDs story, etc). I will\nbe happy to propose a follow-up patch accompanied by the benchmarks if\nand when we reach the consensus on this patch.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 22 Feb 2023 16:03:44 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support SK_SEARCHNULL / SK_SEARCHNOTNULL for heap-only\n scans"
},
{
"msg_contents": "On 22/02/2023 15:03, Aleksander Alekseev wrote:\n> Additionally, practice shows that for an extension author it's very\n> easy to miss a comment in skey.h:\n> \n> \"\"\"\n> * SK_SEARCHARRAY, SK_SEARCHNULL and SK_SEARCHNOTNULL are supported only\n> * for index scans, not heap scans;\n> \"\"\"\n> \n> ... which results in many hours of debugging. The current interface is\n> misleading and counterintuitive.\n\nPerhaps an Assert in heap_beginscan would be in order, to check that \nnone of those flags are set.\n\n>> But then let's also modify the caller in nodeSeqScan.c to actually make use of it.\n> \n> That could actually be a good point.\n> \n> If memory serves I noticed that WHERE ... IS NULL queries don't even\n> hit HeapKeyTest() and I was curious where the check for NULLs is\n> actually made. As I understand, SeqNext() in nodeSeqscan.c simply\n> iterates over all the tuples it can find and pushes them to the parent\n> node. We could get a slightly better performance for certain queries\n> if SeqNext() did the check internally.\n\nRight, it might be faster to perform the NULL-checks before checking \nvisibility, for example. Arbitrary quals cannot be evaluated before \nchecking visibility, but NULL checks could be.\n\n> Unfortunately I'm not very experienced with plan nodes in order to go\n> down this rabbit hole straight away. I suggest we make one change at a\n> time and keep the patchset small as it was previously requested by\n> many people on several occasions (the 64-bit XIDs story, etc). I will\n> be happy to propose a follow-up patch accompanied by the benchmarks if\n> and when we reach the consensus on this patch.\n\nOk, I don't think this patch on its own is a good idea, without the \nother parts, so I'll mark this as Returned with Feedback in the commitfest.\n\n- Heikki\n\n\n\n",
"msg_date": "Mon, 27 Feb 2023 10:24:21 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support SK_SEARCHNULL / SK_SEARCHNOTNULL for heap-only\n scans"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 12:24 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 22/02/2023 15:03, Aleksander Alekseev wrote:\n> > If memory serves I noticed that WHERE ... IS NULL queries don't even\n> > hit HeapKeyTest() and I was curious where the check for NULLs is\n> > actually made. As I understand, SeqNext() in nodeSeqscan.c simply\n> > iterates over all the tuples it can find and pushes them to the parent\n> > node. We could get a slightly better performance for certain queries\n> > if SeqNext() did the check internally.\n>\n> Right, it might be faster to perform the NULL-checks before checking\n> visibility, for example. Arbitrary quals cannot be evaluated before\n> checking visibility, but NULL checks could be.\n\nHi Heikki,\n\nThere's quite a bit of work left to do, but I wanted to check if the\nattached patch (0002, based on top of Aleks' 0001 from upthread) was\ngoing in the direction you were thinking. This patch pushes down any\nforced-null and not-null Vars as ScanKeys. It doesn't remove the\nredundant quals after turning them into ScanKeys, so it's needlessly\ninefficient, but there's still a decent speedup for some of the basic\nbenchmarks in 0003.\n\nPlans look something like this:\n\n# EXPLAIN SELECT * FROM t WHERE i IS NULL;\n QUERY PLAN\n------------------------------------------------------------\n Seq Scan on t (cost=0.00..1393.00 rows=49530 width=4)\n Scan Cond: (i IS NULL)\n Filter: (i IS NULL)\n(3 rows)\n\n# EXPLAIN SELECT * FROM t WHERE i = 3;\n QUERY PLAN\n--------------------------------------------------------\n Seq Scan on t (cost=0.00..1643.00 rows=1 width=4)\n Scan Cond: (i IS NOT NULL)\n Filter: (i = 3)\n(3 rows)\n\nThe non-nullable case worries me a bit because so many things imply IS\nNOT NULL. 
I think I need to do some sort of cost analysis using the\nnull_frac statistics -- it probably only makes sense to push an\nimplicit SK_SEARCHNOTNULL down to the AM layer if some fraction of\nrows would actually be filtered out -- but I'm not really sure how to\nchoose a threshold.\n\nIt would also be neat if `COUNT(col)` could push down\nSK_SEARCHNOTNULL, but I think that would require a new support\nfunction to rewrite the plan for an aggregate.\n\nAm I on the right track?\n\nThanks,\n--Jacob",
"msg_date": "Wed, 19 Jul 2023 16:44:31 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support SK_SEARCHNULL / SK_SEARCHNOTNULL for heap-only\n scans"
},
{
"msg_contents": "On 7/19/23 16:44, Jacob Champion wrote:\n> This patch pushes down any\n> forced-null and not-null Vars as ScanKeys. It doesn't remove the\n> redundant quals after turning them into ScanKeys, so it's needlessly\n> inefficient, but there's still a decent speedup for some of the basic\n> benchmarks in 0003.\n> \n> Plans look something like this:\n> \n> # EXPLAIN SELECT * FROM t WHERE i IS NULL;\n> QUERY PLAN\n> ------------------------------------------------------------\n> Seq Scan on t (cost=0.00..1393.00 rows=49530 width=4)\n> Scan Cond: (i IS NULL)\n> Filter: (i IS NULL)\n> (3 rows)\n\nRedundant clauses are now filtered out in v3. So the new plans look more\nlike what you'd expect:\n\n =# EXPLAIN SELECT * FROM table1 WHERE a IS NOT NULL AND b = 2;\n QUERY PLAN\n ---------------------------------------------------------\n Seq Scan on table1 (cost=0.00..3344.00 rows=1 width=4)\n Scan Cond: (a IS NOT NULL)\n Filter: (b = 2)\n (3 rows)\n\n> The non-nullable case worries me a bit because so many things imply IS\n> NOT NULL. I think I need to do some sort of cost analysis using the\n> null_frac statistics -- it probably only makes sense to push an\n> implicit SK_SEARCHNOTNULL down to the AM layer if some fraction of\n> rows would actually be filtered out -- but I'm not really sure how to\n> choose a threshold.\n\nv3 also uses the nullfrac and the expected cost of the filter clauses to\ndecide whether to promote a derived IS NOT NULL condition to a ScanKey.\n(Explicit IS [NOT] NULL clauses are always promoted.) I'm still not sure\nhow to fine-tune the expected cost of the ScanKey, but the number I've\nchosen for now (`0.1 * cpu_operator_cost`) doesn't seem to regress my\nbenchmarks, for whatever that's worth.\n\nI recorded several of the changes to the regression EXPLAIN output, but\nI've left a few broken because I'm not sure if they're useful or if I\nshould just disable the optimization. 
For example:\n\n explain (analyze, costs off, summary off, timing off)\n select * from list_part where a = list_part_fn(1) + a;\n QUERY PLAN\n ------------------------------------------------------------------\n Append (actual rows=0 loops=1)\n -> Seq Scan on list_part1 list_part_1 (actual rows=0 loops=1)\n + Scan Cond: (a IS NOT NULL)\n Filter: (a = (list_part_fn(1) + a))\n Rows Removed by Filter: 1\n -> Seq Scan on list_part2 list_part_2 (actual rows=0 loops=1)\n + Scan Cond: (a IS NOT NULL)\n Filter: (a = (list_part_fn(1) + a))\n Rows Removed by Filter: 1\n -> Seq Scan on list_part3 list_part_3 (actual rows=0 loops=1)\n + Scan Cond: (a IS NOT NULL)\n Filter: (a = (list_part_fn(1) + a))\n Rows Removed by Filter: 1\n -> Seq Scan on list_part4 list_part_4 (actual rows=0 loops=1)\n + Scan Cond: (a IS NOT NULL)\n Filter: (a = (list_part_fn(1) + a))\n Rows Removed by Filter: 1\n\nThese new conditions are due to a lack of statistics for the tiny\npartitions; the filters are considered expensive enough that the savings\nagainst a DEFAULT_UNK_SEL proportion of NULLs would hypothetically be\nworth it. Best I can tell, this is a non-issue, since autovacuum will\nfix the problem by the time the table grows to the point where the\npointless ScanKey would cause a slowdown. But it sure _looks_ like a\nmistake, especially since these particular partitions can't even contain\nNULL. Do I need to do something about this short-lived case?\n\nThere's still other work to do -- for instance, I think my modifications\nto the filter clauses have probably broken some isolation levels until I\nfix up SeqRecheck(), and better benchmarks would be good -- but I think\nthis is ready for early CF feedback.\n\nThanks,\n--Jacob",
"msg_date": "Wed, 30 Aug 2023 13:55:24 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support SK_SEARCHNULL / SK_SEARCHNOTNULL for heap-only\n scans"
}
] |
[
{
"msg_contents": "Hi,\n\nwhile experimenting with BRIN indexes after a couple FOSDEM discussions,\nI ran into the existing limitation that BRIN indexes don't handle array\nscan keys. So BRIN indexes can be used for conditions like\n\n WHERE a IN (1,2,3,4,5)\n\nbut we essentially treat the values as individual scan keys, and for\neach one we scan the BRIN index and build/update the bitmap. Which for\nlarge indexes may be fairly expensive - the cost is proportional to the\nnumber of values, so if building the bitmap for 1 value takes 10ms, for\n100 values it'll take ~1000ms.\n\nIt's not hard to construct cases like this (e.g. when using indexes with\nsmall pages_per_range values) etc. Of course, if the query does a lot of\nother expensive stuff, this cost may be insignificant.\n\nI'm not sure how often people do queries with such conditions. But I've\nbeen experimenting with features that'd build such paths, so I took a\nstab at a PoC, which can significantly reduce the time needed to build\nthe bitmaps. And there's a couple more interesting opportunities.\n\n\n0001 - Support SK_SEARCHARRAY in BRIN minmax\n--------------------------------------------\nThe 0001 part does a \"naive\" SK_SEARCHARRAY implementation for minmax.\nIt simply sets amsearcharray=true and then tweaks the consistent\nfunction to handle both the scalar and array scan keys.\n\nThis is obviously rather inefficient, because the array is searched\nlinearly. So yes, we don't walk the index repeatedly, but we have to\ncompare each range to (almost-)all values.\n\n\n0002 - Sort the array in brinrescan() and do binsearch\n------------------------------------------------------\nThere's a simple way to optimize the naive approach by sorting the array\nand then searching in this array. If the array is sorted, we can search\nfor the first value >= minvalue, and see if that is consistent (i.e. 
if\nit's <= maxval).\n\nIn my experiments this cuts the time needed to build the bitmap for\narray to pretty much the same as for a single value.\n\nI think this is similar to the preprocessing of scan keys in b-tree, so\nbrinrescan() is a natural way to do the sort. The problem however is\nwhere to store the result.\n\nIdeally, we'd store it in BrinOpaque (just like BTScanOpaque in btree),\nbut the problem is we don't pass that to the consistent functions. Those\nonly get the ScanKeys and stuff extracted from BrinOpaque.\n\nWe might add a parameter to the \"consistent\" function, but that\nconflicts with not wanting to break existing extensions implementing\ntheir own BRIN indexes. We allow the opclasses to define \"consistent\"\nwith either 4 or 5 arguments. Adding an argument would mean 5 or 6\narguments, but because of the backwards compatibility we'd need to\nsupport existing opclasses, and 5 is ambiguous :-/\n\nIn hindsight, I would probably not chose supporting both 4 and 5\narguments again. It makes it harder for us to maintain the code to make\nlife easier for extensions, but I'm not aware of any out-of-core BRIN\nopclasses anyway. So I'd probably just change the API, it's pretty easy\nto update existing extensions.\n\nThis patch however does a much simpler thing - it just replaces the\narray in the SK_SEARCHARRAY scan key with a sorted one. That works for\nfor minmax, but not for bloom/inclusion, because those are not based on\nsorting. And the ArrayType is not great for minmax either, because it\nmeans we need to deconstruct it again and again, for each range. 
It'd be\nmuch better to deconstruct the array once.\n\nI'll get back to this ...\n\n\n0003 - Support SK_SEARCHARRAY in BRIN inclusion\n-----------------------------------------------\nTrivial modification to support array scan keys, can't benefit from\nsorting the array.\n\n\n0004 - Support SK_SEARCHARRAY in BRIN bloom\n-------------------------------------------\nTrivial modification to support array scan keys, can't benefit from\nsorted array either.\n\nBut we might \"preprocess\" the keys in a different way - bloom needs to\ncalculate two hashes per key, and at the moment it happens again and\nagain for each range. So if you have 1M ranges, and SK_SEARCHARRAY query\nwith 100 values, we'll do 100M calls to PROCNUM_HASH and 200M calls to\nhash_uint32_extended(). And our hash functions are pretty expensive,\ncertainly compared to the fast functions often used for bloom filters.\n\nSo the preprocessing might actually calculate the hash functions once,\nand then only reuse those in the \"consistent\" function.\n\n0005 is a dirty PoC illustrating the benefit of caching the hashes.\n\nUnfortunately, this complicates things, because it means:\n\n* The scan key preprocessing is not universal for all BRIN opclasses,\n because some opclasses, i.e. 
each BRIN opclass might have optional\n BRIN_PROCNUM_PREPROCESS which would preprocess the keys the way the\n opclass would like.\n\n* We can simply replace the array in the scan key the way minmax does\n that with the sorted array, because the data type is not the same\n (hashes are uint64).\n\n\nWhen I started to write this e-mail I thought there's pretty much just\none way to move this forward:\n\n1) Add a BRIN_PROCNUM_PREPROCESS to BRIN, doing the preprocessing (if\n not defined, the key is not preprocessed.\n\n2) Store the preprocessed keys in BrinOpaque.\n\n3) Modify the BRIN API to allow passing the preprocessed keys.\n\nAs mentioned earlier, I'm not sure how difficult would it be to maintain\nbackwards compatibility, considering the number of arguments of the\nconsistent function would be ambiguous.\n\nMaybe the existence of BRIN_PROCNUM_PREPROCESS would be enough to decide\nthis - if it's decided, no keys are preprocessed (and the opclass would\nnot support SK_SEARCHARRAY).\n\n\nBut now I realize maybe we can do without adding parameters to the\n\"consistent\" function. We might stash \"preprocessed\" scankeys into\nBrinOpaque, and pass them to the consistent function instead of the\n\"original\" scan keys (at least when the BRIN_PROCNUM_PREPROCESS). In\nfact, I see ScanKey even allows AM-specific flags, maybe it'd be useful\nto to mark the preprocessed keys.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 13 Feb 2023 18:01:20 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "Hi,\n\nAttached is a patch series adopting the idea of scan key preprocessing\nin brinrescan(), producing preprocessed scan keys there. It turns out to work pretty\nnicely, and it allows different opclasses to do different things:\n\n- minmax / minmax-multi: sort the array values (leave scalars alone)\n- inclusion: no preprocessing\n- bloom: precalculate hash values\n\nThe _consistent functions are modified to leverage the preprocessed\nkeys. I wonder if it should check the existence of the (optional)\nprocedure, and fall back to the non-optimized search if not defined.\n\nThat would allow opclasses (e.g. from extensions) to keep using the\nbuilt-in consistent function without tweaking the definition to also\nhave the preprocess function. But that seems like a rather minor issue,\nespecially because the number of external opclasses is tiny and updating\nthe definition to also reference the preprocess function is trivial. I\ndon't think it's worth the extra code complexity.\n\n0001 and 0002 are minor code cleanup in the opclasses introduced in PG\n13. There are a couple of places assigning boolean values to Datum variables,\nand misleading comments.\n\n0003 is a minor refactoring making the Bloom filter size calculation\neasier to reuse.\n\n0004 introduces the optional \"preprocess\" opclass procedure, and calls\nit for keys from brinrescan().\n\n0005-0008 add the preprocess procedure to the various BRIN types, and\nadjust the consistent procedures accordingly.\n\n\nAttached is a Python script I used to measure this. It builds a table\nwith 10M rows, with sequential but slightly randomized values (a value may move\nwithin 1% of the table), and minmax/bloom indexes. 
The table has ~500MB,\nthe indexes are using pages_per_range=1 (tiny, but simulates large table\nwith regular page ranges).\n\nAnd then the script queries the table with different number of random\nvalues in the \"IN (...)\" clause, and measures query duration (in ms).\n\nThe results look like this:\n\n int text\n index values master patched master patched int text\n ------------------------------------------------------------------\n minmax 1 7 7 27 25 100% 92%\n 10 66 15 277 70 23% 25%\n 20 132 16 558 85 12% 15%\n 50 331 21 1398 102 7% 7%\n 100 663 29 2787 118 4% 4%\n 500 3312 81 13964 198 2% 1%\n ------------------------------------------------------------------\n bloom 1 30 27 23 18 92% 77%\n 10 302 208 231 35 69% 15%\n 20 585 381 463 54 65% 12%\n 50 1299 761 1159 111 59% 10%\n 100 2194 1099 2312 204 50% 9%\n 500 6850 1228 11559 918 18% 8%\n ------------------------------------------------------------------\n\nWith minmax, consider for example queries with 20 values, which used to\ntake ~130ms, but with the patch this drops to 16ms (~23%). And the\nimprovement is even more significant for larger number of values. For\ntext data the results are pretty comparable.\n\nWith bloom indexes, the improvements are proportional to how expensive\nthe hash function is (for the data type). For int the hash is fairly\ncheap, so the improvement is rather moderate (but visible). For text,\nthe improvements are way more significant - for 10 values the duration\nis reduced by a whopping 85%.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 16 Feb 2023 02:35:47 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "cfbot identified a couple of issues in the patches:\n\n1) not handling NULLs correctly (or rather at all). There was a FIXME,\nso I took this as a sign it's time to finally address that.\n\n2) minmax-multi did not fully adopt the preprocessed values in the\nsecond part of the _consistent function\n\nThe patches also add a bunch of regression tests to improve coverage.\n\n\nWhile adding those, I ran into an interesting behavior with BRIN bloom\nindexes. If you have such an index on a bigint column, then this won't use\nthe index:\n\n    SELECT * FROM t WHERE b = 82;\n\nunless you cast the constant to bigint like this:\n\n    SELECT * FROM t WHERE b = 82::bigint;\n\nI vaguely remember dealing with this while working on the bloom indexes,\nand concluding this is OK. But what's interesting is that with multiple\nvalues in the IN clause it works and this will use the index:\n\n    SELECT * FROM t WHERE b IN (82, 83);\n\nThat's a bit surprising.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 17 Feb 2023 03:50:54 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "Hi,\n\nApparently there was a bug in handling IS [NOT] NULL scan keys in the\nbloom opclass, so here's a fixed patch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 18 Feb 2023 20:49:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "I had a quick look at just the preliminary cleanup patches:\n\n> 0001-BRIN-bloom-cleanup-20230218.patch\n\nLooks good to me\n\n> 0002-BRIN-minmax-multi-cleanup-20230218.patch\n\nLooks good, although it would feel more natural to me to do it the other \nway round, and define 'matches' as 'bool matches', and use DatumGetBool.\n\nNot new with this patch, but I find the 'matches' and 'matching' \nvariables a bit strange. Wouldn't it be simpler to have just one variable?\n\n> 0003-Introduce-bloom_filter_size-20230218.patch\n\nLooks good\n\n> 0004-Add-minmax-multi-inequality-tests-20230218.patch\n\nLooks good\n\n> +SELECT i/5 + mod(911 * i + 483, 25),\n> + i/10 + mod(751 * i + 221, 41)\n\nPeculiar formulas. Was there a particular reason for these values?\n\n- Heikki\n\n\n\n",
"msg_date": "Fri, 24 Feb 2023 23:07:33 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "On 2/24/23 22:07, Heikki Linnakangas wrote:\n> I had a quick look at just the preliminary cleanup patches:\n> \n>> 0001-BRIN-bloom-cleanup-20230218.patch\n> \n> Looks good to me\n> \n>> 0002-BRIN-minmax-multi-cleanup-20230218.patch\n> \n> Looks good, although it would feel more natural to me to do it the other\n> way round, and define 'matches' as 'bool matches', and use DatumGetBool.\n> \n\nYeah, probably. I was trying to only do the minimal change because of\n(maybe) backpatching this.\n\n> Not new with this patch, but I find the 'matches' and 'matching'\n> variables a bit strange. Wouldn't it be simpler to have just one variable?\n> \n\nTrue. I don't recall why we did it this way.\n\n>> 0003-Introduce-bloom_filter_size-20230218.patch\n> \n> Looks good\n> \n>> 0004-Add-minmax-multi-inequality-tests-20230218.patch\n> \n> Looks good\n> \n>> +SELECT i/5 + mod(911 * i + 483, 25),\n>> +       i/10 + mod(751 * i + 221, 41)\n> \n> Peculiar formulas. Was there a particular reason for these values?\n> \n\nNo, not really. I simply wanted random-looking data, but reproducible\nand deterministic. And a linear congruential generator is a simple way to\ndo that. I just picked a couple of co-prime numbers, and that's it.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 25 Feb 2023 12:45:28 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "Here's a rebased version of this patch series, no other changes.\n\nOn 2/25/23 12:45, Tomas Vondra wrote:\n> On 2/24/23 22:07, Heikki Linnakangas wrote:\n>> I had a quick look at just the preliminary cleanup patches:\n>>\n>>> 0001-BRIN-bloom-cleanup-20230218.patch\n>>\n>> Looks good to me\n>>\n>>> 0002-BRIN-minmax-multi-cleanup-20230218.patch\n>>\n>> Looks good, although it would feel more natural to me to do it the other\n>> way round, and define 'matches' as 'bool matches', and use DatumGetBool.\n>>\n> \n> Yeah, probably. I was trying to only do the minimal change because of\n> (maybe) backpatching this.\n> \n\nI haven't changed this.\n\nHeikki, do you think these cleanup parts should be backpatched? If yes,\ndo you still think it should be reworked to do it the other way, or like\nI did it do minimize the change?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 8 Jun 2023 14:03:47 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "Here's an updated version of the patch series.\n\nI've polished and pushed the first three patches with cleanup, tests to\nimprove test coverage and so on. I chose not to backpatch those - I\nplanned to do that to make future backpatches simpler, but the changes\nended up less disruptive than expected.\n\nThe remaining patches are just about adding SK_SEARCHARRAY to BRIN.\n\n0001 - adds the optional preprocess procedure, calls it from brinrescan\n\n0002 to 0005 - adds the support to the existing BRIN opclasses\n\nThe main open question I have is what exactly it means that the\nprocedure is optional. In particular, should it be supported to have a\nBRIN opclass without the \"preprocess\" procedure but using the other\nbuilt-in support procedures?\n\nFor example, imagine you have a custom BRIN opclass in an extension (for\na custom data type or something). This does not need to implement any\nprocedures, it can just call the existing built-in ones. Of course, this\nwon't get the \"preprocess\" procedure automatically.\n\nShould we support such opclasses or should we force the extension to be\nupdated by adding a preprocess procedure? I'd say \"optional\" means we\nshould support them (otherwise it'd not really be optional).\n\nThe reason why this matters is that \"amsearcharray\" is an AM-level flag,\nbut the support procedure is defined by the opclass. So the consistent\nfunction needs to handle SK_SEARCHARRAY keys both with and without\npreprocessing.\n\nThat's mostly what I did for all existing BRIN opclasses (it's a bit\nconfusing that opclass may refer to either the \"generic\" minmax or the\nopclass defined for a particular data type). 
All the opclasses now\nhandle three cases:\n\n1) scalar keys (just like before, with amsearcharray=false)\n\n2) array keys with preprocessing (sorted array, array of hashes, ...)\n\n3) array keys without preprocessing (for compatibility with old\n   opclasses missing the optional preprocess procedure)\n\nThe current code is a bit ugly, because it duplicates a bunch of code -\noption (3) mostly does (1) in a loop. I'm confident this can\nbe reduced by refactoring and reusing some of the \"shared\" code.\n\nThe question is if my interpretation of what \"optional\" procedure means\nis reasonable. Thoughts?\n\nThe other thing is how to test this \"compatibility\" code. I assume we\nwant to have the procedure for all built-in opclasses, so that won't\nexercise it. I did test it by temporarily removing the procedure from a\ncouple of pg_amproc.dat entries. I guess creating a custom opclass in the\nregression test is the right solution.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 2 Jul 2023 18:09:12 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "On 02/07/2023 19:09, Tomas Vondra wrote:\n> Here's an updated version of the patch series.\n> \n> I've polished and pushed the first three patches with cleanup, tests to\n> improve test coverage and so on. I chose not to backpatch those - I\n> planned to do that to make future backpatches simpler, but the changes\n> ended up less disruptive than expected.\n> \n> The remaining patches are just about adding SK_SEARCHARRAY to BRIN.\n> \n> 0001 - adds the optional preprocess procedure, calls it from brinrescan\n> \n> 0002 to 0005 - adds the support to the existing BRIN opclasses\n\nCould you implement this completely in the consistent-function, by \ncaching the sorted array in fn_extra, without adding the new preprocess \nprocedure? On first call, when fn_extra == NULL, sort the array and \nstash it in fn_extra.\n\nI don't think that works, because fn_extra isn't reset when the scan \nkeys change on rescan. We could reset it, and document that you can use \nfn_extra for per-scankey caching. There's some precedent for not \nresetting it though, see commit d22a09dc70f. But we could provide an \nopaque per-scankey scratch space like that somewhere else. In BrinDesc, \nperhaps.\n\nThe new preprocess support function feels a bit too inflexible to me. \nTrue, you can store whatever you want in the ScanKey that it returns, \nbut since that's the case, why not just make it void *? It seems that \nthe constraint here was that you need to pass a ScanKey to the \nconsistent function, because the consistent function's signature is what \nit is. But we can change the signature, if we introduce a new support \namproc number for it.\n\n> The main open question I have is what exactly does it mean that the\n> procedure is optional. 
In particular, should it be supported to have a\n> BRIN opclass without the \"preprocess\" procedure but using the other\n> built-in support procedures?\n>\n> For example, imagine you have a custom BRIN opclass in an extension (for\n> a custom data type or something). This does not need to implement any\n> procedures, it can just call the existing built-in ones. Of course, this\n> won't get the \"preprocess\" procedure automatically.\n> \n> Should we support such opclasses or should we force the extension to be\n> updated by adding a preprocess procedure? I'd say \"optional\" means we\n> should support (otherwise it'd not really optional).\n> \n> The reason why this matters is that \"amsearcharray\" is AM-level flag,\n> but the support procedure is defined by the opclass. So the consistent\n> function needs to handle SK_SEARCHARRAY keys both with and without\n> preprocessing.\n> \n> That's mostly what I did for all existing BRIN opclasses (it's a bit\n> confusing that opclass may refer to both the \"generic\" minmax or the\n> opclass defined for a particular data type). All the opclasses now\n> handle three cases:\n> \n> 1) scalar keys (just like before, with amsearcharray=fase)\n> \n> 2) array keys with preprocessing (sorted array, array of hashes, ...)\n> \n> 3) array keys without preprocessing (for compatibility with old\n> opclasses missing the optional preprocess procedure)\n> \n> The current code is a bit ugly, because it duplicates a bunch of code,\n> because the option (3) mostly does (1) in a loop. I'm confident this can\n> be reduced by refactoring and reusing some of the \"shared\" code.\n> \n> The question is if my interpretation of what \"optional\" procedure means\n> is reasonable. Thoughts?\n> \n> The other thing is how to test this \"compatibility\" code. I assume we\n> want to have the procedure for all built-in opclasses, so that won't\n> exercise it. I did test it by temporarily removing the procedure from a\n> couple pg_amproc.dat entries. 
I guess creating a custom opclass in the\n> regression test is the right solution.\n\nIt would be unpleasant to force all BRIN opclasses to immediately \nimplement the searcharray-logic. If we don't want to do that, we need to \nimplement generic SK_SEARCHARRAY handling in BRIN AM itself. That would \nbe pretty easy, right? Just call the regular consistent function for \nevery element in the array.\n\nIf an opclass wants to provide a faster/better implementation, it can \nprovide a new consistent support procedure that supports that. Let's \nassign a new amproc number for new-style consistent function, which must \nsupport SK_SEARCHARRAY, and pass it some scratch space where it can \ncache whatever per-scankey data. Because it gets a new amproc number, we \ncan define the arguments as we wish. We can pass a pointer to the \nper-scankey scratch space as a new argument, for example.\n\nWe did this backwards-compatibility dance with the 3/4-argument variants \nof the current consistent functions. But I think assigning a whole new \nprocedure number is better than looking at the number of arguments.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Sun, 9 Jul 2023 00:57:14 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "On 7/8/23 23:57, Heikki Linnakangas wrote:\n> On 02/07/2023 19:09, Tomas Vondra wrote:\n>> Here's an updated version of the patch series.\n>>\n>> I've polished and pushed the first three patches with cleanup, tests to\n>> improve test coverage and so on. I chose not to backpatch those - I\n>> planned to do that to make future backpatches simpler, but the changes\n>> ended up less disruptive than expected.\n>>\n>> The remaining patches are just about adding SK_SEARCHARRAY to BRIN.\n>>\n>> 0001 - adds the optional preprocess procedure, calls it from brinrescan\n>>\n>> 0002 to 0005 - adds the support to the existing BRIN opclasses\n> \n> Could you implement this completely in the consistent-function, by\n> caching the sorted array in fn_extra, without adding the new preprocess\n> procedure? On first call, when fn_extra == NULL, sort the array and\n> stash it in fn_extra.\n> \n> I don't think that works, because fn_extra isn't reset when the scan\n> keys change on rescan. We could reset it, and document that you can use\n> fn_extra for per-scankey caching. There's some precedence for not\n> resetting it though, see commit d22a09dc70f. But we could provide an\n> opaque per-scankey scratch space like that somewhere else. In BrinDesc,\n> perhaps.\n> \n\nHmm, yeah. BrinDesc seems like a good place for such scratch space ...\n\nAnd it's seem to alleviate most of the compatibility issues, because\nit'd make the preprocessing a responsibility of the consistent function,\ninstead of doing it in a separate optional procedure (and having to deal\nwith cases when it does not exist). 
It would not even need a separate\nprocnum.\n\n> The new preprocess support function feels a bit too inflexible to me.\n> True, you can store whatever you want in the ScanKey that it returns,\n> but since that's the case, why not just make it void * ?It seems that\n> the constraint here was that you need to pass a ScanKey to the\n> consistent function, because the consistent function's signature is what\n> it is. But we can change the signature, if we introduce a new support\n> amproc number for it.\n> \n\nNot sure I follow - what should be made (void *)? Oh, you mean not\npassing the preprocessed array as a scan key at all, and instead passing\nit as a new (void*) parameter to the (new) consistent function?\n\nYeah, I was trying to stick to the existing signature of the consistent\nfunction, to minimize the necessary changes.\n\n>> The main open question I have is what exactly does it mean that the\n>> procedure is optional. In particular, should it be supported to have a\n>> BRIN opclass without the \"preprocess\" procedure but using the other\n>> built-in support procedures?\n>>\n>> For example, imagine you have a custom BRIN opclass in an extension (for\n>> a custom data type or something). This does not need to implement any\n>> procedures, it can just call the existing built-in ones. Of course, this\n>> won't get the \"preprocess\" procedure automatically.\n>>\n>> Should we support such opclasses or should we force the extension to be\n>> updated by adding a preprocess procedure? I'd say \"optional\" means we\n>> should support (otherwise it'd not really optional).\n>>\n>> The reason why this matters is that \"amsearcharray\" is AM-level flag,\n>> but the support procedure is defined by the opclass. 
So the consistent\n>> function needs to handle SK_SEARCHARRAY keys both with and without\n>> preprocessing.\n>>\n>> That's mostly what I did for all existing BRIN opclasses (it's a bit\n>> confusing that opclass may refer to both the \"generic\" minmax or the\n>> opclass defined for a particular data type). All the opclasses now\n>> handle three cases:\n>>\n>> 1) scalar keys (just like before, with amsearcharray=fase)\n>>\n>> 2) array keys with preprocessing (sorted array, array of hashes, ...)\n>>\n>> 3) array keys without preprocessing (for compatibility with old\n>> opclasses missing the optional preprocess procedure)\n>>\n>> The current code is a bit ugly, because it duplicates a bunch of code,\n>> because the option (3) mostly does (1) in a loop. I'm confident this can\n>> be reduced by refactoring and reusing some of the \"shared\" code.\n>>\n>> The question is if my interpretation of what \"optional\" procedure means\n>> is reasonable. Thoughts?\n>>\n>> The other thing is how to test this \"compatibility\" code. I assume we\n>> want to have the procedure for all built-in opclasses, so that won't\n>> exercise it. I did test it by temporarily removing the procedure from a\n>> couple pg_amproc.dat entries. I guess creating a custom opclass in the\n>> regression test is the right solution.\n> \n> It would be unpleasant to force all BRIN opclasses to immediately\n> implement the searcharray-logic. If we don't want to do that, we need to\n> implement generic SK_SEARCHARRAY handling in BRIN AM itself. That would\n> be pretty easy, right? 
Just call the regular consistent function for\n> every element in the array.\n> \n\nTrue, although the question is how many out-of-core opclasses there are.\nMy impression is the number is pretty close to 0, in which case we're\nmaking ourselves jump through all kinds of hoops, making the code\nmore complex, with almost no benefit in the end.\n\n> If an opclass wants to provide a faster/better implementation, it can\n> provide a new consistent support procedure that supports that. Let's\n> assign a new amproc number for new-style consistent function, which must\n> support SK_SEARCHARRAY, and pass it some scratch space where it can\n> cache whatever per-scankey data. Because it gets a new amproc number, we\n> can define the arguments as we wish. We can pass a pointer to the\n> per-scankey scratch space as a new argument, for example.\n> \n> We did this backwards-compatibility dance with the 3/4-argument variants\n> of the current consistent functions. But I think assigning a whole new\n> procedure number is better than looking at the number of arguments.\n> \n\nI actually somewhat hate the 3/4-argument dance, and I'm opposed to\ndoing that sort of thing again. First, I'm not quite convinced it's\nworth the effort to jump through this hoop (I recall this being one of\nthe headaches when fixing the BRIN NULL handling). Second, it can only\nbe done once - imagine we now need to add a new optional parameter.\nPresumably, we'd need to keep the existing 3/4 variants, and add new 4/5\nvariants. At which point 4 is ambiguous.\n\nYes, my previous message was mostly about backwards compatibility, and\nthis may seem a bit like an argument against it. 
But that message was\nmore of a question: \"If we do this, is it actually backwards compatible the\nway we want/need?\"\n\nAnyway, I think the BrinDesc scratch space is a neat idea, I'll try\ndoing it that way and report back in a couple of days.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 9 Jul 2023 18:16:26 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "On 09/07/2023 19:16, Tomas Vondra wrote:\n> On 7/8/23 23:57, Heikki Linnakangas wrote:\n>> The new preprocess support function feels a bit too inflexible to me.\n>> True, you can store whatever you want in the ScanKey that it returns,\n>> but since that's the case, why not just make it void * ? It seems that\n>> the constraint here was that you need to pass a ScanKey to the\n>> consistent function, because the consistent function's signature is what\n>> it is. But we can change the signature, if we introduce a new support\n>> amproc number for it.\n> \n> Now sure I follow - what should be made (void *)? Oh, you mean not\n> passing the preprocessed array as a scan key at all, and instead passing\n> it as a new (void*) parameter to the (new) consistent function?\n\nRight.\n\n>> It would be unpleasant to force all BRIN opclasses to immediately\n>> implement the searcharray-logic. If we don't want to do that, we need to\n>> implement generic SK_SEARCHARRAY handling in BRIN AM itself. That would\n>> be pretty easy, right? Just call the regular consistent function for\n>> every element in the array.\n> \n> True, although the question is how many out-of-core opclasses are there.\n> My impression is the number is pretty close to 0, in which case we're\n> making ourselves to jump through all kinds of hoops, making the code\n> more complex, with almost no benefit in the end.\n\nPerhaps. How many of the opclasses can do something smart with \nSEARCHARRAY? If the answer is \"all\" or \"almost all\", then it seems \nreasonable to just require them all to handle it. If the answer is \n\"some\", then it would still be nice to provide a naive default \nimplementation in the AM itself. Otherwise all the opclasses need to \ninclude a bunch of boilerplate to support SEARCHARRAY.\n\n>> If an opclass wants to provide a faster/better implementation, it can\n>> provide a new consistent support procedure that supports that. 
Let's\n>> assign a new amproc number for new-style consistent function, which must\n>> support SK_SEARCHARRAY, and pass it some scratch space where it can\n>> cache whatever per-scankey data. Because it gets a new amproc number, we\n>> can define the arguments as we wish. We can pass a pointer to the\n>> per-scankey scratch space as a new argument, for example.\n>>\n>> We did this backwards-compatibility dance with the 3/4-argument variants\n>> of the current consistent functions. But I think assigning a whole new\n>> procedure number is better than looking at the number of arguments.\n> \n> I actually somewhat hate the 3/4-argument dance, and I'm opposed to\n> doing that sort of thing again. First, I'm not quite convinced it's\n> worth the effort to jump through this hoop (I recall this being one of\n> the headaches when fixing the BRIN NULL handling). Second, it can only\n> be done once - imagine we now need to add a new optional parameter.\n> Presumably, we'd need to keep the existing 3/4 variants, and add new 4/5\n> variants. At which point 4 is ambiguous.\n\nMy point is that we should assign a new amproc number to distinguish the \nnew variant, instead of looking at the number of arguments. That way \nit's not ambiguous, and you can define whatever arguments you want for \nthe new variant.\n\nYet another idea is to introduce a new amproc for a consistent function \nthat *only* handles the SEARCHARRAY case, and keep the old consistent \nfunction as it is for the scalars. So every opclass would need to \nimplement the current consistent function, just like today. But if an \nopclass wants to support SEARCHARRAY, it could optionally also provide \nan \"consistent_array\" function.\n\n> Yes, my previous message was mostly about backwards compatibility, and\n> this may seem a bit like an argument against it. 
But that message was\n> more a question \"If we do this, is it actually backwards compatible the\n> way we want/need?\")\n> \n> Anyway, I think the BrinDesc scratch space is a neat idea, I'll try\n> doing it that way and report back in a couple days.\n\nCool. In 0005-Support-SK_SEARCHARRAY-in-BRIN-bloom-20230702.patch, you \nused the preprocess function to pre-calculate the scankey's hash, even \nfor scalars. You could use the scratch space in BrinDesc for that, \nbefore doing anything with SEARCHARRAYs.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Sun, 9 Jul 2023 21:05:53 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "On 7/9/23 20:05, Heikki Linnakangas wrote:\n> On 09/07/2023 19:16, Tomas Vondra wrote:\n>> On 7/8/23 23:57, Heikki Linnakangas wrote:\n>>> The new preprocess support function feels a bit too inflexible to me.\n>>> True, you can store whatever you want in the ScanKey that it returns,\n>>> but since that's the case, why not just make it void * ? It seems that\n>>> the constraint here was that you need to pass a ScanKey to the\n>>> consistent function, because the consistent function's signature is what\n>>> it is. But we can change the signature, if we introduce a new support\n>>> amproc number for it.\n>>\n>> Now sure I follow - what should be made (void *)? Oh, you mean not\n>> passing the preprocessed array as a scan key at all, and instead passing\n>> it as a new (void*) parameter to the (new) consistent function?\n> \n> Right.\n> \n>>> It would be unpleasant to force all BRIN opclasses to immediately\n>>> implement the searcharray-logic. If we don't want to do that, we need to\n>>> implement generic SK_SEARCHARRAY handling in BRIN AM itself. That would\n>>> be pretty easy, right? Just call the regular consistent function for\n>>> every element in the array.\n>>\n>> True, although the question is how many out-of-core opclasses are there.\n>> My impression is the number is pretty close to 0, in which case we're\n>> making ourselves to jump through all kinds of hoops, making the code\n>> more complex, with almost no benefit in the end.\n> \n> Perhaps. How many of the opclasses can do something smart with\n> SEARCHARRAY? If the answer is \"all\" or \"almost all\", then it seems\n> reasonable to just require them all to handle it. If the answer is\n> \"some\", then it would still be nice to provide a naive default\n> implementation in the AM itself. 
Otherwise all the opclasses need to\n> include a bunch of boilerplate to support SEARCHARRAY.\n> \n\nFor the built-in, I think all can do something smart.\n\nFor external, hard to say - my guess is they could do something\ninteresting too.\n\n\n>>> If an opclass wants to provide a faster/better implementation, it can\n>>> provide a new consistent support procedure that supports that. Let's\n>>> assign a new amproc number for new-style consistent function, which must\n>>> support SK_SEARCHARRAY, and pass it some scratch space where it can\n>>> cache whatever per-scankey data. Because it gets a new amproc number, we\n>>> can define the arguments as we wish. We can pass a pointer to the\n>>> per-scankey scratch space as a new argument, for example.\n>>>\n>>> We did this backwards-compatibility dance with the 3/4-argument variants\n>>> of the current consistent functions. But I think assigning a whole new\n>>> procedure number is better than looking at the number of arguments.\n>>\n>> I actually somewhat hate the 3/4-argument dance, and I'm opposed to\n>> doing that sort of thing again. First, I'm not quite convinced it's\n>> worth the effort to jump through this hoop (I recall this being one of\n>> the headaches when fixing the BRIN NULL handling). Second, it can only\n>> be done once - imagine we now need to add a new optional parameter.\n>> Presumably, we'd need to keep the existing 3/4 variants, and add new 4/5\n>> variants. At which point 4 is ambiguous.\n> \n> My point is that we should assign a new amproc number to distinguish the\n> new variant, instead of looking at the number of arguments. That way\n> it's not ambiguous, and you can define whatever arguments you want for\n> the new variant.\n> \n\nRight, I agree.\n\n> Yet another idea is to introduce a new amproc for a consistent function\n> that *only* handles the SEARCHARRAY case, and keep the old consistent\n> function as it is for the scalars. 
So every opclass would need to\n> implement the current consistent function, just like today. But if an\n> opclass wants to support SEARCHARRAY, it could optionally also provide\n> an \"consistent_array\" function.\n> \n\nHmm, we probably need to do something like this anyway. Because even\nwith the scratch space in BrinDesc, we still need to track whether the\nopclass can process arrays or not. And if it can't we need to emulate it\nin brin.c.\n\n\n>> Yes, my previous message was mostly about backwards compatibility, and\n>> this may seem a bit like an argument against it. But that message was\n>> more a question \"If we do this, is it actually backwards compatible the\n>> way we want/need?\")\n>>\n>> Anyway, I think the BrinDesc scratch space is a neat idea, I'll try\n>> doing it that way and report back in a couple days.\n> \n> Cool. In 0005-Support-SK_SEARCHARRAY-in-BRIN-bloom-20230702.patch, you\n> used the preprocess function to pre-calculate the scankey's hash, even\n> for scalars. You could use the scratch space in BrinDesc for that,\n> before doing anything with SEARCHARRAYs.\n> \n\nYeah, that's a good idea.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 9 Jul 2023 23:44:57 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "On 7/9/23 23:44, Tomas Vondra wrote:\n> ...\n>>> Yes, my previous message was mostly about backwards compatibility, and\n>>> this may seem a bit like an argument against it. But that message was\n>>> more a question \"If we do this, is it actually backwards compatible the\n>>> way we want/need?\")\n>>>\n>>> Anyway, I think the BrinDesc scratch space is a neat idea, I'll try\n>>> doing it that way and report back in a couple days.\n>>\n>> Cool. In 0005-Support-SK_SEARCHARRAY-in-BRIN-bloom-20230702.patch, you\n>> used the preprocess function to pre-calculate the scankey's hash, even\n>> for scalars. You could use the scratch space in BrinDesc for that,\n>> before doing anything with SEARCHARRAYs.\n>>\n> \n> Yeah, that's a good idea.\n> \n\nI started looking at this (the scratch space in BrinDesc), and it's not\nas straightforward. The trouble is BrinDesc is \"per attribute\" but the\nscratch space is \"per scankey\" (because we'd like to sort values from\nthe scankey array).\n\nWith the \"new\" consistent functions (that get all scan keys at once)\nthis probably is not an issue, because we know which scan key we're\nprocessing and so we can map it to the scratch space. But with the old\nconsistent function that's not the case. Maybe we should support this\nonly with the \"new\" consistent function variant?\n\nThis would however conflict with the idea to have a separate consistent\nfunction for arrays, which \"splits\" the scankeys into multiple groups\nagain. There could be multiple SAOP scan keys, and then what?\n\nI wonder if the scratch space should be in the ScanKey instead?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 14 Jul 2023 16:47:33 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "On Fri, 14 Jul 2023 at 20:17, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 7/9/23 23:44, Tomas Vondra wrote:\n> > ...\n> >>> Yes, my previous message was mostly about backwards compatibility, and\n> >>> this may seem a bit like an argument against it. But that message was\n> >>> more a question \"If we do this, is it actually backwards compatible the\n> >>> way we want/need?\")\n> >>>\n> >>> Anyway, I think the BrinDesc scratch space is a neat idea, I'll try\n> >>> doing it that way and report back in a couple days.\n> >>\n> >> Cool. In 0005-Support-SK_SEARCHARRAY-in-BRIN-bloom-20230702.patch, you\n> >> used the preprocess function to pre-calculate the scankey's hash, even\n> >> for scalars. You could use the scratch space in BrinDesc for that,\n> >> before doing anything with SEARCHARRAYs.\n> >>\n> >\n> > Yeah, that's a good idea.\n> >\n>\n> I started looking at this (the scratch space in BrinDesc), and it's not\n> as straightforward. The trouble is BrinDesc is \"per attribute\" but the\n> scratch space is \"per scankey\" (because we'd like to sort values from\n> the scankey array).\n>\n> With the \"new\" consistent functions (that get all scan keys at once)\n> this probably is not an issue, because we know which scan key we're\n> processing and so we can map it to the scratch space. But with the old\n> consistent function that's not the case. Maybe we should support this\n> only with the \"new\" consistent function variant?\n>\n> This would however conflict with the idea to have a separate consistent\n> function for arrays, which \"splits\" the scankeys into multiple groups\n> again. There could be multiple SAOP scan keys, and then what?\n>\n> I wonder if the scratch space should be in the ScanKey instead?\n\nAre we planning to post an updated patch for this? 
If the interest has\ngone down and if there are no plans to handle this, I'm thinking of\nreturning this commitfest entry in this commitfest; it can be reopened\nwhen there is more interest.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 14 Jan 2024 16:48:46 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "On 1/14/24 12:18, vignesh C wrote:\n> On Fri, 14 Jul 2023 at 20:17, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 7/9/23 23:44, Tomas Vondra wrote:\n>>> ...\n>>>>> Yes, my previous message was mostly about backwards compatibility, and\n>>>>> this may seem a bit like an argument against it. But that message was\n>>>>> more a question \"If we do this, is it actually backwards compatible the\n>>>>> way we want/need?\")\n>>>>>\n>>>>> Anyway, I think the BrinDesc scratch space is a neat idea, I'll try\n>>>>> doing it that way and report back in a couple days.\n>>>>\n>>>> Cool. In 0005-Support-SK_SEARCHARRAY-in-BRIN-bloom-20230702.patch, you\n>>>> used the preprocess function to pre-calculate the scankey's hash, even\n>>>> for scalars. You could use the scratch space in BrinDesc for that,\n>>>> before doing anything with SEARCHARRAYs.\n>>>>\n>>>\n>>> Yeah, that's a good idea.\n>>>\n>>\n>> I started looking at this (the scratch space in BrinDesc), and it's not\n>> as straightforward. The trouble is BrinDesc is \"per attribute\" but the\n>> scratch space is \"per scankey\" (because we'd like to sort values from\n>> the scankey array).\n>>\n>> With the \"new\" consistent functions (that get all scan keys at once)\n>> this probably is not an issue, because we know which scan key we're\n>> processing and so we can map it to the scratch space. But with the old\n>> consistent function that's not the case. Maybe we should support this\n>> only with the \"new\" consistent function variant?\n>>\n>> This would however conflict with the idea to have a separate consistent\n>> function for arrays, which \"splits\" the scankeys into multiple groups\n>> again. There could be multiple SAOP scan keys, and then what?\n>>\n>> I wonder if the scratch space should be in the ScanKey instead?\n> \n> Are we planning to post an updated patch for this? 
If the interest has\n> gone down and if there are no plans to handle this I'm thinking of\n> returning this commitfest entry in this commitfest and can be opened\n> when there is more interest.\n> \n\nI still think the patch is a good idea and plan to get back to it, but\nprobably not in this CF. Given that the last update is from July, it's\nfair to bump it - either RWF or just move to the next CF. Up to you.\n\nAs for the patch, I wonder if Heikki has some idea what to do about the\nscratch space? I got stuck on thinking about how to do this with the two\ntypes of consistent functions we support/allow.\n\nTo articulate the issue more clearly - the scratch space is \"per index\"\nbut we need scratch space \"per index key\". That's fine - we can simply\nhave pointers to multiple scratch spaces, I think.\n\nBut we have two ways to do consistent functions - the \"old\" gets scan\nkeys one attribute at a time, \"new\" gets all at once. For the \"new\" it's\nnot a problem, it's simple to identify the right scratch space. But for\nthe \"old\" one, how would that happen? The consistent function has no\nidea which index key it's operating on, and how to identify the correct\nscratch space.\n\nI can think of two ways to deal with this:\n\n1) Only allow SK_SEARCHARRAY for indexes supporting new-style consistent\nfunctions (but I'm not sure how, considering amsearcharray is set way\nbefore we know what the opclass does, or whether it implements the old\nor new consistent function).\n\n2) Allow SK_SEARCHARRAY even with old consistent function, but do some\ndance in bringetbitmap() so as to set the scratch space accordingly before\nthe call.\n\nNow that I read Heikki's messages again, I see he suggested assigning a\nnew procnum to a consistent function supporting SK_SEARCHARRAY, which\nseems to be very close to (1). Except that we'd now have 3 ways to\ndefine a consistent function, and that sounds a bit ... too much?\n\nAnyway, thinking about (1), I'm still not sure what to do about existing\nopclasses - it'd be nice to have some backwards compatible solution,\nwithout breaking everything and forcing everyone to implement all the\nnew stuff. Which is kinda why we already have two ways to do consistent\nfunctions. Presumably we'd have to implement some \"default\" handling by\ntranslating the SK_SEARCHARRAY key into simple equality keys ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Jan 2024 00:15:42 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
},
{
"msg_contents": "On Mon, 15 Jan 2024 at 04:45, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 1/14/24 12:18, vignesh C wrote:\n> > On Fri, 14 Jul 2023 at 20:17, Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> On 7/9/23 23:44, Tomas Vondra wrote:\n> >>> ...\n> >>>>> Yes, my previous message was mostly about backwards compatibility, and\n> >>>>> this may seem a bit like an argument against it. But that message was\n> >>>>> more a question \"If we do this, is it actually backwards compatible the\n> >>>>> way we want/need?\")\n> >>>>>\n> >>>>> Anyway, I think the BrinDesc scratch space is a neat idea, I'll try\n> >>>>> doing it that way and report back in a couple days.\n> >>>>\n> >>>> Cool. In 0005-Support-SK_SEARCHARRAY-in-BRIN-bloom-20230702.patch, you\n> >>>> used the preprocess function to pre-calculate the scankey's hash, even\n> >>>> for scalars. You could use the scratch space in BrinDesc for that,\n> >>>> before doing anything with SEARCHARRAYs.\n> >>>>\n> >>>\n> >>> Yeah, that's a good idea.\n> >>>\n> >>\n> >> I started looking at this (the scratch space in BrinDesc), and it's not\n> >> as straightforward. The trouble is BrinDesc is \"per attribute\" but the\n> >> scratch space is \"per scankey\" (because we'd like to sort values from\n> >> the scankey array).\n> >>\n> >> With the \"new\" consistent functions (that get all scan keys at once)\n> >> this probably is not an issue, because we know which scan key we're\n> >> processing and so we can map it to the scratch space. But with the old\n> >> consistent function that's not the case. Maybe we should support this\n> >> only with the \"new\" consistent function variant?\n> >>\n> >> This would however conflict with the idea to have a separate consistent\n> >> function for arrays, which \"splits\" the scankeys into multiple groups\n> >> again. 
There could be multiple SAOP scan keys, and then what?\n> >>\n> >> I wonder if the scratch space should be in the ScanKey instead?\n> >\n> > Are we planning to post an updated patch for this? If the interest has\n> > gone down and if there are no plans to handle this I'm thinking of\n> > returning this commitfest entry in this commitfest and can be opened\n> > when there is more interest.\n> >\n>\n> I still think the patch is a good idea and plan to get back to it, but\n> probably not in this CF. Given that the last update if from July, it's\n> fair to bump it - either RWF or just move to the next CF. Up to you.\n\nI have changed the status to RWF, feel free to update the commitfest\nafter handling the comments.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 15 Jan 2024 11:13:32 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BRIN indexes vs. SK_SEARCHARRAY (and preprocessing scan keys)"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm working on rebasing [1], my patch to make relation extension scale\nbetter.\n\nAs part of that I'd like to add tests for relation extension. To be able to\ntest the bulk write strategy path, we need to have a few backends concurrently\nload > 16MB files.\n\nIt seems pretty clear that doing that on all buildfarm machines wouldn't be\nnice / welcome. And it also seems likely that this won't be the last case\nwhere that'd be useful.\n\nSo I'd like to add a 'large' class to PG_TEST_EXTRA, that we can use in tests\nthat we only want to execute on machines with sufficient resources.\n\nMakes sense?\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20221029025420.eplyow6k7tgu6he3%40awork3.anarazel.de\n\n\n",
"msg_date": "Mon, 13 Feb 2023 10:42:02 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Adding \"large\" to PG_TEST_EXTRA"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> I'm working on rebasing [1], my patch to make relation extension scale\n> better.\n> \n> As part of that I'd like to add tests for relation extension. To be able to\n> test the bulk write strategy path, we need to have a few backends concurrently\n> load > 16MB files.\n> \n> It seems pretty clear that doing that on all buildfarm machines wouldn't be\n> nice / welcome. And it also seems likely that this won't be the last case\n> where that'd be useful.\n> \n> So I'd like to add a 'large' class to PG_TEST_EXTRA, that we can use in tests\n> that we only want to execute on machines with sufficient resources.\n> \n> Makes sense?\n\n+1 in general. Are there existing tests that we should add into that\nset that you're thinking of..? I've been working with the Kerberos\ntests and that's definitely one that seems to fit this description...\n\nThanks,\n\nStephen",
"msg_date": "Mon, 13 Feb 2023 13:45:41 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Adding \"large\" to PG_TEST_EXTRA"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> As part of that I'd like to add tests for relation extension. To be able to\n> test the bulk write strategy path, we need to have a few backends concurrently\n> load > 16MB files.\n> It seems pretty clear that doing that on all buildfarm machines wouldn't be\n> nice / welcome. And it also seems likely that this won't be the last case\n> where that'd be useful.\n> So I'd like to add a 'large' class to PG_TEST_EXTRA, that we can use in tests\n> that we only want to execute on machines with sufficient resources.\n\nMakes sense. I see that this approach would result in manual check-world\nruns not running such tests by default either, which sounds right.\n\nBikeshedding a bit ... is \"large\" the right name? It's not awful but\nI wonder if there is a better one; it seems like this class could\neventually include tests that run a long time but don't necessarily\neat disk space. \"resource-intensive\" is too long.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Feb 2023 13:54:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding \"large\" to PG_TEST_EXTRA"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-13 13:45:41 -0500, Stephen Frost wrote:\n> Are there existing tests that we should add into that set that you're\n> thinking of..? I've been working with the Kerberos tests and that's\n> definitely one that seems to fit this description...\n\nI think the kerberos tests are already opt-in, so I don't think we need to\ngate it further.\n\nMaybe the pgbench tests?\n\nI guess there's an argument to be made that we should use this for e.g.\n002_pg_upgrade.pl or 027_stream_regress.pl - but I think both of these test\npretty fundamental behaviour like WAL replay, which is unfortunately is pretty\neasy to break, so I'd be hesitant.\n\nI guess we could stop running the full regression tests in 002_pg_upgrade.pl\nif !large?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 11:06:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding \"large\" to PG_TEST_EXTRA"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-13 13:54:59 -0500, Tom Lane wrote:\n> Bikeshedding a bit ... is \"large\" the right name? It's not awful but\n> I wonder if there is a better one\n\nI did wonder about that too. But didn't come up with something more poignant.\n\n\n> it seems like this class could eventually include tests that run a long time\n> but don't necessarily eat disk space. \"resource-intensive\" is too long.\n\nI'm not sure we'd want to combine time-intensive and disk-space-intensive test\nin the same category. Availability of disk space and cpu cycles don't have to\ncorrelate that well.\n\nlotsadisk, lotsacpu? :)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 11:10:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding \"large\" to PG_TEST_EXTRA"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2023-02-13 13:45:41 -0500, Stephen Frost wrote:\n> > Are there existing tests that we should add into that set that you're\n> > thinking of..? I've been working with the Kerberos tests and that's\n> > definitely one that seems to fit this description...\n> \n> I think the kerberos tests are already opt-in, so I don't think we need to\n> gate it further.\n\nI'd like to lump them in with a bunch of other tests though, to give it\nmore chance to run.. My issue currently is that they're *too* gated.\n\n> Maybe the pgbench tests?\n\nSure.\n\n> I guess there's an argument to be made that we should use this for e.g.\n> 002_pg_upgrade.pl or 027_stream_regress.pl - but I think both of these test\n> pretty fundamental behaviour like WAL replay, which is unfortunately is pretty\n> easy to break, so I'd be hesitant.\n\nHm. If you aren't playing with that part of the code though, maybe it'd\nbe nice to not run them. The pg_dump tests might also make sense to\nsegregate out as they can add up to be a lot, and there's more that we\ncould and probably should be doing there.\n\n> I guess we could stop running the full regression tests in 002_pg_upgrade.pl\n> if !large?\n\nPerhaps... but then what are we testing?\n\nThanks,\n\nStephen",
"msg_date": "Mon, 13 Feb 2023 14:15:24 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Adding \"large\" to PG_TEST_EXTRA"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-13 14:15:24 -0500, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n> > On 2023-02-13 13:45:41 -0500, Stephen Frost wrote:\n> > > Are there existing tests that we should add into that set that you're\n> > > thinking of..? I've been working with the Kerberos tests and that's\n> > > definitely one that seems to fit this description...\n> > \n> > I think the kerberos tests are already opt-in, so I don't think we need to\n> > gate it further.\n> \n> I'd like to lump them in with a bunch of other tests though, to give it\n> more chance to run.. My issue currently is that they're *too* gated.\n\nIsn't the reason that we gate them that much that the test poses a security\nhazard on a multi-user system?\n\nI don't think we should combine opting into security hazards with opting into\nusing disk space.\n\n\nFWIW, the kerberos tests run on all CI OSs other than windows. I have\nadditional CI coverage for openbsd and netbsd in a separate branch, providing\nfurther coverage - but I'm not sure we want those additional covered OSs in\ncore PG.\n\nI think the tests for kerberos run frequently enough in practice. I don't know\nhow good the coverage they provide is, though, but that's a separate aspect to\nimprove anyway.\n\n\n> > I guess there's an argument to be made that we should use this for e.g.\n> > 002_pg_upgrade.pl or 027_stream_regress.pl - but I think both of these test\n> > pretty fundamental behaviour like WAL replay, which is unfortunately is pretty\n> > easy to break, so I'd be hesitant.\n> \n> Hm. If you aren't playing with that part of the code though, maybe it'd\n> be nice to not run them.\n\nIt's surprisingly easy to break it accidentally...\n\n\n> The pg_dump tests might also make sense to segregate out as they can add up\n> to be a lot, and there's more that we could and probably should be doing\n> there.\n\nIMO the main issue with the pg_dump test is their verbosity, rather than the\nruntime... 
~8.8k subtests is a lot.\n\nfind . -name 'regress_log*'|xargs -n 1 wc -l|sort -nr|head -n 5|less\n12712 ./testrun/pg_dump/002_pg_dump/log/regress_log_002_pg_dump\n5124 ./testrun/pg_rewind/002_databases/log/regress_log_002_databases\n1928 ./testrun/pg_rewind/001_basic/log/regress_log_001_basic\n1589 ./testrun/recovery/017_shm/log/regress_log_017_shm\n1077 ./testrun/pg_rewind/004_pg_xlog_symlink/log/regress_log_004_pg_xlog_symlink\n\n\n> > I guess we could stop running the full regression tests in 002_pg_upgrade.pl\n> > if !large?\n> \n> Perhaps... but then what are we testing?\n\nThere's plenty to pg_upgrade other than the pg_dump comparison aspect.\n\n\nI'm not sure it's worth spending too much energy finding tests that we can run\nless commonly than now. We've pushed back on tests using lots of resources so\nfar, so we don't really have them...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 11:34:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding \"large\" to PG_TEST_EXTRA"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-13 13:54:59 -0500, Tom Lane wrote:\n>> it seems like this class could eventually include tests that run a long time\n>> but don't necessarily eat disk space. \"resource-intensive\" is too long.\n\n> I'm not sure we'd want to combine time-intensive and disk-space-intensive test\n> in the same category. Availability of disk space and cpu cycles don't have to\n> correlate that well.\n\nYeah, I was thinking along the same lines.\n\n> lotsadisk, lotsacpu? :)\n\nbigdisk, bigcpu?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Feb 2023 14:55:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding \"large\" to PG_TEST_EXTRA"
},
{
"msg_contents": "On 2023-02-13 14:55:45 -0500, Tom Lane wrote:\n> bigdisk, bigcpu?\n\nWorks for me.\n\nI'll probably just add bigdisk as part of adding a test for bulk relation\nextensions, mentioning in a comment that we might want bigcpu if we have a\ntest for it?\n\n\n",
"msg_date": "Mon, 13 Feb 2023 12:32:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding \"large\" to PG_TEST_EXTRA"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-13 14:55:45 -0500, Tom Lane wrote:\n>> bigdisk, bigcpu?\n\n> Works for me.\n\n> I'll probably just add bigdisk as part of adding a test for bulk relation\n> extensions, mentioning in a comment that we might want bigcpu if we have a\n> test for it?\n\nNo objection here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Feb 2023 16:11:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding \"large\" to PG_TEST_EXTRA"
},
{
"msg_contents": "On 2023-02-13 Mo 14:34, Andres Freund wrote:\n> Hi,\n>\n> On 2023-02-13 14:15:24 -0500, Stephen Frost wrote:\n>> * Andres Freund (andres@anarazel.de) wrote:\n>>> On 2023-02-13 13:45:41 -0500, Stephen Frost wrote:\n>>>> Are there existing tests that we should add into that set that you're\n>>>> thinking of..? I've been working with the Kerberos tests and that's\n>>>> definitely one that seems to fit this description...\n>>> I think the kerberos tests are already opt-in, so I don't think we need to\n>>> gate it further.\n>> I'd like to lump them in with a bunch of other tests though, to give it\n>> more chance to run.. My issue currently is that they're *too* gated.\n> Isn't the reason that we gate them that much that the test poses a security\n> hazard on a multi-user system?\n\n\nThat's my understanding.\n\n\n>\n> I don't think we should combine opting into security hazards with opting into\n> using disk space.\n\n\nI agree\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-02-13 Mo 14:34, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-02-13 14:15:24 -0500, Stephen Frost wrote:\n\n\n* Andres Freund (andres@anarazel.de) wrote:\n\n\nOn 2023-02-13 13:45:41 -0500, Stephen Frost wrote:\n\n\nAre there existing tests that we should add into that set that you're\nthinking of..? I've been working with the Kerberos tests and that's\ndefinitely one that seems to fit this description...\n\n\n\nI think the kerberos tests are already opt-in, so I don't think we need to\ngate it further.\n\n\n\nI'd like to lump them in with a bunch of other tests though, to give it\nmore chance to run.. 
My issue currently is that they're *too* gated.\n\n\n\nIsn't the reason that we gate them that much that the test poses a security\nhazard on a multi-user system?\n\n\n\nThat's my understanding.\n\n\n\n\n\nI don't think we should combine opting into security hazards with opting into\nusing disk space.\n\n\n\n\nI agree\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 13 Feb 2023 16:46:32 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Adding \"large\" to PG_TEST_EXTRA"
},
{
"msg_contents": "On Tue, Feb 14, 2023 at 5:42 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> I'm working on rebasing [1], my patch to make relation extension scale\n> better.\n>\n> As part of that I'd like to add tests for relation extension. To be able to\n> test the bulk write strategy path, we need to have a few backends concurrently\n> load > 16MB files.\n>\n> It seems pretty clear that doing that on all buildfarm machines wouldn't be\n> nice / welcome. And it also seems likely that this won't be the last case\n> where that'd be useful.\n>\n> So I'd like to add a 'large' class to PG_TEST_EXTRA, that we can use in tests\n> that we only want to execute on machines with sufficient resources.\n>\n\nOh, I was been thinking about a similar topic recently, but I was\nunaware of PG_TEST_EXTRA [1]\n\nI've observed suggested test cases get rejected as being overkill, or\nbecause they would add precious seconds to the test execution. OTOH, I\nfelt such tests would still help gain some additional percentages from\nthe \"code coverage\" stats. The kind of tests I am thinking of don't\nnecessarily need a huge disk/CPU - but they just take longer to run\nthan anyone has wanted to burden the build-farm with.\n\n~\n\nSorry for the thread interruption -- but I thought this might be the\nright place to ask: What is the recommended way to deal with such\ntests intended primarily for better code coverage?\n\nI didn't see anything that looked pre-built for 'coverage'. Did I miss\nsomething, or is it a case of just invent-your-own extra tests/values\nfor your own ad-hoc requirements?\n\ne.g.\nmake check EXTRA_TESTS=extra_regress_for_coverage\nmake check-world PG_TEST_EXTRA='extra_tap_tests_for_coverage'\n\nThanks!\n\n------\n[1] https://www.postgresql.org/docs/devel/regress-run.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 14 Feb 2023 09:26:47 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding \"large\" to PG_TEST_EXTRA"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-14 09:26:47 +1100, Peter Smith wrote:\n> I've observed suggested test cases get rejected as being overkill, or\n> because they would add precious seconds to the test execution. OTOH, I\n> felt such tests would still help gain some additional percentages from\n> the \"code coverage\" stats. The kind of tests I am thinking of don't\n> necessarily need a huge disk/CPU - but they just take longer to run\n> than anyone has wanted to burden the build-farm with.\n\nI'd say it depend on the test whether it's worth adding. Code coverage for its\nown sake isn't that useful, they have to actually test something useful. And\ntests have costs beyond runtime, e.g. new tests tend to fail in some edge\ncases.\n\nE.g. just having tests hit more lines, without verifying that the behaviour is\nactually correct, only provides limited additional assurance. It's also not\nvery useful to add a very expensive test that provides only a very small\nadditional amount of coverage.\n\nIOW, even if we add more test categories, it'll still be a tradeoff.\n\n\n> Sorry for the thread interruption -- but I thought this might be the\n> right place to ask: What is the recommended way to deal with such\n> tests intended primarily for better code coverage?\n\nI don't think that exists today.\n\nDo you have an example of the kind of test you're thinking of?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 15:44:23 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding \"large\" to PG_TEST_EXTRA"
},
{
"msg_contents": "On Tue, Feb 14, 2023 at 10:44 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-02-14 09:26:47 +1100, Peter Smith wrote:\n> > I've observed suggested test cases get rejected as being overkill, or\n> > because they would add precious seconds to the test execution. OTOH, I\n> > felt such tests would still help gain some additional percentages from\n> > the \"code coverage\" stats. The kind of tests I am thinking of don't\n> > necessarily need a huge disk/CPU - but they just take longer to run\n> > than anyone has wanted to burden the build-farm with.\n>\n> I'd say it depend on the test whether it's worth adding. Code coverage for its\n> own sake isn't that useful, they have to actually test something useful. And\n> tests have costs beyond runtime, e.g. new tests tend to fail in some edge\n> cases.\n>\n> E.g. just having tests hit more lines, without verifying that the behaviour is\n> actually correct, only provides limited additional assurance. It's also not\n> very useful to add a very expensive test that provides only a very small\n> additional amount of coverage.\n>\n> IOW, even if we add more test categories, it'll still be a tradeoff.\n>\n>\n> > Sorry for the thread interruption -- but I thought this might be the\n> > right place to ask: What is the recommended way to deal with such\n> > tests intended primarily for better code coverage?\n>\n> I don't think that exists today.\n>\n> Do you have an example of the kind of test you're thinking of?\n\nNo, nothing specific in mind. But maybe like these:\n- tests for causing obscure errors that would never otherwise be\nreached without something deliberately designed to fail a certain way\n- tests for trivial user errors apparently deemed not worth bloating\nthe regression tests with -- e.g. 
many errorConflictingDefElem not\nbeing called [1].\n- timing-related or error tests where some long (multi-second) delay\nis a necessary part of the setup.\n\n------\n[1] https://coverage.postgresql.org/src/backend/commands/subscriptioncmds.c.gcov.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 14 Feb 2023 11:38:06 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding \"large\" to PG_TEST_EXTRA"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-14 11:38:06 +1100, Peter Smith wrote:\n> No, nothing specific in mind. But maybe like these:\n> - tests for causing obscure errors that would never otherwise be\n> reached without something deliberately designed to fail a certain way\n\nI think there's some cases around this that could be usefu, but also a lot\nthat wouldn't.\n\n\n> - tests for trivial user errors apparently deemed not worth bloating\n> the regression tests with -- e.g. many errorConflictingDefElem not\n> being called [1].\n\nI don't think it's worth adding a tests for all of these. The likelihood of\ncatching a problem seems quite small.\n\n\n> - timing-related or error tests where some long (multi-second) delay\n> is a necessary part of the setup.\n\nIME that's almost always a sign that the test wouldn't be stable anyway.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 16:43:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding \"large\" to PG_TEST_EXTRA"
}
] |
[
{
"msg_contents": "Hi all,\n\nDuring the gssencmode CVE discussion, we noticed that PQconnectPoll()\nhandles the error cases for TLS and GSS transport encryption slightly\ndifferently. After TLS fails, the connection handle is dead and future\ncalls to PQconnectPoll() return immediately. But after GSS encryption\nfails, the connection handle can still be used to reenter the GSS\nhandling code.\n\nThis doesn't appear to have any security implications today -- and a\nclient has to actively try to reuse a handle that's already failed --\nbut it seems undesirable. Michael (cc'd) came up with a patch, which I\nhave attached here and will register in the CF.\n\nThanks,\n--Jacob",
"msg_date": "Mon, 13 Feb 2023 10:49:17 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Align GSS and TLS error handling in PQconnectPoll()"
},
{
"msg_contents": "Patch looks good to me. Definitely an improvement over the status quo.\n\nLooking at the TLS error handling though I see these two lines:\n\n&& conn->allow_ssl_try /* redundant? */\n&& !conn->wait_ssl_try) /* redundant? */\n\nAre they actually redundant like the comment suggests? If so, we\nshould probably remove them (in another patch). If not (or if we don't\nknow), should we have these same checks for GSS?\n\n\n",
"msg_date": "Thu, 16 Feb 2023 12:31:46 +0100",
"msg_from": "Jelte Fennema <me@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Align GSS and TLS error handling in PQconnectPoll()"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 3:31 AM Jelte Fennema <me@jeltef.nl> wrote:\n>\n> Patch looks good to me. Definitely an improvement over the status quo.\n\nThanks for the review!\n\n> Looking at the TLS error handling though I see these two lines:\n>\n> && conn->allow_ssl_try /* redundant? */\n> && !conn->wait_ssl_try) /* redundant? */\n>\n> Are they actually redundant like the comment suggests? If so, we\n> should probably remove them (in another patch). If not (or if we don't\n> know), should we have these same checks for GSS?\n\nIt's a fair point. GSS doesn't have an \"allow\" encryption mode, so\nthey can't be the exact same checks. And we're already not checking\nthe probably-redundant information, so I'd vote against speculatively\nadding it back. (try_gss is already based on gssencmode, which we're\nusing here. So I think rechecking try_gss would only help if we wanted\nto clear it manually while in the middle of processing a GSS exchange.\n From a quick inspection, I don't think that's happening today -- and\nI'm not really sure that it could be useful in the future, because I'd\nthink prefer-mode is supposed to guarantee a retry on failure.)\n\nI suspect this is a much deeper rabbit hole; I think it's work that\nneeds to be done, but I can't sign myself up for it at the moment. The\ncomplexity of this function is off the charts (for instance, why do we\nrecheck conn->try_gss above, if the only apparent way to get to\nCONNECTION_GSS_STARTUP is by having try_gss = true to begin with? is\nthere a goto/retry path I'm missing?). I think it either needs heavy\nassistance from a committer who already has intimate knowledge of this\nstate machine and all of its possible branches, or from a static\nanalysis tool that can help with a step-by-step simplification.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Thu, 16 Feb 2023 09:59:54 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Align GSS and TLS error handling in PQconnectPoll()"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 09:59:54AM -0800, Jacob Champion wrote:\n> On Thu, Feb 16, 2023 at 3:31 AM Jelte Fennema <me@jeltef.nl> wrote:\n>> Patch looks good to me. Definitely an improvement over the status quo.\n> \n> Thanks for the review!\n\nI was looking at that a second time, and with fresh eyes I can see\nthat we would miss to mark conn->status with CONNECTION_BAD when using\ngssencmode=require when the polling fails in pqsecure_open_gss(),\nwhich is just wrong IMO. This code has been introduced by b0b39f7,\nthat has added support for GSS encryption. I am adding Stephen Frost\nin CC to see if he has any comments about all this part of the logic\nwith gssencmode.\n\n> I suspect this is a much deeper rabbit hole; I think it's work that\n> needs to be done, but I can't sign myself up for it at the moment. The\n> complexity of this function is off the charts (for instance, why do we\n> recheck conn->try_gss above, if the only apparent way to get to\n> CONNECTION_GSS_STARTUP is by having try_gss = true to begin with? is\n> there a goto/retry path I'm missing?). I think it either needs heavy\n> assistance from a committer who already has intimate knowledge of this\n> state machine and all of its possible branches, or from a static\n> analysis tool that can help with a step-by-step simplification.\n\nThe first one of these is from 57c0879, the second from bcd713a, which\nI assume is a copy-paste of the first one. I agree that\nPQconnectPoll() has grown beyond the point of making it easy to\nmaintain. I am wondering which approach we could take when it comes\nto simplify something like that. Attempting to reduce the number of\nflags stored in PGconn would be one. The second may be to split the\ninternal logic into more functions, for each state we are going\nthrough? The first may lead to an even cleaner logic for the second\npoint.\n--\nMichael",
"msg_date": "Fri, 17 Feb 2023 15:59:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Align GSS and TLS error handling in PQconnectPoll()"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 10:59 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I am adding Stephen Frost\n> in CC to see if he has any comments about all this part of the logic\n> with gssencmode.\n\nSounds good.\n\n> I agree that\n> PQconnectPoll() has grown beyond the point of making it easy to\n> maintain. I am wondering which approach we could take when it comes\n> to simplify something like that. Attempting to reduce the number of\n> flags stored in PGconn would be one. The second may be to split the\n> internal logic into more functions, for each state we are going\n> through? The first may lead to an even cleaner logic for the second\n> point.\n\nYeah, a mixture of both might be helpful -- the first to reduce the\ninputs to the state machine; the second to reduce interdependencies\nbetween cases, the distance of the potential goto jumps, and the\nnumber of state machine outputs. (When cases are heavily dependent on\neach other, probably best to handle them in the same function?)\nLuckily it looks like the current machine is usually linear.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 17 Feb 2023 09:01:43 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Align GSS and TLS error handling in PQconnectPoll()"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 09:01:43AM -0800, Jacob Champion wrote:\n> On Thu, Feb 16, 2023 at 10:59 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> I am adding Stephen Frost\n>> in CC to see if he has any comments about all this part of the logic\n>> with gssencmode.\n> \n> Sounds good.\n\nHearing nothing on this part, perhaps we should just move on and\nadjust the behavior on HEAD? Thats seems like one step in the good\ndirection. If this brews right, we could always discuss a backpatch\nat some point, if necessary.\n\nThoughts from others?\n--\nMichael",
"msg_date": "Thu, 9 Mar 2023 10:27:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Align GSS and TLS error handling in PQconnectPoll()"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Fri, Feb 17, 2023 at 09:01:43AM -0800, Jacob Champion wrote:\n> > On Thu, Feb 16, 2023 at 10:59 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> I am adding Stephen Frost\n> >> in CC to see if he has any comments about all this part of the logic\n> >> with gssencmode.\n> > \n> > Sounds good.\n> \n> Hearing nothing on this part, perhaps we should just move on and\n> adjust the behavior on HEAD? Thats seems like one step in the good\n> direction. If this brews right, we could always discuss a backpatch\n> at some point, if necessary.\n> \n> Thoughts from others?\n\nI agree with matching how SSL is handled here and in a review of the\npatch proposed didn't see any issues with it. Seems like it's probably\nsomething that should also be back-patched and it doesn't look terribly\nrisky to do so, is there a specific risk that you see?\n\nThanks,\n\nStephen",
"msg_date": "Thu, 9 Mar 2023 09:51:09 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Align GSS and TLS error handling in PQconnectPoll()"
},
{
"msg_contents": "On Thu, Mar 09, 2023 at 09:51:09AM -0500, Stephen Frost wrote:\n> I agree with matching how SSL is handled here and in a review of the\n> patch proposed didn't see any issues with it. Seems like it's probably\n> something that should also be back-patched and it doesn't look terribly\n> risky to do so, is there a specific risk that you see?\n\nNothing specific per se, just my usual\nbe-careful-with-slight-behavior-changes-with-libpq-parameters.\nPerhaps you are right and there is no actual reason to worry here.\n--\nMichael",
"msg_date": "Fri, 10 Mar 2023 10:42:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Align GSS and TLS error handling in PQconnectPoll()"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 10:42:08AM +0900, Michael Paquier wrote:\n> Perhaps you are right and there is no actual reason to worry here.\n\nI have been thinking about that for the last few days, and yes a\nbackpatch should be OK, so done now down to 12.\n--\nMichael",
"msg_date": "Mon, 13 Mar 2023 16:47:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Align GSS and TLS error handling in PQconnectPoll()"
}
] |
[
{
"msg_contents": "While starting to poke at the hashed-enum-partition-key problem\nrecently discussed [1], I realized that pg_dump's flagInhAttrs()\nfunction has a logic issue: its loop changes state that will be\ninspected in other iterations of the loop, and there's no guarantee\nabout the order in which related tables will be visited. Typically\nwe'll see parent tables before children because parents tend to have\nsmaller OIDs, but there are plenty of ways in which that might not\nbe true.\n\nAs far as I can tell, the implications of this are just cosmetic:\nwe might dump DEFAULT or GENERATED expressions that we don't really\nneed to because they match properties of the parent. Still, it's\nbuggy, and somebody might carelessly extend the logic in a way that\nintroduces more-serious bugs. PFA a proposed patch.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/1376149.1675268279%40sss.pgh.pa.us",
"msg_date": "Mon, 13 Feb 2023 16:26:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "OID ordering dependency in pg_dump"
}
] |
[
{
"msg_contents": "It looks like pg_walinspect's GetWALRecordsInfo() routine doesn't take\nsufficient care with memory management. It should avoid memory leaks\nof the kind that lead to OOMs whenever\npg_get_wal_records_info_till_end_of_wal() has to return very many\ntuples. Right now it isn't that hard to make that happen, even on a\nsystem where memory is plentiful. I wasn't expecting that, because all\nof these functions use a tuplestore.\n\nMore concretely, it looks like GetWALRecordInfo() calls\nCStringGetTextDatum/cstring_to_text in a way that accumulates way too\nmuch memory in ExprContext. This could be avoided by using a separate\nmemory context that is reset periodically, or something else along the\nsame lines.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 13 Feb 2023 15:22:02 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "pg_walinspect memory leaks"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-13 15:22:02 -0800, Peter Geoghegan wrote:\n> More concretely, it looks like GetWALRecordInfo() calls\n> CStringGetTextDatum/cstring_to_text in a way that accumulates way too\n> much memory in ExprContext.\n\nAdditionally, we leak two stringinfos for each record.\n\n\n> This could be avoided by using a separate memory context that is reset\n> periodically, or something else along the same lines.\n\nEverything other than a per-row memory context that's reset each time seems\nhard to manage in this case.\n\nSomehwat funnily, GetWALRecordsInfo() then ends up being unnecessarily\ndilligent about cleaning up O(1) memory, after not caring about O(N) memory...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 16:55:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_walinspect memory leaks"
},
{
"msg_contents": "On Tue, Feb 14, 2023 at 6:25 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-02-13 15:22:02 -0800, Peter Geoghegan wrote:\n> > More concretely, it looks like GetWALRecordInfo() calls\n> > CStringGetTextDatum/cstring_to_text in a way that accumulates way too\n> > much memory in ExprContext.\n>\n> Additionally, we leak two stringinfos for each record.\n>\n>\n> > This could be avoided by using a separate memory context that is reset\n> > periodically, or something else along the same lines.\n>\n> Everything other than a per-row memory context that's reset each time seems\n> hard to manage in this case.\n>\n> Somehwat funnily, GetWALRecordsInfo() then ends up being unnecessarily\n> dilligent about cleaning up O(1) memory, after not caring about O(N) memory...\n\nThanks for reporting. I'll get back to you on this soon.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Feb 2023 16:07:42 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_walinspect memory leaks"
},
{
"msg_contents": "On Tue, Feb 14, 2023 at 4:07 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Feb 14, 2023 at 6:25 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-02-13 15:22:02 -0800, Peter Geoghegan wrote:\n> > > More concretely, it looks like GetWALRecordInfo() calls\n> > > CStringGetTextDatum/cstring_to_text in a way that accumulates way too\n> > > much memory in ExprContext.\n> >\n> > Additionally, we leak two stringinfos for each record.\n> >\n> >\n> > > This could be avoided by using a separate memory context that is reset\n> > > periodically, or something else along the same lines.\n> >\n> > Everything other than a per-row memory context that's reset each time seems\n> > hard to manage in this case.\n> >\n> > Somehwat funnily, GetWALRecordsInfo() then ends up being unnecessarily\n> > dilligent about cleaning up O(1) memory, after not caring about O(N) memory...\n>\n> Thanks for reporting. I'll get back to you on this soon.\n\nThe memory usage goes up with many WAL records in GetWALRecordsInfo().\nThe affected functions are pg_get_wal_records_info() and\npg_get_wal_records_info_till_end_of_wal(). I think the best way to fix\nthis is to use a temporary memory context (like the jsonfuncs.c),\nreset it after every tuple is put into the tuple store. This fix keeps\nthe memory under limits. I'm attaching the patches here. 
For HEAD, I'd\nwant to be a bit defensive and use the temporary memory context for\npg_get_wal_fpi_info() too.\n\nAnd, the fix also needs to be back-patched to PG15.\n\n[1]\nHEAD:\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\nCOMMAND\n1105979 ubuntu 20 0 28.5g 28.4g 150492 R 80.7 93.0 1:47.12\npostgres\n\nPATCHED:\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\nCOMMAND\n 13149 ubuntu 20 0 173244 156872 150688 R 79.0 0.5 1:25.09\npostgres\n\npostgres=# select count(*) from\npg_get_wal_records_info_till_end_of_wal('0/1000000');\n count\n----------\n 35285649\n(1 row)\n\npostgres=# select pg_backend_pid();\n pg_backend_pid\n----------------\n 13149\n(1 row)\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 16 Feb 2023 18:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_walinspect memory leaks"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 06:00:00PM +0530, Bharath Rupireddy wrote:\n> The memory usage goes up with many WAL records in GetWALRecordsInfo().\n> The affected functions are pg_get_wal_records_info() and\n> pg_get_wal_records_info_till_end_of_wal(). I think the best way to fix\n> this is to use a temporary memory context (like the jsonfuncs.c),\n> reset it after every tuple is put into the tuple store. This fix keeps\n> the memory under limits. I'm attaching the patches here.\n\nWhat you are doing here looks OK, at quick glance. That's common\nacross the code, see also dblink or file_fdw.\n\n> For HEAD, I'd\n> want to be a bit defensive and use the temporary memory context for\n> pg_get_wal_fpi_info() too.\n\nIf there is a burst of FPWs across the range you are scanning, the\nproblem could be equally worse. Sorry for missing that.\n--\nMichael",
"msg_date": "Fri, 17 Feb 2023 16:56:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_walinspect memory leaks"
},
{
"msg_contents": "On Thu, 2023-02-16 at 18:00 +0530, Bharath Rupireddy wrote:\n> I'm attaching the patches here. For HEAD, I'd\n> want to be a bit defensive and use the temporary memory context for\n> pg_get_wal_fpi_info() too.\n\nI don't see why we shouldn't backpatch that, too?\n\nAlso, it seems like we should do the same thing for the loop in\nGetXLogSummaryStats(). Maybe just for the outer loop is fine (the inner\nloop is only 16 elements); though again, there's not an obvious\ndownside to fixing that, too.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Fri, 17 Feb 2023 15:37:51 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_walinspect memory leaks"
},
{
"msg_contents": "On Sat, Feb 18, 2023 at 5:07 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Thu, 2023-02-16 at 18:00 +0530, Bharath Rupireddy wrote:\n> > I'm attaching the patches here. For HEAD, I'd\n> > want to be a bit defensive and use the temporary memory context for\n> > pg_get_wal_fpi_info() too.\n>\n> I don't see why we shouldn't backpatch that, too?\n\npg_get_wal_fpi_info() is added in v16, so backpatching isn't necessary.\n\n> Also, it seems like we should do the same thing for the loop in\n> GetXLogSummaryStats(). Maybe just for the outer loop is fine (the inner\n> loop is only 16 elements); though again, there's not an obvious\n> downside to fixing that, too.\n\nFirstly, WAL record traversing loop in GetWalStats() really doesn't\nleak memory, because it just increments some counters and doesn't\npalloc any memory. Similarly, the loops in GetXLogSummaryStats() too\ndon't palloc any memory, so no memory leak. I've seen no memory growth\nduring execution of pg_get_wal_stats_till_end_of_wal() for 35million\nWAL records, see [1] PID 543967 (during the execution of the stats\nfunction, the memory usage remained constant). Therefore, I feel that\nthe fix isn't required for GetWalStats().\n\n[1]\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\nCOMMAND\n 543967 ubuntu 20 0 168668 152056 149988 R 99.7 0.5 1:33.72\npostgres\n 412271 ubuntu 20 0 1101852 252724 42904 S 1.3 0.8 2:18.36\nnode\n 412208 ubuntu 20 0 965000 112488 36012 S 0.3 0.4 0:23.46\nnode\n 477193 ubuntu 20 0 5837096 34172 9420 S 0.3 0.1 0:00.93\ncpptools-srv\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 20 Feb 2023 15:17:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_walinspect memory leaks"
},
{
"msg_contents": "On Mon, 2023-02-20 at 15:17 +0530, Bharath Rupireddy wrote:\n\n> Similarly, the loops in GetXLogSummaryStats() too\n> don't palloc any memory, so no memory leak.\n\nBreak on palloc in gdb in that loop and you'll see a palloc in\nCStringGetTextDatum(name). In general, you should expect *GetDatum() to\npalloc unless you're sure that it's pass-by-value. Even\nFloat8GetDatum() has code to account for pass-by-ref float8s.\n\nThere are also a couple calls to psprintf() in the stats_per_record\npath.\n\n> I've seen no memory growth\n> during execution of pg_get_wal_stats_till_end_of_wal() for 35million\n> WAL records, see [1] PID 543967 (during the execution of the stats\n> function, the memory usage remained constant). Therefore, I feel that\n> the fix isn't required for GetWalStats().\n\nThat is true because the loops in GetXLogSummaryStats() are based on\nconstants. It does at most RM_MAX_ID * MAX_XLINFO_TYPES calls to\nFillXLogStatsRow() regardless of the number of WAL records.\nIt's not a significant amount of memory, at least today. But, since\nwe're already using the temp context pattern, we might as well use it\nhere for clarity so that we don't have to guess about whether the\namount of memory is significant or not.\n\nCommitted to 16 with the changes to GetXLogSummaryStats() as well.\nCommitted unmodified version of your 15 backport. Thank you!\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Mon, 20 Feb 2023 11:34:03 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_walinspect memory leaks"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 11:34:03AM -0800, Jeff Davis wrote:\n> Committed to 16 with the changes to GetXLogSummaryStats() as well.\n> Committed unmodified version of your 15 backport. Thank you!\n\nThanks for taking care of the FPI code path, Jeff!\n--\nMichael",
"msg_date": "Tue, 21 Feb 2023 07:54:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_walinspect memory leaks"
}
] |
[
{
"msg_contents": "Hello,\n\nThis is closely related to the prior conversation at [1]. There are a\ncouple places in CONNECTION_AWAITING_RESPONSE where libpq will read a\nhuge number of bytes from a server that we really should have hung up on.\n\nThe attached patch adds a length check for the v2 error compatibility\ncase, and updates the v3 error handling to jump to error_return rather\nthan asking for more data. The existing error_return paths have been\nupdated for consistency.\n\nThanks,\n--Jacob\n\n[1]\nhttps://www.postgresql.org/message-id/a5c5783d-73f3-acbc-997f-1649a7406029%40timescale.com",
"msg_date": "Mon, 13 Feb 2023 15:22:11 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix unbounded authentication exchanges during PQconnectPoll()"
},
{
"msg_contents": "On 14/02/2023 01:22, Jacob Champion wrote:\n> Hello,\n> \n> This is closely related to the prior conversation at [1]. There are a\n> couple places in CONNECTION_AWAITING_RESPONSE where libpq will read a\n> huge number of bytes from a server that we really should have hung up on.\n> \n> The attached patch adds a length check for the v2 error compatibility\n> case, and updates the v3 error handling to jump to error_return rather\n> than asking for more data. The existing error_return paths have been\n> updated for consistency.\n\nLooks mostly OK to me. Just a few nits on the error message style:\n\nThis patch adds the following error messages:\n\n\"server sent overlong v2 error message\"\n\"server sent truncated error message\"\n\"server sent truncated protocol negotiation message\"\n\"server sent truncated authentication request\"\n\nExisting messages that are most similar to this:\n\n\"received invalid response to SSL negotiation: %c\"\n\"received unencrypted data after SSL response\"\n\"received invalid response to GSSAPI negotiation: %c\"\n\"received unencrypted data after GSSAPI encryption response\"\n\"expected authentication request from server, but received %c\"\n\"unexpected message from server during startup\"\n\nThe existing style emphasizes receiving the message, rather than what \nthe server sent. In that style, I'd suggest:\n\n\"received invalid error message\"\n\"received invalid protocol negotiation message\"\n\"received invalid authentication request\"\n\nI don't think the \"overlong\" or \"truncated\" bit is helpful. For example, \nif the pre-v3.0 error message seems to be \"overlong\", it's not clear \nthat's really what happened. More likely, it's just garbage. Similarly, \nthe \"truncated\" cases mean that we didn't receive a null-terminator when \nwe expected one. It might be because the message was truncated, i.e. the \nserver sent it with a too-short message length. 
But just as likely, it \nforgot to send the null-terminator, or it got confused in some other \nway. So I'd go with just \"invalid\".\n\nFor similar reasons, I don't think we need to distinguish between the V3 \nand pre-V3 errors in the error message. If it's garbage, we probably \ndidn't guess correctly which one it was.\n\nIt's useful to have a unique error message for every different error, so \nthat if you see that error, you can point to the exact place in the code \nwhere it was generated. If we care about that, we could add some detail \nto the messages, like \"received invalid error message; null-terminator \nnot found before end-of-message\". I don't think that's necessary, \nthough, and we've re-used the \"expected authentication request from \nserver, but received %c\" for two different checks already.\n\n> @@ -3370,6 +3389,7 @@ keep_going: /* We will come back to here until there is\n> /* Get the type of request. */\n> if (pqGetInt((int *) &areq, 4, conn))\n> {\n> + libpq_append_conn_error(conn, \"server sent truncated authentication request\");\n> goto error_return;\n> }\n> msgLength -= 4;\n\nThis is unreachable, because we already checked the length. Better safe \nthan sorry I guess, but let's avoid the translation overhead of this at \nleast.\n\nThis isn't from your patch, but a pre-existing message in the vicinity \nthat caught my eye:\n\n> \t\t\t\tif ((beresp == 'R' || beresp == 'v') && (msgLength < 8 || msgLength > 2000))\n> \t\t\t\t{\n> \t\t\t\t\tlibpq_append_conn_error(conn, \"expected authentication request from server, but received %c\",\n> \t\t\t\t\t\t\t\t\t beresp);\n> \t\t\t\t\tgoto error_return;\n> \t\t\t\t}\n\nIf we receive a 'R' or 'v' message that's too long or too short, the \nmessage is confusing because the 'beresp' that it prints is actually \nexpected, but the length is unexpected.\n\n(Wow, that was a long message for such a simple patch. I may have fallen \ninto the trap of bikeshedding, sorry :-) )\n\n- Heikki\n\n\n",
"msg_date": "Tue, 21 Feb 2023 22:35:55 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix unbounded authentication exchanges during\n PQconnectPoll()"
},
{
"msg_contents": "On Tue, Feb 21, 2023 at 12:35 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I don't think the \"overlong\" or \"truncated\" bit is helpful. For example,\n> if the pre-v3.0 error message seems to be \"overlong\", it's not clear\n> that's really what happened. More likely, it's just garbage.\n\nI think this is maybe a distinction without a difference, at least at\nthe protocol level -- in the event of a missed terminator, any message\ncould be garbage independently of whether it's too long. But I also\ndon't mind the \"invalid\" wording you've proposed, so done that way in\nv2. (You're probably going to break out Wireshark for this either\nway.)\n\n> It's useful to have a unique error message for every different error, so\n> that if you see that error, you can point to the exact place in the code\n> where it was generated. If we care about that, we could add some detail\n> to the messages, like \"received invalid error message; null-terminator\n> not found before end-of-message\". I don't think that's necessary,\n> though, and we've re-used the \"expected authentication request from\n> server, but received %c\" for two different checks already.\n\n(Note that I've reworded the duplicate message in patch v2, if that\nchanges the calculus.)\n\n> > @@ -3370,6 +3389,7 @@ keep_going: /* We will come back to here until there is\n> > /* Get the type of request. */\n> > if (pqGetInt((int *) &areq, 4, conn))\n> > {\n> > + libpq_append_conn_error(conn, \"server sent truncated authentication request\");\n> > goto error_return;\n> > }\n> > msgLength -= 4;\n>\n> This is unreachable, because we already checked the length. 
Better safe\n> than sorry I guess, but let's avoid the translation overhead of this at\n> least.\n\nShould we just Assert() instead of an error message?\n\n> This isn't from your patch, but a pre-existing message in the vicinity\n> that caught my eye:\n>\n> > if ((beresp == 'R' || beresp == 'v') && (msgLength < 8 || msgLength > 2000))\n> > {\n> > libpq_append_conn_error(conn, \"expected authentication request from server, but received %c\",\n> > beresp);\n> > goto error_return;\n> > }\n>\n> If we receive a 'R' or 'v' message that's too long or too short, the\n> message is confusing because the 'beresp' that it prints is actually\n> expected, but the length is unexpected.\n\nUpdated. I think there's room for additional improvement here, since\nas of the protocol negotiation improvements, we don't just expect an\nauthentication request anymore.\n\n> (Wow, that was a long message for such a simple patch. I may have fallen\n> into the trap of bikeshedding, sorry :-) )\n\nNo worries :D This code is overdue for a tuneup, I think, and message\ntweaks are cheap.\n\nThanks!\n--Jacob",
"msg_date": "Wed, 22 Feb 2023 10:49:47 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix unbounded authentication exchanges during\n PQconnectPoll()"
},
{
"msg_contents": "On 22/02/2023 20:49, Jacob Champion wrote:\n> On Tue, Feb 21, 2023 at 12:35 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>> @@ -3370,6 +3389,7 @@ keep_going: /* We will come back to here until there is\n>>> /* Get the type of request. */\n>>> if (pqGetInt((int *) &areq, 4, conn))\n>>> {\n>>> + libpq_append_conn_error(conn, \"server sent truncated authentication request\");\n>>> goto error_return;\n>>> }\n>>> msgLength -= 4;\n>>\n>> This is unreachable, because we already checked the length. Better safe\n>> than sorry I guess, but let's avoid the translation overhead of this at\n>> least.\n> \n> Should we just Assert() instead of an error message?\n\nI separated the earlier message-length checks so that you get \"invalid \ninvalid authentication request\" or \"received invalid protocol \nnegotiation message\", depending on whether it was an 'R' or 'v' message. \nWith that, \"invalid invalid authentication request\" becomes translatable \nanyway, which makes the point on translation overhead moot. I added a \ncomment to mention that it's unreachable, though.\n\nI also reformatted the comments a little more.\n\nPushed with those changes, thanks!\n\n- Heikki\n\n\n\n",
"msg_date": "Wed, 22 Feb 2023 21:43:20 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix unbounded authentication exchanges during\n PQconnectPoll()"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 11:43 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I separated the earlier message-length checks so that you get \"invalid\n> invalid authentication request\" or \"received invalid protocol\n> negotiation message\", depending on whether it was an 'R' or 'v' message.\n> With that, \"invalid invalid authentication request\" becomes translatable\n> anyway, which makes the point on translation overhead moot. I added a\n> comment to mention that it's unreachable, though.\n\nLooks good, thank you!\n\n--Jacob\n\n\n",
"msg_date": "Wed, 22 Feb 2023 13:09:25 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix unbounded authentication exchanges during\n PQconnectPoll()"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nI was asked to prototype a feature that helps to distinguish shared\nbuffer usage between index reads and heap reads. Practically it looks\nlike this:\n\n# explain (analyze,verbose,buffers) select nextval('s');\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=8) (actual time=1.159..1.160\nrows=1 loops=1)\n Output: nextval('s'::regclass)\n Buffers: shared hit=7(relation 3, index 4) read=6(relation 1, index\n4, sequence 1) dirtied=1\n Planning Time: 0.214 ms\n Execution Time: 1.238 ms\n(5 rows)\n\n\nThe change is in these parts \"(relation 3, index 4)\" and \"(relation 1,\nindex 4, sequence 1)\". Probably, it should help DBAs to better\nunderstand complex queries.\nI think cluttering output with more text is OK as long as \"verbose\" is\nrequested.\n\nBut there are some caveats:\n1. Some more increments on hot paths. We have to add this tiny toll to\nevery single buffer hit, but it will be seldom of any use.\n2. It's difficult to measure writes caused by query, and even dirties.\nThe patch adds \"evictions\" caused by query, but they have little\npractical sense too.\n\nAll in all I do not have an opinion if this feature is a good tradeoff.\nWhat do you think? Does the feature look useful? Do we want a more\npolished implementation?\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Mon, 13 Feb 2023 16:23:30 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Buffer usage detailed by RelKind in EXPLAIN ANALYZE BUFFERS"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-13 16:23:30 -0800, Andrey Borodin wrote:\n> But there are some caveats:\n> 1. Some more increments on hot paths. We have to add this tiny toll to\n> every single buffer hit, but it will be seldom of any use.\n\nAdditionally, I bet it slows down EXPLAIN (ANALYZE, BUFFERS) noticeably. It's\nalready quite expensive...\n\n\n> All in all I do not have an opinion if this feature is a good tradeoff.\n> What do you think? Does the feature look useful? Do we want a more\n> polished implementation?\n\nUnless the above issues could be avoided, I don't think so.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 16:29:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Buffer usage detailed by RelKind in EXPLAIN ANALYZE BUFFERS"
},
{
"msg_contents": "On Mon, Feb 13, 2023 at 4:29 PM Andres Freund <andres@anarazel.de> wrote:\n> > 1. Some more increments on hot paths. We have to add this tiny toll to\n> > every single buffer hit, but it will be seldom of any use.\n>\n> Additionally, I bet it slows down EXPLAIN (ANALYZE, BUFFERS) noticeably. It's\n> already quite expensive...\n>\n\nI think collection of instrumentation is done unconditionally.\nWe always do that\npgBufferUsage.shared_blks_hit++;\nwhen the buffer is in shared_buffers.\n\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Mon, 13 Feb 2023 16:36:25 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Buffer usage detailed by RelKind in EXPLAIN ANALYZE BUFFERS"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-13 16:36:25 -0800, Andrey Borodin wrote:\n> On Mon, Feb 13, 2023 at 4:29 PM Andres Freund <andres@anarazel.de> wrote:\n> > > 1. Some more increments on hot paths. We have to add this tiny toll to\n> > > every single buffer hit, but it will be seldom of any use.\n> >\n> > Additionally, I bet it slows down EXPLAIN (ANALYZE, BUFFERS) noticeably. It's\n> > already quite expensive...\n> >\n> \n> I think collection of instrumentation is done unconditionally.\n> We always do that\n> pgBufferUsage.shared_blks_hit++;\n> when the buffer is in shared_buffers.\n\nThe problem I'm talking about is the increased overhead in InstrStopNode(),\ndue to BufferUsageAccumDiff() getting more expensive.\n\nAndres\n\n\n",
"msg_date": "Mon, 13 Feb 2023 16:39:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Buffer usage detailed by RelKind in EXPLAIN ANALYZE BUFFERS"
},
{
"msg_contents": "On Mon, Feb 13, 2023 at 4:39 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> The problem I'm talking about is the increased overhead in InstrStopNode(),\n> due to BufferUsageAccumDiff() getting more expensive.\n>\n\nThanks, now I understand the problem better. According to godbolt.com\nmy patch doubles the number of instructions in this function. Unless\nwe compute only tables\\indexes\\matviews.\nAnyway, without regarding functionality of this particular patch,\nBufferUsageAccumDiff() does not seem slow to me. It's just a\nbranchless bunch of simd instructions. Performance of this function\nmight matter only when called gazillion times per second.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Mon, 13 Feb 2023 18:14:58 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Buffer usage detailed by RelKind in EXPLAIN ANALYZE BUFFERS"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-13 18:14:58 -0800, Andrey Borodin wrote:\n> On Mon, Feb 13, 2023 at 4:39 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > The problem I'm talking about is the increased overhead in InstrStopNode(),\n> > due to BufferUsageAccumDiff() getting more expensive.\n> >\n> \n> Thanks, now I understand the problem better. According to godbolt.com\n> my patch doubles the number of instructions in this function. Unless\n> we compute only tables\\indexes\\matviews.\n> Anyway, without regarding functionality of this particular patch,\n> BufferUsageAccumDiff() does not seem slow to me. It's just a\n> branchless bunch of simd instructions. Performance of this function\n> might matter only when called gazillion times per second.\n\nIt is called gazillions of times per second when you do an EXPLAIN (ANALYZE,\nBUFFERS). Every invocation of an executor node calls it.\n\nHere's a quick pgbench, showing todays code, with -O3, without assertions:\n 298.396 0 SELECT generate_series(1, 10000000) OFFSET 10000000;\n 397.400 0 EXPLAIN (ANALYZE, TIMING OFF) SELECT generate_series(1, 10000000) OFFSET 10000000;\n 717.238 0 EXPLAIN (ANALYZE, TIMING ON) SELECT generate_series(1, 10000000) OFFSET 10000000;\n 419.736 0 EXPLAIN (ANALYZE, BUFFERS, TIMING OFF) SELECT generate_series(1, 10000000) OFFSET 10000000;\n 761.575 0 EXPLAIN (ANALYZE, BUFFERS, TIMING ON) SELECT generate_series(1, 10000000) OFFSET 10000000;\n\nThe effect ends up a lot larger once you add in joins etc, because it ads\nadditional executor node that all have their instrumentation started/stopped.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 18:43:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Buffer usage detailed by RelKind in EXPLAIN ANALYZE BUFFERS"
}
] |
[
{
"msg_contents": "Here are two patches that refactor the mostly repetitive \"${object} is \nvisible\" and get_${object}_oid() functions in namespace.c. This uses \nthe functions in objectaddress.c to look up the appropriate per-catalog \nsystem caches and attribute numbers, similar to other refactoring \npatches I have posted recently.\n\nIn both cases, there are some functions that have special behaviors that \nare not easy to unify, so I left those alone for now.\n\nNotes on 0001-Refactor-is-visible-functions.patch:\n\nAmong the functions that are being unified, some check temp schemas and \nsome skip them. I suppose that this is because some (most) object types \ncannot normally be in temp schemas, but this isn't made explicit in the \ncode. I added a code comment about this, the way I understand it.\n\nThat said, you can create objects explicitly in temp schemas, so I'm not \nsure the existing code is completely correct.\n\nNotes on 0002-Refactor-common-parts-of-get_-_oid-functions.patch:\n\nHere, I only extracted the common parts of each function but left the \nactual functions alone, because they each have to produce their own \nerror message. There is a possibility to generalize this further, \nperhaps in the style of does_not_exist_skipping(), but that looked like \na separate step to me.",
"msg_date": "Tue, 14 Feb 2023 14:32:04 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "some namespace.c refactoring"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Here are two patches that refactor the mostly repetitive \"${object} is \n> visible\" and get_${object}_oid() functions in namespace.c. This uses \n> the functions in objectaddress.c to look up the appropriate per-catalog \n> system caches and attribute numbers, similar to other refactoring \n> patches I have posted recently.\n\nThis does not look like a simple refactoring patch to me. I have\nvery serious concerns first about whether it even preserves the\nexisting semantics, and second about whether there is a performance\npenalty.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Feb 2023 00:04:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: some namespace.c refactoring"
},
{
"msg_contents": "On 2023-Feb-15, Tom Lane wrote:\n\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > Here are two patches that refactor the mostly repetitive \"${object} is \n> > visible\" and get_${object}_oid() functions in namespace.c. This uses \n> > the functions in objectaddress.c to look up the appropriate per-catalog \n> > system caches and attribute numbers, similar to other refactoring \n> > patches I have posted recently.\n> \n> This does not look like a simple refactoring patch to me. I have\n> very serious concerns first about whether it even preserves the\n> existing semantics, and second about whether there is a performance\n> penalty.\n\nI suppose there are two possible questionable angles from a performance\nPOV:\n\n1. the new code uses get_object_property_data() when previously there\nwas a straight dereference after casting to the right struct type. So\nhow expensive is that? I think the answer to that is not great, because\nit does a linear array scan on ObjectProperty. Maybe we need a better\nanswer.\n\n2. other accesses to the data are done using SysCacheGetAttr instead of\nstraight struct access dereferences. I expect that most of the fields\nbeing accessed have attcacheoff set, which allows pretty fast access to\nthe field in question, so it's not *that* bad. (For cases where\nattcacheoff is not set, then the original code would also have to deform\nthe tuple.) Still, it's going to have nontrivial impact in any\nmicrobenchmarking.\n\nThat said, I think most of this code is invoked for DDL, where\nperformance is not so critical; probably just fixing\nget_object_property_data to not be so naïve would suffice.\n\nQueries are another matter. 
I can't think of a way to determine which\nof these paths are used for queries, so that we can optimize more (IOW:\njust plain not rewrite those); manually going through the callers seems\na bit laborious, but perhaps doable.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nThou shalt check the array bounds of all strings (indeed, all arrays), for\nsurely where thou typest \"foo\" someone someday shall type\n\"supercalifragilisticexpialidocious\" (5th Commandment for C programmers)\n\n\n",
"msg_date": "Wed, 15 Feb 2023 19:04:56 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: some namespace.c refactoring"
},
{
"msg_contents": "On Tue, Feb 14, 2023 at 02:32:04PM +0100, Peter Eisentraut wrote:\n> Notes on 0001-Refactor-is-visible-functions.patch:\n> \n> Among the functions that are being unified, some check temp schemas and some\n> skip them. I suppose that this is because some (most) object types cannot\n> normally be in temp schemas, but this isn't made explicit in the code. I\n> added a code comment about this, the way I understand it.\n> \n> That said, you can create objects explicitly in temp schemas, so I'm not\n> sure the existing code is completely correct.\n\n> +\t\t\t/*\n> +\t\t\t * Do not look in temp namespace for object types that don't\n> +\t\t\t * support temporary objects\n> +\t\t\t */\n> +\t\t\tif (!(classid == RelationRelationId || classid == TypeRelationId) &&\n> +\t\t\t\tnamespaceId == myTempNamespace)\n> +\t\t\t\tcontinue;\n\nI think the reason for the class-specific *IsVisible behavior is alignment\nwith the lookup rules that CVE-2007-2138 introduced (commit aa27977). \"CREATE\nFUNCTION pg_temp.f(...)\" works, but calling the resulting function requires a\nschema-qualified name regardless of search_path. Since *IsVisible functions\ndetermine whether you can reach the object without schema qualification, their\noutcomes shall reflect those CVE-2007-2138 rules.\n\n\n",
"msg_date": "Wed, 15 Feb 2023 20:36:16 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: some namespace.c refactoring"
},
{
"msg_contents": "On 15.02.23 19:04, Alvaro Herrera wrote:\n> That said, I think most of this code is invoked for DDL, where\n> performance is not so critical; probably just fixing\n> get_object_property_data to not be so naïve would suffice.\n\nOk, I'll look into that.\n\n> Queries are another matter. I can't think of a way to determine which\n> of these paths are used for queries, so that we can optimize more (IOW:\n> just plain not rewrite those); manually going through the callers seems\n> a bit laborious, but perhaps doable.\n\nThe \"is visible\" functions are only used for the likes of psql, pg_dump, \nquery tree reversing, object descriptions -- nothing that is in a normal \nquery unless you explicitly call it.\n\nThe get_*_oid() functions are used mostly for DDL to find conflicting \nobjects. The text-search related ones can be invoked via some user \nfunctions, if you specify a TS object other than the default one. I \nwill check into what the impact of that is.\n\n\n\n",
"msg_date": "Mon, 20 Feb 2023 15:03:52 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: some namespace.c refactoring"
},
{
"msg_contents": "On 15.02.23 06:04, Tom Lane wrote:\n> I have\n> very serious concerns first about whether it even preserves the\n> existing semantics, and second about whether there is a performance\n> penalty.\n\nWe can work out the performance issues, but what are your concerns about \nsemantics?\n\n\n",
"msg_date": "Mon, 20 Feb 2023 15:04:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: some namespace.c refactoring"
},
{
"msg_contents": "On 20.02.23 15:03, Peter Eisentraut wrote:\n> On 15.02.23 19:04, Alvaro Herrera wrote:\n>> That said, I think most of this code is invoked for DDL, where\n>> performance is not so critical; probably just fixing\n>> get_object_property_data to not be so naïve would suffice.\n> \n> Ok, I'll look into that.\n\nI did a variety of performance testing on this now.\n\nI wrote a C function that calls the \"is visible\" functions in a tight loop:\n\nDatum\npg_test_visible(PG_FUNCTION_ARGS)\n{\n int32 count = PG_GETARG_INT32(0);\n Oid relid = PG_GETARG_OID(1);\n Oid typid = PG_GETARG_OID(2);\n\n for (int i = 0; i < count; i++)\n {\n RelationIsVisible(relid);\n TypeIsVisible(typid);\n //ObjectIsVisible(RelationRelationId, relid);\n //ObjectIsVisible(TypeRelationId, typid);\n }\n\n PG_RETURN_VOID();\n}\n\n(It's calling two different ones to defeat the caching in \nget_object_property_data().)\n\nHere are some run times:\n\nunpatched:\n\nselect pg_test_visible(100_000_000, 'pg_class', 'int4');\nTime: 4536.747 ms (00:04.537)\n\nselect pg_test_visible(100_000_000, 'tenk1', 'widget');\nTime: 10828.802 ms (00:10.829)\n\n(Note that the \"is visible\" functions special case system catalogs.)\n\npatched:\n\nselect pg_test_visible(100_000_000, 'pg_class', 'int4');\nTime: 11409.948 ms (00:11.410)\n\nselect pg_test_visible(100_000_000, 'tenk1', 'widget');\nTime: 18649.496 ms (00:18.649)\n\nSo, it's slower, but it's not clear whether it matters in practice, \nconsidering this test.\n\nI also wondered if this is visible through a normal external function \ncall, so I tried\n\ndo $$ begin perform pg_get_statisticsobjdef(28999) from \ngenerate_series(1, 1_000_000); end $$;\n\n(where that is the OID of the first object from select * from \npg_statistic_ext; in the regression database).\n\nunpatched:\n\nTime: 6952.259 ms (00:06.952)\n\npatched (first patch only):\n\nTime: 6993.655 ms (00:06.994)\n\npatched (both patches):\n\nTime: 7114.290 ms (00:07.114)\n\nSo there is some 
visible impact, but again, the test isn't realistic.\n\nThen I tried a few ways to make get_object_property_data() faster. I \ntried building a class_id+index cache that is qsort'ed (once) and then \nbsearch'ed, that helped only minimally, not enough to make up the \ndifference. I also tried just searching the class_id+index cache \nlinearly, hoping maybe that if the cache is smaller it would be more \nefficient to access, but that actually made things (minimally) worse, \nprobably because of the indirection. So it might be hard to get much \nmore out of this. I also thought about PerfectHash, but I didn't code \nthat up yet.\n\nAnother way would be to not use get_object_property_data() at all but \nwrite a \"common\" function that we pass in all it needs hardcodedly, like\n\nbool\nRelationIsVisible(Oid relid)\n{\n return IsVisible_common(RELOID,\n Anum_pg_class_relname\n Anum_pg_class_relnamespace);\n}\n\nThis would still save a lot of duplicate code.\n\nBut again, I don't think the micro-performance really matters here.\n\n\n\n",
"msg_date": "Thu, 23 Feb 2023 12:07:58 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: some namespace.c refactoring"
},
{
"msg_contents": "On Thu, 23 Feb 2023 at 16:38, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 20.02.23 15:03, Peter Eisentraut wrote:\n> > On 15.02.23 19:04, Alvaro Herrera wrote:\n> >> That said, I think most of this code is invoked for DDL, where\n> >> performance is not so critical; probably just fixing\n> >> get_object_property_data to not be so naïve would suffice.\n> >\n> > Ok, I'll look into that.\n>\n> I did a variety of performance testing on this now.\n>\n> I wrote a C function that calls the \"is visible\" functions in a tight loop:\n>\n> Datum\n> pg_test_visible(PG_FUNCTION_ARGS)\n> {\n> int32 count = PG_GETARG_INT32(0);\n> Oid relid = PG_GETARG_OID(1);\n> Oid typid = PG_GETARG_OID(2);\n>\n> for (int i = 0; i < count; i++)\n> {\n> RelationIsVisible(relid);\n> TypeIsVisible(typid);\n> //ObjectIsVisible(RelationRelationId, relid);\n> //ObjectIsVisible(TypeRelationId, typid);\n> }\n>\n> PG_RETURN_VOID();\n> }\n>\n> (It's calling two different ones to defeat the caching in\n> get_object_property_data().)\n>\n> Here are some run times:\n>\n> unpatched:\n>\n> select pg_test_visible(100_000_000, 'pg_class', 'int4');\n> Time: 4536.747 ms (00:04.537)\n>\n> select pg_test_visible(100_000_000, 'tenk1', 'widget');\n> Time: 10828.802 ms (00:10.829)\n>\n> (Note that the \"is visible\" functions special case system catalogs.)\n>\n> patched:\n>\n> select pg_test_visible(100_000_000, 'pg_class', 'int4');\n> Time: 11409.948 ms (00:11.410)\n>\n> select pg_test_visible(100_000_000, 'tenk1', 'widget');\n> Time: 18649.496 ms (00:18.649)\n>\n> So, it's slower, but it's not clear whether it matters in practice,\n> considering this test.\n>\n> I also wondered if this is visible through a normal external function\n> call, so I tried\n>\n> do $$ begin perform pg_get_statisticsobjdef(28999) from\n> generate_series(1, 1_000_000); end $$;\n>\n> (where that is the OID of the first object from select * from\n> pg_statistic_ext; in the regression database).\n>\n> unpatched:\n>\n> Time: 6952.259 ms (00:06.952)\n>\n> patched (first patch only):\n>\n> Time: 6993.655 ms (00:06.994)\n>\n> patched (both patches):\n>\n> Time: 7114.290 ms (00:07.114)\n>\n> So there is some visible impact, but again, the test isn't realistic.\n>\n> Then I tried a few ways to make get_object_property_data() faster. I\n> tried building a class_id+index cache that is qsort'ed (once) and then\n> bsearch'ed, that helped only minimally, not enough to make up the\n> difference. I also tried just searching the class_id+index cache\n> linearly, hoping maybe that if the cache is smaller it would be more\n> efficient to access, but that actually made things (minimally) worse,\n> probably because of the indirection. So it might be hard to get much\n> more out of this. I also thought about PerfectHash, but I didn't code\n> that up yet.\n>\n> Another way would be to not use get_object_property_data() at all but\n> write a \"common\" function that we pass in all it needs hardcodedly, like\n>\n> bool\n> RelationIsVisible(Oid relid)\n> {\n> return IsVisible_common(RELOID,\n> Anum_pg_class_relname\n> Anum_pg_class_relnamespace);\n> }\n>\n> This would still save a lot of duplicate code.\n>\n> But again, I don't think the micro-performance really matters here.\n\nI'm seeing that there has been no activity in this thread for almost a\nyear now, I'm planning to close this in the current commitfest unless\nsomeone is planning to take it forward.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 21 Jan 2024 18:04:17 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: some namespace.c refactoring"
}
] |
[
{
"msg_contents": "Hello All,\n\nIm working on my postgres FDW extension, to support logical replication I\nneed to use Custom WAL resource manager. In postgres extensions have the\nflexibility to register their resource managers in RmgrTable[]. But Like\nRmgrTable[] we have another resource manager related table RmgrDescTable[],\nthere we didn't have the flexibility to register our\n1) rm_name\n2) rm_desc\n3) rm_identify\nGetRmgrDesc() are widely used in XLogDumpDisplayRecord() and\nXLogDumpDisplayStats() in pg_waldump.c. In function GetRmgrDesc() for\ncustom resource managers by the default they are assign\n 1) rm_desc = default_desc\n 2) rm_identify = default_identify\nSuggest some ways to register my own rm_desc and rm_identify in\nRmgrDescTable[]?\n\nAttaching the Custom WAL resource managers commit for the reference:\nhttps://github.com/postgres/postgres/commit/5c279a6d350205cc98f91fb8e1d3e4442a6b25d1\n\nThanks and regards\nPradeep Kumar",
"msg_date": "Tue, 14 Feb 2023 19:09:49 +0530",
"msg_from": "Pradeep Kumar <spradeepkumar29@gmail.com>",
"msg_from_op": true,
"msg_subject": "Extensible Rmgr for Table Ams"
},
{
"msg_contents": "On Tue, 2023-02-14 at 19:09 +0530, Pradeep Kumar wrote:\n\n> GetRmgrDesc() are widely used in XLogDumpDisplayRecord() and\n> XLogDumpDisplayStats() in pg_waldump.c. In function GetRmgrDesc() for\n> custom resource managers by the default they are assign \n> 1) rm_desc = default_desc\n> 2) rm_identify = default_identify\n> Suggest some ways to register my own rm_desc and rm_identify in\n> RmgrDescTable[]?\n\nThe binaries like pg_waldump, etc., don't load server extensions. It\nmakes sense to allow utilities to also parse custom WAL records\nsomehow, but right now I don't have a concrete proposal.\n\nRegards,\n\tJeff Davis\n\n\n",
"msg_date": "Tue, 14 Feb 2023 09:31:09 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Extensible Rmgr for Table Ams"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nIn 82d0a46ea32 AllocSetRealloc() was changed to allow decreasing size of\nexternal chunks and give memory back to the malloc pool. Two\nVALGRIND_MAKE_MEM_UNDEFINED() calls were not changed to work properly in the\ncase of decreasing size: they can mark memory behind the new allocated\nmemory\nUNDEFINED. If this memory was already allocated and initialized, it's\nexpected\nto be DEFINED. So it can cause false valgrind error reports. I fixed it in\n0001\npatch.\n\nAlso, it took me a while to understand what's going on there, so in 0002\npatch\nI tried to improve comments and renamed a variable. Its name \"oldsize\"\nconfused\nme. I first thought \"oldsize\" and \"size\" represent the same parameters of\nthe\nold and new chunk. But actually \"size\" is new \"chunk->requested_size\" and\n\"oldsize\" is old \"chksize\". So I believe it's better to rename \"oldsize\"\ninto\n\"oldchksize\".\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/",
"msg_date": "Tue, 14 Feb 2023 17:49:27 +0300",
"msg_from": "Karina Litskevich <litskevichkarina@gmail.com>",
"msg_from_op": true,
"msg_subject": "Possible false valgrind error reports"
},
{
"msg_contents": "Karina Litskevich <litskevichkarina@gmail.com> writes:\n> In 82d0a46ea32 AllocSetRealloc() was changed to allow decreasing size of\n> external chunks and give memory back to the malloc pool. Two\n> VALGRIND_MAKE_MEM_UNDEFINED() calls were not changed to work properly in the\n> case of decreasing size: they can mark memory behind the new allocated\n> memory\n> UNDEFINED. If this memory was already allocated and initialized, it's\n> expected\n> to be DEFINED. So it can cause false valgrind error reports. I fixed it in\n> 0001 patch.\n\nHmm, I see the concern: adjusting the Valgrind marking of bytes beyond the\nnewly-realloced block is wrong because it might tromp on memory allocated\nin another way. However, I'm not sure about the details of your patch.\n\nThe first hunk in 0001 doesn't seem quite right yet:\n\n * old allocation.\n */\n #ifdef USE_VALGRIND\n- if (oldsize > chunk->requested_size)\n+ if (size > chunk->requested_size && oldsize > chunk->requested_size)\n VALGRIND_MAKE_MEM_UNDEFINED((char *) pointer + chunk->requested_size,\n oldsize - chunk->requested_size);\n #endif\n\nIf size < oldsize, aren't we still doing the wrong thing? Seems like\nmaybe it has to be like\n\n if (size > chunk->requested_size && oldsize > chunk->requested_size)\n VALGRIND_MAKE_MEM_UNDEFINED((char *) pointer + chunk->requested_size,\n Min(size, oldsize) - chunk->requested_size);\n\n * allocation; it could have been as small as one byte. We have to be\n * conservative and just mark the entire old portion DEFINED.\n */\n- VALGRIND_MAKE_MEM_DEFINED(pointer, oldsize);\n+ if (size >= oldsize)\n+ VALGRIND_MAKE_MEM_DEFINED(pointer, oldsize);\n+ else\n+ VALGRIND_MAKE_MEM_DEFINED(pointer, size);\n #endif\n\nThis is OK, though I wonder if it'd read better as\n\n+ VALGRIND_MAKE_MEM_DEFINED(pointer, Min(size, oldsize));\n\n\nI've not thought hard about whether I like the variable renaming proposed\nin 0002. I do suggest though that those comment changes are an integral\npart of the bug fix and hence belong in 0001.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Feb 2023 15:21:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible false valgrind error reports"
},
{
"msg_contents": "Thank you, I moved comment changes to 0001 and rewrote the fix through Min().\n\n> The first hunk in 0001 doesn't seem quite right yet:\n>\n> * old allocation.\n> */\n> #ifdef USE_VALGRIND\n> - if (oldsize > chunk->requested_size)\n> + if (size > chunk->requested_size && oldsize > chunk->requested_size)\n> VALGRIND_MAKE_MEM_UNDEFINED((char *) pointer + chunk->requested_size,\n> oldsize - chunk->requested_size);\n> #endif\n>\n> If size < oldsize, aren't we still doing the wrong thing? Seems like\n> maybe it has to be like\n\nIf size > chunk->requested_size than chksize >= oldsize and so we can mark this\nmemory without worries. Region from size to chksize will be marked NOACCESS\nlater anyway:\n\n/* Ensure any padding bytes are marked NOACCESS. */\nVALGRIND_MAKE_MEM_NOACCESS((char *) pointer + size, chksize - size);\n\nI agree that it's not obvious, so I changed the first hunk like this:\n\n- if (oldsize > chunk->requested_size)\n+ if (Min(size, oldsize) > chunk->requested_size)\n VALGRIND_MAKE_MEM_UNDEFINED((char *) pointer + chunk->requested_size,\n- oldsize - chunk->requested_size);\n+ Min(size, oldsize) - chunk->requested_size);\n\nAny ideas on how to make this place easier to understand and comment above it\nconcise and clear are welcome.\n\nThere is another thing about this version. New line\n+ Min(size, oldsize) - chunk->requested_size);\nis longer than 80 symbols and I don't know what's the best way to avoid this\nwithout making it look weird.\n\nI also noticed that if RANDOMIZE_ALLOCATED_MEMORY is defined then\nrandomize_mem()\nhave already marked this memory UNDEFINED. So we only \"may need to adjust\ntrailing bytes\" if RANDOMIZE_ALLOCATED_MEMORY isn't defined. I reflected it in\nv2 of 0001 too.",
"msg_date": "Fri, 17 Feb 2023 18:58:45 +0300",
"msg_from": "Karina Litskevich <litskevichkarina@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible false valgrind error reports"
},
{
"msg_contents": "Karina Litskevich <litskevichkarina@gmail.com> writes:\n> Thank you, I moved comment changes to 0001 and rewrote the fix through Min().\n\nLooks good. I pushed it after a little more fiddling with the comments.\nThanks for the report and patch!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Feb 2023 18:50:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible false valgrind error reports"
}
] |
[
{
"msg_contents": "Hi,\n\nThe default reaction to SIGQUIT is to create core dumps. We use SIGQUIT to\nimplement immediate shutdowns. We send the signal to the entire process group.\n\nThe result of that is that we regularly produce core dumps for binaries like\nsh/cp. I regularly see this on my local system, I've seen it on CI. Recently\nThomas added logic to show core dumps happing in cfbot ([1]). Plenty unrelated\ncore dumps, but also lots in sh/cp ([2]).\n\nWe found a bunch of issues as part of [3], but I think the issue I'm\ndiscussing here is separate.\n\n\nISTM that signal_child() should downgrade SIGQUIT to SIGTERM when sending to\nthe process group. That way we'd maintain the current behaviour for postgres\nitself, but stop core-dumping archive/restore scripts (as well as other\nsubprocesses that e.g. trusted PLs might create).\n\n\nMakes sense?\n\n\nGreetings,\n\nAndres Freund\n\n\n[1] http://cfbot.cputube.org/highlights/core.html\n\n[2] A small sample:\nhttps://api.cirrus-ci.com/v1/task/5939902693507072/logs/cores.log\nhttps://api.cirrus-ci.com/v1/task/5549174150660096/logs/cores.log\nhttps://api.cirrus-ci.com/v1/task/6153817767542784/logs/cores.log\nhttps://api.cirrus-ci.com/v1/task/6567335205535744/logs/cores.log\nhttps://api.cirrus-ci.com/v1/task/4804998119292928/logs/cores.log\n\n[3] https://postgr.es/m/Y9nGDSgIm83FHcad%40paquier.xyz\n\n\n",
"msg_date": "Tue, 14 Feb 2023 12:29:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> ISTM that signal_child() should downgrade SIGQUIT to SIGTERM when sending to\n> the process group. That way we'd maintain the current behaviour for postgres\n> itself, but stop core-dumping archive/restore scripts (as well as other\n> subprocesses that e.g. trusted PLs might create).\n\nYeah, I had been thinking along the same lines. One issue\nis that that means the backend itself will get SIGQUIT and SIGTERM\nin close succession. We need to make sure that that won't cause\nproblems. It might be prudent to think about what order to send\nthe two signals in.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Feb 2023 15:38:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-14 15:38:24 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > ISTM that signal_child() should downgrade SIGQUIT to SIGTERM when sending to\n> > the process group. That way we'd maintain the current behaviour for postgres\n> > itself, but stop core-dumping archive/restore scripts (as well as other\n> > subprocesses that e.g. trusted PLs might create).\n> \n> Yeah, I had been thinking along the same lines. One issue\n> is that that means the backend itself will get SIGQUIT and SIGTERM\n> in close succession. We need to make sure that that won't cause\n> problems. It might be prudent to think about what order to send\n> the two signals in.\n\nI hope we already deal with that reasonably well - I think it's not uncommon\nfor that to happen, regardless of this change.\n\nJust naively hacking this behaviour change into the current code, would yield\nsending SIGQUIT to postgres, and then SIGTERM to the whole process\ngroup. Which seems like a reasonable order? quickdie() should _exit()\nimmediately in the signal handler, so we shouldn't get to processing the\nSIGTERM. Even if both signals are \"reacted to\" at the same time, possibly\nwith SIGTERM being processed first, the SIGQUIT handler should be executed\nlong before the next CFI().\n\n\nNot really related: I do wonder how often we end up self deadlocking in\nquickdie(), due to the ereport() not beeing reentrant. We'll \"fix\" it soon\nafter, due to postmasters SIGKILL. Perhaps we should turn on\nsend_abort_for_kill on CI?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Feb 2023 12:47:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "On Tue, Feb 14, 2023 at 12:47:12PM -0800, Andres Freund wrote:\n> Not really related: I do wonder how often we end up self deadlocking in\n> quickdie(), due to the ereport() not beeing reentrant. We'll \"fix\" it soon\n> after, due to postmasters SIGKILL. Perhaps we should turn on\n> send_abort_for_kill on CI?\n\n+1, this seems like it'd be useful in general. I'm guessing this will\nrequire a bit of work on the CI side to generate the backtrace.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Feb 2023 14:23:32 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-14 14:23:32 -0800, Nathan Bossart wrote:\n> On Tue, Feb 14, 2023 at 12:47:12PM -0800, Andres Freund wrote:\n> > Not really related: I do wonder how often we end up self deadlocking in\n> > quickdie(), due to the ereport() not beeing reentrant. We'll \"fix\" it soon\n> > after, due to postmasters SIGKILL. Perhaps we should turn on\n> > send_abort_for_kill on CI?\n> \n> +1, this seems like it'd be useful in general. I'm guessing this will\n> require a bit of work on the CI side to generate the backtrace.\n\nThey're already generated for all current platforms. It's possible that debug\ninfo for some system libraries is missing, but the most important one (like\nlibc) should be available.\n\nSince yesterday the cfbot website allows to easily find the coredumps, too:\nhttp://cfbot.cputube.org/highlights/core-7.html\n\nThere's definitely some work to be done to make the contents of the backtraces\nmore useful though. E.g. printing out the program name, the current directory\n(although that doesn't seem to be doable for all programs). For everything but\nwindows that's in src/tools/ci/cores_backtrace.sh.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Feb 2023 16:20:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "On Tue, Feb 14, 2023 at 04:20:59PM -0800, Andres Freund wrote:\n> On 2023-02-14 14:23:32 -0800, Nathan Bossart wrote:\n>> On Tue, Feb 14, 2023 at 12:47:12PM -0800, Andres Freund wrote:\n>> > Not really related: I do wonder how often we end up self deadlocking in\n>> > quickdie(), due to the ereport() not beeing reentrant. We'll \"fix\" it soon\n>> > after, due to postmasters SIGKILL. Perhaps we should turn on\n>> > send_abort_for_kill on CI?\n>> \n>> +1, this seems like it'd be useful in general. I'm guessing this will\n>> require a bit of work on the CI side to generate the backtrace.\n> \n> They're already generated for all current platforms. It's possible that debug\n> info for some system libraries is missing, but the most important one (like\n> libc) should be available.\n> \n> Since yesterday the cfbot website allows to easily find the coredumps, too:\n> http://cfbot.cputube.org/highlights/core-7.html\n\nOh, that's nifty. Any reason not to enable send_abort_for_crash, too?\n\n> There's definitely some work to be done to make the contents of the backtraces\n> more useful though. E.g. printing out the program name, the current directory\n> (although that doesn't seem to be doable for all programs). For everything but\n> windows that's in src/tools/ci/cores_backtrace.sh.\n\nGot it, thanks for the info.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 15 Feb 2023 09:57:41 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-15 09:57:41 -0800, Nathan Bossart wrote:\n> On Tue, Feb 14, 2023 at 04:20:59PM -0800, Andres Freund wrote:\n> > On 2023-02-14 14:23:32 -0800, Nathan Bossart wrote:\n> >> On Tue, Feb 14, 2023 at 12:47:12PM -0800, Andres Freund wrote:\n> >> > Not really related: I do wonder how often we end up self deadlocking in\n> >> > quickdie(), due to the ereport() not beeing reentrant. We'll \"fix\" it soon\n> >> > after, due to postmasters SIGKILL. Perhaps we should turn on\n> >> > send_abort_for_kill on CI?\n> >> \n> >> +1, this seems like it'd be useful in general. I'm guessing this will\n> >> require a bit of work on the CI side to generate the backtrace.\n> > \n> > They're already generated for all current platforms. It's possible that debug\n> > info for some system libraries is missing, but the most important one (like\n> > libc) should be available.\n> > \n> > Since yesterday the cfbot website allows to easily find the coredumps, too:\n> > http://cfbot.cputube.org/highlights/core-7.html\n> \n> Oh, that's nifty. Any reason not to enable send_abort_for_crash, too?\n\nI think it'd be too noisy. Right now you get just a core dump of the crashed\nprocess, but if we set send_abort_for_crash we'd end up with a lot of core\ndumps, making it harder to know what to look at.\n\nWe should never need the send_abort_for_kill path, so I don't think the noise\nissue applies to the same degree.\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Wed, 15 Feb 2023 10:12:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "On Wed, Feb 15, 2023 at 10:12:58AM -0800, Andres Freund wrote:\n> On 2023-02-15 09:57:41 -0800, Nathan Bossart wrote:\n>> Oh, that's nifty. Any reason not to enable send_abort_for_crash, too?\n> \n> I think it'd be too noisy. Right now you get just a core dump of the crashed\n> process, but if we set send_abort_for_crash we'd end up with a lot of core\n> dumps, making it harder to know what to look at.\n> \n> We should never need the send_abort_for_kill path, so I don't think the noise\n> issue applies to the same degree.\n\nMakes sense.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 15 Feb 2023 10:15:19 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "On Tue, Feb 14, 2023 at 12:47:12PM -0800, Andres Freund wrote:\n> Just naively hacking this behaviour change into the current code, would yield\n> sending SIGQUIT to postgres, and then SIGTERM to the whole process\n> group. Which seems like a reasonable order? quickdie() should _exit()\n> immediately in the signal handler, so we shouldn't get to processing the\n> SIGTERM. Even if both signals are \"reacted to\" at the same time, possibly\n> with SIGTERM being processed first, the SIGQUIT handler should be executed\n> long before the next CFI().\n\nI can see the sense in this argument and this order should work, still\nadding more complication for the backends does not sound that\nappealing to me.\n\nWhat would be the advantage of doing that for groups other than\n-StartupPID and -PgArchPID? These are the two groups of processes we\nneed to worry about, AFAIK.\n--\nMichael",
"msg_date": "Wed, 22 Feb 2023 15:47:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> What would be the advantage of doing that for groups other than\n> -StartupPID and -PgArchPID? These are the two groups of processes we\n> need to worry about, AFAIK.\n\nNo, we have the issue for regular backends too, since they could be\nexecuting COPY FROM PROGRAM or the like (not to mention that functions\nin plperlu, plpythonu, etc could spawn child processes).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Feb 2023 09:39:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 09:39:55AM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> What would be the advantage of doing that for groups other than\n>> -StartupPID and -PgArchPID? These are the two groups of processes we\n>> need to worry about, AFAIK.\n> \n> No, we have the issue for regular backends too, since they could be\n> executing COPY FROM PROGRAM or the like (not to mention that functions\n> in plperlu, plpythonu, etc could spawn child processes).\n\nIndeed, right. I completely forgot about these cases.\n--\nMichael",
"msg_date": "Thu, 23 Feb 2023 08:59:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "On Tue, Feb 14, 2023 at 12:47:12PM -0800, Andres Freund wrote:\n> Just naively hacking this behaviour change into the current code, would yield\n> sending SIGQUIT to postgres, and then SIGTERM to the whole process\n> group. Which seems like a reasonable order? quickdie() should _exit()\n> immediately in the signal handler, so we shouldn't get to processing the\n> SIGTERM. Even if both signals are \"reacted to\" at the same time, possibly\n> with SIGTERM being processed first, the SIGQUIT handler should be executed\n> long before the next CFI().\n\nI have been poking a bit at that, and did a change as simple as this\none in signal_child():\n #ifdef HAVE_SETSID\n+ if (signal == SIGQUIT)\n+ signal = SIGTERM;\n\nFrom what I can see, SIGTERM is actually received by the backends\nbefore SIGQUIT, and I can also see that the backends have enough room\nto process CFIs in some cases, especially short queries, even before \nreaching quickdie() and its exit(). So the window between SIGTERM and\nSIGQUIT is not as long as one would think.\n--\nMichael",
"msg_date": "Tue, 28 Feb 2023 13:45:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 5:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Feb 14, 2023 at 12:47:12PM -0800, Andres Freund wrote:\n> > Just naively hacking this behaviour change into the current code, would yield\n> > sending SIGQUIT to postgres, and then SIGTERM to the whole process\n> > group. Which seems like a reasonable order? quickdie() should _exit()\n> > immediately in the signal handler, so we shouldn't get to processing the\n> > SIGTERM. Even if both signals are \"reacted to\" at the same time, possibly\n> > with SIGTERM being processed first, the SIGQUIT handler should be executed\n> > long before the next CFI().\n>\n> I have been poking a bit at that, and did a change as simple as this\n> one in signal_child():\n> #ifdef HAVE_SETSID\n> + if (signal == SIGQUIT)\n> + signal = SIGTERM;\n>\n> From what I can see, SIGTERM is actually received by the backends\n> before SIGQUIT, and I can also see that the backends have enough room\n> to process CFIs in some cases, especially short queries, even before\n> reaching quickdie() and its exit(). So the window between SIGTERM and\n> SIGQUIT is not as long as one would think.\n\nPop quiz: in what order do signal handlers run, if SIGQUIT and SIGTERM\nare both pending when a process wakes up or unblocks? I *think* the\nanswer on all typical implementation that follow conventions going\nback to ancient Unix (but not standardised, so you can't count on\nit!*), is that pending signals are delivered in order of the bits in\nthe pending signals bitmap from lowest to highest, and SIGQUIT <\nSIGTERM (again: tradition, not standard), and then:\n\n1. If the handlers block each other via their sa_mask so that they\nare serialised (note: ours don't) then you'll see the SIGQUIT handler\nrun and then the SIGTERM handler, for example if you do kill(self,\nSIGTERM), kill(self, SIGQUIT), sigprocmask(SIG_SETMASK, &unblock_all,\nNULL).\n\n2. 
If the handlers don't block each other (our case), then their\nstack frames will be set up in that order (you might say they start in\nthat order but are immediately interrupted by the next one before they\ncan do anything), so they then run in the reverse order, SIGTERM\nfirst. I guess that is what you saw?\n\nIn theory you could straighten this out by asking what else is pending\nso that we imposed our own priority, if that were a problem, but there\nis something I don't understand: you said we could handle SIGTERM and\nthen make it all the way to CFI() (= non-signal handler code) before\nhandling a SIGQUIT that was sent first. Huh... what am I missing? I\nthought the only risk was handlers running in the opposite of send\norder because they 'overlapped', not non-handler code being allowed to\nrun in between.\n\n*The standard explicitly says that delivery order is unspecified,\nexcept for realtime signals which we aren't using.\n\n\n",
"msg_date": "Thu, 2 Mar 2023 12:29:28 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-28 13:45:41 +0900, Michael Paquier wrote:\n> On Tue, Feb 14, 2023 at 12:47:12PM -0800, Andres Freund wrote:\n> > Just naively hacking this behaviour change into the current code, would yield\n> > sending SIGQUIT to postgres, and then SIGTERM to the whole process\n> > group. Which seems like a reasonable order? quickdie() should _exit()\n> > immediately in the signal handler, so we shouldn't get to processing the\n> > SIGTERM. Even if both signals are \"reacted to\" at the same time, possibly\n> > with SIGTERM being processed first, the SIGQUIT handler should be executed\n> > long before the next CFI().\n> \n> I have been poking a bit at that, and did a change as simple as this\n> one in signal_child():\n> #ifdef HAVE_SETSID\n> + if (signal == SIGQUIT)\n> + signal = SIGTERM;\n\nFWIW, one thing that kept me from actually proposing a patch is that I thought\nit might be useful to write a test for this, but that I didn't yet have the\ncycles to look into that.\n\n\n> From what I can see, SIGTERM is actually received by the backends\n> before SIGQUIT, and I can also see that the backends have enough room\n> to process CFIs in some cases, especially short queries, even before \n> reaching quickdie() and its exit(). So the window between SIGTERM and\n> SIGQUIT is not as long as one would think.\n\nWhat do you mean with the last ssentence? Why would one think that the window\nbetween them is long? Do you mean that it's not as short?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 Mar 2023 15:34:30 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-02 12:29:28 +1300, Thomas Munro wrote:\n> In theory you could straighten this out by asking what else is pending\n> so that we imposed our own priority, if that were a problem, but there\n> is something I don't understand: you said we could handle SIGTERM and\n> then make it all the way to CFI() (= non-signal handler code) before\n> handling a SIGQUIT that was sent first. Huh... what am I missing? I\n> thought the only risk was handlers running in the opposite of send\n> order because they 'overlapped', not non-handler code being allowed to\n> run in between.\n\nI see ProcessInterrupts() being called too - but it's independent of the\nchanges we discuss here. The reason for it is the CFI() at the end of\nerrfinish().\n\nNote that ProcessInterrupts() immediately returns, due to the\nHOLD_INTERRUPTS() at the start of quickdie().\n\nFWIW, here's the strace output of a backend, enriched with a few debug\nwrite()s.\n\nepoll_wait(5, 0x55b25764fd70, 1, -1) = -1 EINTR (Interrupted system call)\n--- SIGQUIT {si_signo=SIGQUIT, si_code=SI_USER, si_pid=759211, si_uid=1000} ---\n--- SIGTERM {si_signo=SIGTERM, si_code=SI_USER, si_pid=759211, si_uid=1000} ---\nwrite(2, \"start die\\n\", 10) = 10\nkill(759218, SIGURG) = 0\nwrite(2, \"end die\\n\", 8) = 8\nrt_sigreturn({mask=[QUIT URG]}) = 0\nwrite(2, \"start quickdie\\n\", 15) = 15\nrt_sigprocmask(SIG_SETMASK, ~[ILL TRAP ABRT BUS FPE SEGV CONT SYS RTMIN RT_1], NULL, 8) = 0\nsendto(10, \"N\\0\\0\\0tSWARNING\\0VWARNING\\0C57P01\\0Mt\"..., 117, 0, NULL, 0) = 117\nwrite(2, \"ProcessInterrupts\\n\", 18) = 18\nwrite(2, \"ProcessInterrupts held off\\n\", 27) = 27\nwrite(2, \"end quickdie\\n\", 13) = 13\nexit_group(2) = ?\n+++ exited with 2 +++\n\n\nWe do way too many non-signal safe things in quickdie(). But I'm not sure what\nthe alternative is, given we probably do want to send something to the client.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 Mar 2023 16:09:38 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "On Wed, Mar 01, 2023 at 03:34:30PM -0800, Andres Freund wrote:\n> On 2023-02-28 13:45:41 +0900, Michael Paquier wrote:\n>> From what I can see, SIGTERM is actually received by the backends\n>> before SIGQUIT, and I can also see that the backends have enough room\n>> to process CFIs in some cases, especially short queries, even before \n>> reaching quickdie() and its exit(). So the window between SIGTERM and\n>> SIGQUIT is not as long as one would think.\n> \n> What do you mean with the last ssentence? Why would one think that the window\n> between them is long? Do you mean that it's not as short?\n\nThat should have been worded as \"short\". In what I looked at, both\nsignal handlers are processed in the same millisecond, still the\nbackend can have time to process a full CFI between the SIGTERM and\nSIGQUIT handlers, before the SIGQUIT handler has the time to exit().\n--\nMichael",
"msg_date": "Thu, 2 Mar 2023 09:59:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 1:09 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-03-02 12:29:28 +1300, Thomas Munro wrote:\n> > ... Huh... what am I missing? I\n> > thought the only risk was handlers running in the opposite of send\n> > order because they 'overlapped', not non-handler code being allowed to\n> > run in between.\n>\n> I see ProcessInterrupts() being called too - but it's independent of the\n> changes we discuss here. The reason for it is the CFI() at the end of\n> errfinish().\n\nAhh, right, I see.\n\n\n",
"msg_date": "Thu, 2 Mar 2023 14:20:58 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: We shouldn't signal process groups with SIGQUIT"
}
] |
[
{
"msg_contents": "It seems odd that stats_ext uses double:\n\npostgres=# SELECT attrelid::regclass, attname, atttypid::regtype, relkind FROM pg_attribute a JOIN pg_class c ON c.oid=a.attrelid WHERE attname='most_common_freqs';\n attrelid | attname | atttypid | relkind \n--------------------+-------------------+--------------------+---------\n pg_stats | most_common_freqs | real[] | v\n pg_stats_ext | most_common_freqs | double precision[] | v\n pg_stats_ext_exprs | most_common_freqs | real[] | v\n\nI'm not sure if that's deliberate ?\n\nThis patch changes extended stats to match.\n\n-- \nJustin",
"msg_date": "Tue, 14 Feb 2023 19:20:46 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg_statistic MCVs use float4 but extended stats use float8"
},
{
"msg_contents": "Hi,\n\nOn 2/15/23 02:20, Justin Pryzby wrote:\n> It seems odd that stats_ext uses double:\n> \n> postgres=# SELECT attrelid::regclass, attname, atttypid::regtype, relkind FROM pg_attribute a JOIN pg_class c ON c.oid=a.attrelid WHERE attname='most_common_freqs';\n> attrelid | attname | atttypid | relkind \n> --------------------+-------------------+--------------------+---------\n> pg_stats | most_common_freqs | real[] | v\n> pg_stats_ext | most_common_freqs | double precision[] | v\n> pg_stats_ext_exprs | most_common_freqs | real[] | v\n> \n> I'm not sure if that's deliberate ?\n> \n\nNot really, I'm not sure why I chose float8 and not float4. Likely a\ncause of muscle memory on 64-bit systems.\n\nI wonder if there are practical reasons to change this, i.e. if the\nfloat8 can have adverse effects on some systems. Yes, it makes the stats\na little bit larger, but I doubt the difference is significant enough to\nmake a difference. Perhaps on 32-bit systems it's worse, because float8\nis going to be pass-by-ref there ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 15 Feb 2023 14:22:07 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_statistic MCVs use float4 but extended stats use float8"
}
] |
[
{
"msg_contents": "This is a derived thread form [1], that discusses some subtle\nbehaviors of KeepLogSeg.\n\n1: https://www.postgresql.org/message-id/20230213194131.hgzs6ropcvhda5w3@awork3.anarazel.de\n\nAt Mon, 13 Feb 2023 11:41:31 -0800, Andres Freund <andres@anarazel.de> wrote\n> Hi,\n> \n> On 2023-02-13 15:45:49 +0900, Kyotaro Horiguchi wrote:\n> > This seems to have a thin connection to the issue, but.\n> \n> I was worried that the changes could lead us to removing WAL without\n> max_slot_wal_keep_size set.\n> \n> \n> > > It seems decidedly not great to not log at least a debug1 (but probably it\n> > > should be LOG) message when KeepLogSeg() decides to limit based on\n> > > max_slot_wal_keep_size.\n> > \n> > It's easy to do that, but that log is highly accompanied by a LOG line\n> > \"terminating process %d to release replication slot \\\"%s\\\"\". I don't\n> > mind adding it if it is a DEBUGx.\n> \n> My problem with that is that we might *NOT* see those log messages for some\n> reason, but that that's impossible to debug as-is. And even if we see them,\n> it's not that easy to figure out by how much we were over\n> max_slot_wal_keep_size, because we always report it in the context of a\n> specific slot.\n\nSince 551aa6b7b9, InvalidatePossiblyObsoleteSlot() emits the following\ndetail message in that case for both \"terminating\" and \"invalidating\"\nmessages.\n\nerrdetail(\"The slot's restart_lsn %X/%X exceeds the limit by %llu bytes.\",\n LSN_FORMAT_ARGS(restart_lsn),\n\t\t (unsigned long long) (oldestLSN - restart_lsn))\n\nWhere oldestLSN is the cutoff LSN by KeepLogSeg().\n\n> Removing WAL that's still needed is a *very* severe operation. Emitting an\n> additional line in case it happens isn't a problem.\n\nTotally agreed about the severity. The message above doesn't\nexplicitly say the source of the cutoff LSN but the only possible\nsource is max_slot_wal_keep_size. 
I think that DEBUG1 is appropriate\nfor the message from KeepLogSeg(), especially given how often we see\nit.\n\n> > > It feels wrong to subtract max_slot_wal_keep_size from recptr - that's the end\n> > > of the checkpoint record. Given that we, leaving max_slot_wal_keep_size aside,\n> > > only actually remove WAL if older than the segment that RedoRecPtr (the\n> > > logical start of the checkpoint) is in. If the checkpoint is large we'll end\n> > > up removing replication slots even though they potentially would only have\n> > > retained one additional WAL segment.\n> > \n> > I think that it is a controversial part, but that variable is defined\n> > the similar way to wal_keep_size. And I think that all max_wal_size,\n> > wal_keep_size and max_slot_wal_keep_size being defined with the same\n> > base LSN makes things simpler for users (also for developers).\n> > Regardless of checkpoint length, if slots get frequently invalidated,\n> > the setting should be considered to be too small for the system\n> > requirements.\n> \n> I think it's bad that we define wal_keep_size, max_slot_wal_keep_size that\n> way. I don't think bringing max_wal_size into this is useful, as it influences\n> different things.\n\nIn my faint memory, when wal_keep_segments was switched to\nwal_keep_size, in the first cut patch, I translated the latter to the\nformer by rounding up manner but it was rejected and ended up with the\nformula we have now.\n\nSpeaking of max_slot_wal_keep_size, I think it depends on how we\ninterpret the variable. If we see it as the minimum amount to ensure,\nthen we should round it up. But if we see it as the maximum amount\nthat can't be exceeded, then we would round it down like we do\nnow. However, I also think that the \"max\" prefix could imply something\nabout the upper limit.\n\n> > > Isn't it problematic to use ConvertToXSegs() to implement\n> > > max_slot_wal_keep_size, given that it rounds *down*? 
Particularly for a large\n> > > wal_segment_size that'd afaict lead to being much more aggressive invalidating\n> > > slots.\n> > \n> > I think max_slot_wal_keep_size is, like max_wal_size for checkpoints,\n> > a safeguard for slots not to fill-up WAL directory. Thus they both are\n> > rounded down. If you have 1GB WAL directory and set wal_segment_size\n> > to 4192MB, I don't see it a sane setup. But if segment size is smaller\n> > than one hundredth of max_wal_size, that difference won't matter for\n> > anyone. But anyway, it's a pain in the a.. that the both of them (and\n> > wal_keep_size) don't work in a immediate manner, though..\n> \n> It doesn't matter a lot for 16MB segments, but with 1GB segments it's a\n> different story.\n> \n> To me the way it's done now is a bug, one that can in extreme circumstances\n> lead to data loss.\n\nWhere do we lose data? This behavior won't cause primary to lose any\ndata. Standby is unable to continue replication, but that's just how\nit's set up. If we were to round up during the size conversion, we\nmight run out of storage capacity and end up with a PANIC. If that\nwere to happen, would it be considered a bug? Whether we round up or\nround down, converting to the segment size is necessary, and I think\nit's only natural that having too little margin in the settings could\nlead to trouble.\n\n> > > Also, why do we do something as expensive as\n> > > InvalidateObsoleteReplicationSlots() even when max_slot_wal_keep_size had no\n> > > effect?\n> > \n> > Indeed. Maybe we didn't regard that process as complex at start? I\n> > think we can compare the cutoff segno against\n> > XLogGetReplicationSlotMinimumLSN() before entering the loop over\n> > slots.\n> \n> That'd be better, but I'd probably go further, and really gate it on\n> max_slot_wal_keep_size having had an effect.\n\nI think that would be okay. 
Should we make KeepLogSeg() return whether\nthe slot has been invalidated or not?\n\n- static void\n+ static bool\n KeepLogSeg(XLogRecPtr recptr, XLogSegNo *logSegNo)\n\n\n> > Thus I think there's room for the following improvements.\n> > \n> > - Prevent KeepLogSeg from returning 0.\n> > \n> > - Add DEBUG log to KeepLogSeg emitted when max_slot_wal_keep_size affects.\n> > \n> > - Check against minimum slot LSN before actually examining through the\n> > slots in Invalidateobsoletereplicationslots.\n> > \n> > I'm not sure about the second item but the others seem back-patchable.\n> > \n> > If we need to continue further discussion, will need another\n> > thread. Anyway I'll come up with the patch for the above three items.\n> \n> Yep, probably a good idea to start another thread.\n> \n> There's also https://www.postgresql.org/message-id/20220223014855.4lsddr464i7mymk2%40alap3.anarazel.de\n> that unfortunately nobody replied to.\n\nUgg. I'll revisit it later..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 Feb 2023 11:59:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "KeepLogSeg needs some fixes on behavior"
}
] |
[
{
"msg_contents": "I came across $subject on HEAD and here is the query I'm using.\n\ncreate table t1 (a int, b int);\ncreate table t2 (a int, b int);\ncreate table t3 (a int, b int);\n\ninsert into t1 values (1, 1);\ninsert into t2 values (2, 200);\ninsert into t3 values (3, 3);\n\n# select * from t1 left join t2 on true, lateral (select * from t3 where\nt2.a = t2.b) ss;\n a | b | a | b | a | b\n---+---+---+-----+---+---\n 1 | 1 | 2 | 200 | 3 | 3\n(1 row)\n\n# explain (costs off) select * from t1 left join t2 on true, lateral\n(select * from t3 where t2.a = t2.b) ss;\n QUERY PLAN\n----------------------------------\n Nested Loop\n -> Nested Loop Left Join\n -> Seq Scan on t1\n -> Materialize\n -> Seq Scan on t2\n -> Materialize\n -> Seq Scan on t3\n(7 rows)\n\nAs we can see, the join qual 't2.a = t2.b' disappears in the plan, and\nthat results in the wrong query results.\n\nI did some dig and here is what happened. Firstly both sides of qual\n't2.a = t2.b' could be nulled by the OJ t1/t2 and they are marked so in\ntheir varnullingrels. Then we decide that this qual can form a EC, and\nthe EC's ec_relids is marked as {t2, t1/t2}. Note that t1 is not\nincluded in this ec_relids. So when it comes to building joinrel for\nt1/t2, generate_join_implied_equalities fails to generate the join qual\nfrom that EC.\n\nI'm not sure how to fix this problem yet. 
I'm considering that while\ncomposing eclass_indexes for each base rel, when we come across an\nojrelid in ec->ec_relids, can we instead mark the base rels in the OJ's\nmin_lefthand/min_righthand that they are 'mentioned' in this EC?\nSomething like the TODO says.\n\n i = -1;\n while ((i = bms_next_member(ec->ec_relids, i)) > 0)\n {\n RelOptInfo *rel = root->simple_rel_array[i];\n\n if (rel == NULL) /* must be an outer join */\n {\n Assert(bms_is_member(i, root->outer_join_rels));\n+ /*\n+ * TODO Mark the base rels in the OJ's min_xxxhand that they\n+ * are 'mentioned' in this EC.\n+ */\n continue;\n }\n\n Assert(rel->reloptkind == RELOPT_BASEREL);\n\n rel->eclass_indexes = bms_add_member(rel->eclass_indexes,\n ec_index);\n\n if (can_generate_joinclause)\n rel->has_eclass_joins = true;\n }\n\nOr maybe we can just expand ec->ec_relids to include OJ's min_xxxhand\nwhen we form a new EC?\n\nThanks\nRichard",
"msg_date": "Wed, 15 Feb 2023 11:31:44 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Wrong query results caused by loss of join quals"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> ... As we can see, the join qual 't2.a = t2.b' disappears in the plan, and\n> that results in the wrong query results.\n\nUgh.\n\n> I did some digging and here is what happened. Firstly both sides of qual\n> 't2.a = t2.b' could be nulled by the OJ t1/t2 and they are marked so in\n> their varnullingrels. Then we decide that this qual can form an EC, and\n> the EC's ec_relids is marked as {t2, t1/t2}. Note that t1 is not\n> included in this ec_relids. So when it comes to building joinrel for\n> t1/t2, generate_join_implied_equalities fails to generate the join qual\n> from that EC.\n\nHmm. My intention for this sort of case was that the nulled Vars should\nlook like \"new_members\" to generate_join_implied_equalities_normal,\nsince they are computable at the join node (in filter not join quals)\nbut not computable within either input. Then it would generate the\nnecessary quals to equate them to each other. The reason that that\ndoesn't happen is that get_common_eclass_indexes believes it can ignore\nECs that don't mention t1. The attached quick hack is enough to fix\nthe presented case, but:\n\n* I suspect the other use of get_common_eclass_indexes, in\nhave_relevant_eclass_joinclause, is broken as well.\n\n* This fix throws away a fair bit of the optimization intended by\n3373c7155, since it will result in examining some irrelevant ECs.\nI'm not sure if it's worth complicating get_common_eclass_indexes\nto try to recover that by adding knowledge about outer joins.\n\n* I'm now kind of wondering whether there are pre-existing bugs of the\nsame ilk. Maybe not, because before 2489d76c4 an EC constraint that was\ncomputable at the join but not earlier would have to have mentioned both\nsides of the join ... 
but I'm not quite sure.\n\nBTW, while looking at this I saw that generate_join_implied_equalities'\ncalculation of nominal_join_relids is wrong for child rels, because it\nfails to fold the join relid into that if appropriate. In cases similar\nto this one, that could result in generate_join_implied_equalities_broken\ndoing the same sort of wrong thing, that is rejecting quals it should\nhave enforced at the join. I don't think the use of nominal_join_relids\nadded by this patch is affected, though: a Var mentioning the outer join\nin varnullingrels would have to have some member of the RHS' base rels\nin varno, and I think a parallel statement can be made about PHVs.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 19 Feb 2023 15:56:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong query results caused by loss of join quals"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 4:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> * I suspect the other use of get_common_eclass_indexes, in\n> have_relevant_eclass_joinclause, is broken as well.\n\n\nIt seems have_relevant_joinclause is broken for the presented case. It\ndoes not get a chance to call have_relevant_eclass_joinclause, because\nflag 'has_eclass_joins' is not set for t1 due to t1 being not in the\nEC's ec_relids. As a result, have_relevant_joinclause thinks there is\nno joinclause that involves t1 and t2, which is not right.\n\n\n> * This fix throws away a fair bit of the optimization intended by\n> 3373c7155, since it will result in examining some irrelevant ECs.\n> I'm not sure if it's worth complicating get_common_eclass_indexes\n> to try to recover that by adding knowledge about outer joins.\n\n\nYeah, this is also my concern that we'd lose some optimization about\nfinding ECs.\n\n\n> * I'm now kind of wondering whether there are pre-existing bugs of the\n> same ilk. Maybe not, because before 2489d76c4 an EC constraint that was\n> computable at the join but not earlier would have to have mentioned both\n> sides of the join ... but I'm not quite sure.\n\n\nI also think there was no problem before, because if a clause was\ncomputable at the join but not earlier and only mentioned one side of\nthe join, then it was a non-degenerate outer join qual or an\nouterjoin_delayed qual, and cannot enter into an EC.\n\n\n> BTW, while looking at this I saw that generate_join_implied_equalities'\n> calculation of nominal_join_relids is wrong for child rels, because it\n> fails to fold the join relid into that if appropriate.\n\n\nI dug a little into this and it seems this is all right as-is. Among\nall the calls of generate_join_implied_equalities, it seems only\nbuild_joinrel_restrictlist would have outer join's ojrelid in param\n'join_relids'. And build_joinrel_restrictlist does not get called for\nchild rels. 
The restrictlist of a child rel is constructed from that of\nits parent rel.\n\nThanks\nRichard",
"msg_date": "Mon, 20 Feb 2023 18:04:53 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Wrong query results caused by loss of join quals"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Mon, Feb 20, 2023 at 4:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * I suspect the other use of get_common_eclass_indexes, in\n>> have_relevant_eclass_joinclause, is broken as well.\n\n> It seems have_relevant_joinclause is broken for the presented case. It\n> does not get a chance to call have_relevant_eclass_joinclause, because\n> flag 'has_eclass_joins' is not set for t1 due to t1 being not in the\n> EC's ec_relids. As a result, have_relevant_joinclause thinks there is\n> no joinclause that involves t1 and t2, which is not right.\n\nI thought about this and decided that it's not really a problem.\nhave_relevant_joinclause is just a heuristic, and I don't think we\nneed to prioritize forming a join if the only relevant clauses look\nlike this. We won't be able to use such clauses for merge or hash,\nso we're going to end up with an unconstrained nestloop, which isn't\nsomething to be eager to form. The join ordering rules will take\ncare of forcing us to make the join when necessary.\n\n>> * This fix throws away a fair bit of the optimization intended by\n>> 3373c7155, since it will result in examining some irrelevant ECs.\n\n> Yeah, this is also my concern that we'd lose some optimization about\n> finding ECs.\n\nThe only easy improvement I can see to make here is to apply the old\nrules at inner joins. 
Maybe it's worth complicating the data structures\nto be smarter at outer joins, but I rather doubt it: we could easily\nexpend more overhead than we'll save here by examining irrelevant ECs.\nIn any case, if there is a useful optimization here, it can be pursued\nlater.\n\n>> BTW, while looking at this I saw that generate_join_implied_equalities'\n>> calculation of nominal_join_relids is wrong for child rels, because it\n>> fails to fold the join relid into that if appropriate.\n\n> I dug a little into this and it seems this is all right as-is.\n\nI changed it anyway after noting that (a) passing in the ojrelid is\nneedful to be able to distinguish inner and outer joins, and\n(b) the existing comment about the join_relids input is now wrong.\nEven if it happens to not be borked for current callers, that seems\nlike a mighty fragile assumption.\n\nLess-hasty v2 patch attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 22 Feb 2023 15:50:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong query results caused by loss of join quals"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 4:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I thought about this and decided that it's not really a problem.\n> have_relevant_joinclause is just a heuristic, and I don't think we\n> need to prioritize forming a join if the only relevant clauses look\n> like this. We won't be able to use such clauses for merge or hash,\n> so we're going to end up with an unconstrained nestloop, which isn't\n> something to be eager to form. The join ordering rules will take\n> care of forcing us to make the join when necessary.\n\n\nAgreed. And as I tried, in lots of cases joins with such clauses would\nbe accepted by have_join_order_restriction(), which always appears with\nhave_relevant_joinclause().\n\n\n> The only easy improvement I can see to make here is to apply the old\n> rules at inner joins. Maybe it's worth complicating the data structures\n> to be smarter at outer joins, but I rather doubt it: we could easily\n> expend more overhead than we'll save here by examining irrelevant ECs.\n> In any case, if there is a useful optimization here, it can be pursued\n> later.\n\n\nThis makes sense.\n\n\n> I changed it anyway after noting that (a) passing in the ojrelid is\n> needful to be able to distinguish inner and outer joins, and\n> (b) the existing comment about the join_relids input is now wrong.\n> Even if it happens to not be borked for current callers, that seems\n> like a mighty fragile assumption.\n\n\nAgreed. This is reasonable.\n\n\n> Less-hasty v2 patch attached.\n\n\nI think the patch is in good shape now.\n\nThanks\nRichard",
"msg_date": "Thu, 23 Feb 2023 17:37:44 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Wrong query results caused by loss of join quals"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Thu, Feb 23, 2023 at 4:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Less-hasty v2 patch attached.\n\n> I think the patch is in good shape now.\n\nPushed, thanks for reviewing!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Feb 2023 11:06:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong query results caused by loss of join quals"
}
] |
[
{
"msg_contents": "Hi, hackers.\n\nI found that CREATE DATABASE causes loss of a DDL result after a server crash.\nThe direct cause is that the checkpoint skips sync for page of template1's main fork\nbecause buffer status is not marked as BM_PERMANENT in BufferAlloc().\n\nHave you any knowledge about it?\n\nReproduction:\n1) Do initdb.\n2) Start server.\n3) Connect to 'postgres' database.\n4) Execute CREATE DATABASE statement with WAL_LOG strategy.\n5) Connect to 'template1' database.\n6) Update pg_class by executing some DDL.\n7) Do checkpoint.\n8) Crash server.\n\n[src/backend/storage/buffer/bufmgr.c]\n1437 if (relpersistence == RELPERSISTENCE_PERMANENT || forkNum == INIT_FORKNUM)\n1438 buf_state |= BM_TAG_VALID | BM_PERMANENT | BUF_USAGECOUNT_ONE;\n1439 else\n !!! walk this route !!!\n1440 buf_state |= BM_TAG_VALID | BUF_USAGECOUNT_ONE;\n\nThe above is caused by the following call.\nThe argument 'permanent' of ReadBufferWithoutRelcache() is passed to\nBufferAlloc() as 'relpersistence'.\n\n[src/backend/commands/]\n 298 buf = ReadBufferWithoutRelcache(rnode, MAIN_FORKNUM, blkno,\n 299 RBM_NORMAL, bstrategy, false);\n\nRegards\nRyo Matsumura\n\n\n",
"msg_date": "Wed, 15 Feb 2023 04:49:38 +0000",
"msg_from": "\"Ryo Matsumura (Fujitsu)\" <matsumura.ryo@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Wed, Feb 15, 2023 at 04:49:38AM +0000, Ryo Matsumura (Fujitsu) wrote:\n> Hi, hackers.\n> \n> I found that CREATE DATABASE occurs lost of DDL result after crash server.\n> The direct cause is that the checkpoint skips sync for page of template1's main fork\n> because buffer status is not marked as BM_PERMANENT in BufferAlloc().\n\nI had some trouble reproducing this when running the commands by hand.\n\nBut it reproduces fine like this:\n\n$ ./tmp_install/usr/local/pgsql/bin/postgres -D ./testrun/regress/regress/tmp_check/data& sleep 2; psql -h /tmp postgres -c \"DROP DATABASE IF EXISTS j\" -c \"CREATE DATABASE j STRATEGY wal_log\" && psql -h /tmp template1 -c \"CREATE TABLE t(i int)\" -c \"INSERT INTO t SELECT generate_series(1,9)\" -c CHECKPOINT; kill -9 %1; wait; ./tmp_install/usr/local/pgsql/bin/postgres -D ./testrun/regress/regress/tmp_check/data& sleep 9; psql -h /tmp template1 -c \"table t\"; kill %1\n[1] 29069\n2023-02-15 10:10:27.584 CST postmaster[29069] LOG: starting PostgreSQL 16devel on x86_64-linux, compiled by gcc-9.4.0, 64-bit\n2023-02-15 10:10:27.584 CST postmaster[29069] LOG: listening on IPv4 address \"127.0.0.1\", port 5432\n2023-02-15 10:10:27.663 CST postmaster[29069] LOG: listening on Unix socket \"/tmp/.s.PGSQL.5432\"\n2023-02-15 10:10:27.728 CST startup[29074] LOG: database system was shut down at 2023-02-15 10:10:13 CST\n2023-02-15 10:10:27.780 CST postmaster[29069] LOG: database system is ready to accept connections\nNOTICE: database \"j\" does not exist, skipping\nDROP DATABASE\nCREATE DATABASE\nCREATE TABLE\nINSERT 0 9\n2023-02-15 10:10:30.160 CST checkpointer[29072] LOG: checkpoint starting: immediate force wait\n2023-02-15 10:10:30.740 CST checkpointer[29072] LOG: checkpoint complete: wrote 943 buffers (5.8%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.070 s, sync=0.369 s, total=0.581 s; sync files=268, longest=0.274 s, average=0.002 s; distance=4322 kB, estimate=4322 kB; lsn=0/BA9E8A0, redo 
lsn=0/BA9E868\nCHECKPOINT\n[1]+ Killed ./tmp_install/usr/local/pgsql/bin/postgres -D ./testrun/regress/regress/tmp_check/data\n[1] 29088\n2023-02-15 10:10:31.664 CST postmaster[29088] LOG: starting PostgreSQL 16devel on x86_64-linux, compiled by gcc-9.4.0, 64-bit\n2023-02-15 10:10:31.665 CST postmaster[29088] LOG: listening on IPv4 address \"127.0.0.1\", port 5432\n2023-02-15 10:10:31.724 CST postmaster[29088] LOG: listening on Unix socket \"/tmp/.s.PGSQL.5432\"\n2023-02-15 10:10:31.780 CST startup[29094] LOG: database system was interrupted; last known up at 2023-02-15 10:10:30 CST\n2023-02-15 10:10:33.888 CST startup[29094] LOG: database system was not properly shut down; automatic recovery in progress\n2023-02-15 10:10:33.934 CST startup[29094] LOG: redo starts at 0/BA9E868\n2023-02-15 10:10:33.934 CST startup[29094] LOG: invalid record length at 0/BA9E918: wanted 24, got 0\n2023-02-15 10:10:33.934 CST startup[29094] LOG: redo done at 0/BA9E8A0 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n2023-02-15 10:10:34.073 CST checkpointer[29092] LOG: checkpoint starting: end-of-recovery immediate wait\n2023-02-15 10:10:34.275 CST checkpointer[29092] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.035 s, sync=0.026 s, total=0.257 s; sync files=2, longest=0.019 s, average=0.013 s; distance=0 kB, estimate=0 kB; lsn=0/BA9E918, redo lsn=0/BA9E918\n2023-02-15 10:10:34.321 CST postmaster[29088] LOG: database system is ready to accept connections\n2023-02-15 10:10:39.893 CST client backend[29110] psql ERROR: relation \"t\" does not exist at character 7\n2023-02-15 10:10:39.893 CST client backend[29110] psql STATEMENT: table t\nERROR: relation \"t\" does not exist\n\n\n",
"msg_date": "Wed, 15 Feb 2023 10:24:08 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Wed, Feb 15, 2023 at 04:49:38AM +0000, Ryo Matsumura (Fujitsu) wrote:\n> The above is occured by the following call.\n> The argument 'permanent' of ReadBufferWithoutRelcache() is passed to\n> BufferAlloc() as 'relpersistence'.\n> \n> [src/backend/commands/]\n> 298 buf = ReadBufferWithoutRelcache(rnode, MAIN_FORKNUM, blkno,\n> 299 RBM_NORMAL, bstrategy, false);\n\nIndeed, setting that to true (as per the attached patch) seems to fix this.\nI don't see any reason this _shouldn't_ be true from what I've read so far.\nWe're reading pg_class, which will probably never be UNLOGGED.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 15 Feb 2023 16:06:59 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 5:37 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Feb 15, 2023 at 04:49:38AM +0000, Ryo Matsumura (Fujitsu) wrote:\n> > The above is occured by the following call.\n> > The argument 'permanent' of ReadBufferWithoutRelcache() is passed to\n> > BufferAlloc() as 'relpersistence'.\n> >\n> > [src/backend/commands/]\n> > 298 buf = ReadBufferWithoutRelcache(rnode, MAIN_FORKNUM, blkno,\n> > 299 RBM_NORMAL, bstrategy, false);\n>\n> Indeed, setting that to true (as per the attached patch) seems to fix this.\n> I don't see any reason this _shouldn't_ be true from what I've read so far.\n> We're reading pg_class, which will probably never be UNLOGGED.\n\nYes, there is no reason to pass this as false, seems like this is\npassed false by mistake. And your patch fixes the issue.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Feb 2023 10:24:13 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 10:24:13AM +0530, Dilip Kumar wrote:\n> Yes, there is no reason to pass this as false, seems like this is\n> passed false by mistake. And your patch fixes the issue.\n\nSo, if I am understanding this stuff right, this issue can create data\ncorruption once a DDL updates any pages of pg_class stored in a\ntemplate database that gets copied by this routine. In this case, the\npatch sent makes sure that any page copied will get written once a\ncheckpoint kicks in. Shouldn't we have at least a regression test for\nsuch a scenario? The issue can happen when updating a template\ndatabase after creating a database from it, which is less worrying\nthan the initial impression I got, still I'd like to think that we\nshould have some coverage as of the special logic this code path\nrelies on for pg_class when reading its buffers.\n\nI have not given much attention to this area, but I am a bit\nsuspicious that enforcing the default as WAL_LOG was a good idea for\n15~, TBH. We are usually much more conservative when it comes to\nsuch choices, switching to the new behavior after a few years would\nhave been wiser..\n--\nMichael",
"msg_date": "Thu, 16 Feb 2023 14:26:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 02:26:55PM +0900, Michael Paquier wrote:\n> So, if I am understanding this stuff right, this issue can create data\n> corruption once a DDL updates any pages of pg_class stored in a\n> template database that gets copied by this routine. In this case, the\n> patch sent makes sure that any page copied will get written once a\n> checkpoint kicks in. Shouldn't we have at least a regression test for\n> such a scenario? The issue can happen when updating a template\n> database after creating a database from it, which is less worrying\n> than the initial impression I got, still I'd like to think that we\n> should have some coverage as of the special logic this code path\n> relies on for pg_class when reading its buffers.\n\nI was able to quickly hack together a TAP test that seems to reliably fail\nbefore the fix and pass afterwards. There's probably a better place for\nit, though...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 15 Feb 2023 22:41:20 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 12:11 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Thu, Feb 16, 2023 at 02:26:55PM +0900, Michael Paquier wrote:\n> > So, if I am understanding this stuff right, this issue can create data\n> > corruption once a DDL updates any pages of pg_class stored in a\n> > template database that gets copied by this routine. In this case, the\n> > patch sent makes sure that any page copied will get written once a\n> > checkpoint kicks in. Shouldn't we have at least a regression test for\n> > such a scenario? The issue can happen when updating a template\n> > database after creating a database from it, which is less worrying\n> > than the initial impression I got, still I'd like to think that we\n> > should have some coverage as of the special logic this code path\n> > relies on for pg_class when reading its buffers.\n>\n> I was able to quickly hack together a TAP test that seems to reliably fail\n> before the fix and pass afterwards. There's probably a better place for\n> it, though...\n\nI think the below change is not relevant to this bug right?\n\ndiff --git a/src/test/recovery/meson.build b/src/test/recovery/meson.build\nindex 209118a639..6e9f8a7c7f 100644\n--- a/src/test/recovery/meson.build\n+++ b/src/test/recovery/meson.build\n@@ -39,6 +39,7 @@ tests += {\n 't/031_recovery_conflict.pl',\n 't/032_relfilenode_reuse.pl',\n 't/033_replay_tsp_drops.pl',\n+ 't/100_bugs.pl',\n ],\n },\n }\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Feb 2023 12:31:54 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 12:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I think the below change is not relevant to this bug right?\n>\n> diff --git a/src/test/recovery/meson.build b/src/test/recovery/meson.build\n> index 209118a639..6e9f8a7c7f 100644\n> --- a/src/test/recovery/meson.build\n> +++ b/src/test/recovery/meson.build\n> @@ -39,6 +39,7 @@ tests += {\n> 't/031_recovery_conflict.pl',\n> 't/032_relfilenode_reuse.pl',\n> 't/033_replay_tsp_drops.pl',\n> + 't/100_bugs.pl',\n> ],\n> },\n> }\n\nWhy not? The patch creates 100_bugs.pl so it also adds it to meson.build.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Feb 2023 12:37:57 +0530",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 12:37:57PM +0530, Robert Haas wrote:\n> Why not? The patch creates 100_bugs.pl so it also adds it to meson.build.\n\nIt would not matter for REL_15_STABLE, but on HEAD all the new TAP\ntest files have to be added in their respective meson.build. If you\ndon't do that, the CFBot would not test it, neither would a local\nbuild with meson.\n\nAdding a test in src/test/recovery/ is OK by me. This is a recovery\ncase, and that's a.. Bug. Another possible name would be something\nlike 0XX_create_database.pl, now that's me being picky.\n--\nMichael",
"msg_date": "Thu, 16 Feb 2023 16:17:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 12:38 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Feb 16, 2023 at 12:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I think the below change is not relevant to this bug right?\n> >\n> > diff --git a/src/test/recovery/meson.build b/src/test/recovery/meson.build\n> > index 209118a639..6e9f8a7c7f 100644\n> > --- a/src/test/recovery/meson.build\n> > +++ b/src/test/recovery/meson.build\n> > @@ -39,6 +39,7 @@ tests += {\n> > 't/031_recovery_conflict.pl',\n> > 't/032_relfilenode_reuse.pl',\n> > 't/033_replay_tsp_drops.pl',\n> > + 't/100_bugs.pl',\n> > ],\n> > },\n> > }\n>\n> Why not? The patch creates 100_bugs.pl so it also adds it to meson.build.\n\nYeah my bad, I somehow assumed this was an existing file.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Feb 2023 13:24:08 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On 2023-02-16 12:37:57 +0530, Robert Haas wrote:\n> The patch creates 100_bugs.pl\n\nWhat's the story behind 100_bugs.pl? This name clearly is copied from\nsrc/test/subscription/t/100_bugs.pl - but I've never understood why that is\noutside of the normal numbering space.\n\n\n",
"msg_date": "Thu, 16 Feb 2023 13:29:34 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 2:59 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-02-16 12:37:57 +0530, Robert Haas wrote:\n> > The patch creates 100_bugs.pl\n>\n> What's the story behind 100_bugs.pl? This name clearly is copied from\n> src/test/subscription/t/100_bugs.pl - but I've never understood why that is\n> outside of the normal numbering space.\n>\n\nYeah, I have also previously wondered about this name for\nsrc/test/subscription/t/100_bugs.pl. My guess is that it has been kept\nto distinguish it from the other feature tests which have numbering\nstarting from 001.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 17 Feb 2023 08:54:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On 16.02.23 22:29, Andres Freund wrote:\n> What's the story behind 100_bugs.pl? This name clearly is copied from\n> src/test/subscription/t/100_bugs.pl - but I've never understood why that is\n> outside of the normal numbering space.\n\nMainly to avoid awkwardness for backpatching. The number of tests in \nsrc/test/subscription/ varies quite a bit across branches.\n\n\n",
"msg_date": "Fri, 17 Feb 2023 15:13:32 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 03:13:32PM +0100, Peter Eisentraut wrote:\n> On 16.02.23 22:29, Andres Freund wrote:\n>> What's the story behind 100_bugs.pl? This name clearly is copied from\n>> src/test/subscription/t/100_bugs.pl - but I've never understood why that is\n>> outside of the normal numbering space.\n> \n> Mainly to avoid awkwardness for backpatching. The number of tests in\n> src/test/subscription/ varies quite a bit across branches.\n\nI'm happy to move this new test to wherever folks think it should go. I'll\nlook around to see if I can find a better place, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 17 Feb 2023 14:35:30 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 02:35:30PM -0800, Nathan Bossart wrote:\n> I'm happy to move this new test to wherever folks think it should go. I'll\n> look around to see if I can find a better place, too.\n\nI think that src/test/recovery/ is the best fit, because this stresses\na code path for WAL replay on pg_class for the template db. The name\nis not specific enough, though, why not just using something like\n0NN_create_database.pl?\n--\nMichael",
"msg_date": "Mon, 20 Feb 2023 17:02:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "> On Thu, Feb 16, 2023 at 10:24:13AM +0530, Dilip Kumar wrote:\n> > Yes, there is no reason to pass this as false, seems like this is\n> > passed false by mistake. And your patch fixes the issue.\n\nOn Thu, Feb 16, 2023 at 02:26:55PM +0900, Michael Paquier wrote:\n> So, if I am understanding this stuff right, this issue can create data\n> corruption once a DDL updates any pages of pg_class stored in a\n> template database that gets copied by this routine. In this case, the\n> patch sent makes sure that any page copied will get written once a\n> checkpoint kicks in.\n\nThank you for comment and patch.\nI think that the patch for dbcommand.c is fixed.\nSo I apply to my environment.\n\n> I have not given much attention to this area, but I am a bit\n> suspicious that enforcing the default as WAL_LOG was a good idea for\n> 15~, TBH. We are usually much more conservative when it comes to\n> such choices, switching to the new behavior after a few years would\n> have been wiser..\n\nI think so too. I was surprised that new strategy is default.\n\nRegards\nRyo Matsumura\n\n\n",
"msg_date": "Mon, 20 Feb 2023 08:22:02 +0000",
"msg_from": "\"Ryo Matsumura (Fujitsu)\" <matsumura.ryo@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 05:02:01PM +0900, Michael Paquier wrote:\n> On Fri, Feb 17, 2023 at 02:35:30PM -0800, Nathan Bossart wrote:\n>> I'm happy to move this new test to wherever folks think it should go. I'll\n>> look around to see if I can find a better place, too.\n> \n> I think that src/test/recovery/ is the best fit, because this stresses\n> a code path for WAL replay on pg_class for the template db. The name\n> is not specific enough, though, why not just using something like\n> 0NN_create_database.pl?\n\nOkay. I've renamed the test file as suggested in v3.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 20 Feb 2023 16:43:22 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 04:43:22PM -0800, Nathan Bossart wrote:\n> On Mon, Feb 20, 2023 at 05:02:01PM +0900, Michael Paquier wrote:\n>> On Fri, Feb 17, 2023 at 02:35:30PM -0800, Nathan Bossart wrote:\n>>> I'm happy to move this new test to wherever folks think it should go. I'll\n>>> look around to see if I can find a better place, too.\n>> \n>> I think that src/test/recovery/ is the best fit, because this stresses\n>> a code path for WAL replay on pg_class for the template db. The name\n>> is not specific enough, though, why not just using something like\n>> 0NN_create_database.pl?\n> \n> Okay. I've renamed the test file as suggested in v3.\n\nThe test enforces a checkpoint after the table creation on the\ntemplate, so what about testing it also without a checkpoint, like the\nextended version attached? I have tweaked a few things in the test,\nwhile on it.\n--\nMichael",
"msg_date": "Tue, 21 Feb 2023 15:13:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Tue, Feb 21, 2023 at 03:13:10PM +0900, Michael Paquier wrote:\n> The test enforces a checkpoint after the table creation on the\n> template, so what about testing it also without a checkpoint, like the\n> extended version attached? I have tweaked a few things in the test,\n> while on it.\n\nWhat is the purpose of testing it without the checkpoint? Other than that\nquestion, the patch looks reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 21 Feb 2023 10:00:11 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Tue, Feb 21, 2023 at 10:00:11AM -0800, Nathan Bossart wrote:\n> What is the purpose of testing it without the checkpoint?\n\nPerhaps none, I was wondering whether it would be worth testing that\nwith the flush phase, but perhaps that's just extra cycles wasted at\nthis point.\n\n> Other than that\n> question, the patch looks reasonable to me.\n\nOkay, applied and backpatched with a minimal test set, then. I have\nkept the tweaks I did to the tests with extra comments.\n--\nMichael",
"msg_date": "Wed, 22 Feb 2023 12:30:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 12:30:20PM +0900, Michael Paquier wrote:\n> Okay, applied and backpatched with a minimal test set, then. I have\n> kept the tweaks I did to the tests with extra comments.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 22 Feb 2023 09:08:28 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DDL result is lost by CREATE DATABASE with WAL_LOG strategy"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nWhen I refer to the GUC \"max_locks_per_transaction\", I find that the description\r\nof the shared lock table size in pg-doc[1] is inconsistent with the code\r\n(guc_table.c). BTW, the GUC \"max_predicate_locks_per_xact\" has similar problems.\r\n\r\nI think the descriptions in pg-doc are correct.\r\n- GUC \"max_locks_per_transaction\"\r\nPlease refer to the macro \"NLOCKENTS\" used to obtain max_table_size in the\r\nfunction InitLocks.\r\n\r\n- GUC \"max_predicate_locks_per_xact\"\r\nPlease refer to the macro \"NPREDICATELOCKTARGETENTS\" used to obtain\r\nmax_table_size in the function InitPredicateLocks.\r\n\r\nAttach the patch to fix the descriptions of these two GUCs in guc_table.c.\r\n\r\n[1] - https://www.postgresql.org/docs/devel/runtime-config-locks.html\r\n\r\nRegards,\r\nWang Wei",
"msg_date": "Wed, 15 Feb 2023 08:16:43 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Fix the description of GUC \"max_locks_per_transaction\" and\n \"max_pred_locks_per_transaction\" in guc_table.c"
},
{
"msg_contents": "On Wed, Feb 15, 2023 at 08:16:43AM +0000, wangw.fnst@fujitsu.com wrote:\n> When I refer to the GUC \"max_locks_per_transaction\", I find that the description\n> of the shared lock table size in pg-doc[1] is inconsistent with the code\n> (guc_table.c). BTW, the GUC \"max_predicate_locks_per_xact\" has similar problems.\n> \n> I think the descriptions in pg-doc are correct.\n> - GUC \"max_locks_per_transaction\"\n> Please refer to the macro \"NLOCKENTS\" used to obtain max_table_size in the\n> function InitLocks.\n> \n> - GUC \"max_predicate_locks_per_xact\"\n> Please refer to the macro \"NPREDICATELOCKTARGETENTS\" used to obtain\n> max_table_size in the function InitPredicateLocks.\n\nThe GUC description for max_locks_per_transaction was first added in\nb700a67 (July 2003). Neither the GUC description nor the documentation was\nupdated when max_prepared_transactions was introduced in d0a8968 (July\n2005). However, the documentation was later fixed via 78ef2d3 (August\n2005). It looks like the GUC description for\nmax_predicate_locks_per_transaction was wrong from the start. In dafaa3e\n(February 2011), the GUC description does not include\nmax_prepared_transactions, but the documentation does.\n\nIt's interesting that the documentation cites max_connections, as the\ntables are sized using MaxBackends, which includes more than just\nmax_connections (e.g., autovacuum_max_workers, max_worker_processes,\nmax_wal_senders). After some digging, I see that MaxBackends was the\noriginal variable used for max_connections (which was called max_backends\nuntil 648677c (July 2000)), and it wasn't until autovacuum_max_workers was\nintroduced in e2a186b (April 2007) before max_connections got its own\nMaxConnections variable and started diverging from MaxBackends.\n\nSo, even with your patch applied, I don't think the formulas are correct.\nI don't know if it's worth writing out the exact formula, though. 
It\ndoesn't seem to be kept up-to-date, and I don't know if users would choose\ndifferent values for max_locks_per_transaction if it _was_ updated.\nPerhaps max_connections is a good enough approximation of MaxBackends most\nof the time...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 21 Feb 2023 16:37:16 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix the description of GUC \"max_locks_per_transaction\" and\n \"max_pred_locks_per_transaction\" in guc_table.c"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 8:37 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\r\n> On Wed, Feb 15, 2023 at 08:16:43AM +0000, wangw.fnst@fujitsu.com wrote:\r\n> > When I refer to the GUC \"max_locks_per_transaction\", I find that the\r\n> description\r\n> > of the shared lock table size in pg-doc[1] is inconsistent with the code\r\n> > (guc_table.c). BTW, the GUC \"max_predicate_locks_per_xact\" has similar\r\n> problems.\r\n> >\r\n> > I think the descriptions in pg-doc are correct.\r\n> > - GUC \"max_locks_per_transaction\"\r\n> > Please refer to the macro \"NLOCKENTS\" used to obtain max_table_size in the\r\n> > function InitLocks.\r\n> >\r\n> > - GUC \"max_predicate_locks_per_xact\"\r\n> > Please refer to the macro \"NPREDICATELOCKTARGETENTS\" used to obtain\r\n> > max_table_size in the function InitPredicateLocks.\r\n> \r\n> The GUC description for max_locks_per_transaction was first added in\r\n> b700a67 (July 2003). Neither the GUC description nor the documentation was\r\n> updated when max_prepared_transactions was introduced in d0a8968 (July\r\n> 2005). However, the documentation was later fixed via 78ef2d3 (August\r\n> 2005). It looks like the GUC description for\r\n> max_predicate_locks_per_transaction was wrong from the start. In dafaa3e\r\n> (February 2011), the GUC description does not include\r\n> max_prepared_transactions, but the documentation does.\r\n> \r\n> It's interesting that the documentation cites max_connections, as the\r\n> tables are sized using MaxBackends, which includes more than just\r\n> max_connections (e.g., autovacuum_max_workers, max_worker_processes,\r\n> max_wal_senders). 
After some digging, I see that MaxBackends was the\r\n> original variable used for max_connections (which was called max_backends\r\n> until 648677c (July 2000)), and it wasn't until autovacuum_max_workers was\r\n> introduced in e2a186b (April 2007) before max_connections got its own\r\n> MaxConnections variable and started diverging from MaxBackends.\r\n> \r\n> So, even with your patch applied, I don't think the formulas are correct.\r\n> I don't know if it's worth writing out the exact formula, though. It\r\n> doesn't seem to be kept up-to-date, and I don't know if users would choose\r\n> different values for max_locks_per_transaction if it _was_ updated.\r\n> Perhaps max_connections is a good enough approximation of MaxBackends most\r\n> of the time...\r\n\r\nThanks very much for your careful review.\r\n\r\nYes, you are right. I think the formulas in the v1 patch are all approximations.\r\nI think the exact formula (see function InitializeMaxBackends) is:\r\n```\r\n\tmax_locks_per_transaction * (max_connections + autovacuum_max_workers + 1 + \r\n\t\t\t\t\t\t\t\t max_worker_processes + max_wal_senders +\r\n\t\t\t\t\t\t\t\t max_prepared_transactions)\r\n```\r\n\r\nAfter some rethinking, I think users can easily get exact value according to\r\nexact formula, and I think using accurate formula can help users adjust\r\nmax_locks_per_transaction or max_predicate_locks_per_transaction if needed. So,\r\nI used the exact formulas in the attached v2 patch.\r\n\r\nRegards,\r\nWang Wei",
"msg_date": "Wed, 22 Feb 2023 12:40:07 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Fix the description of GUC \"max_locks_per_transaction\" and\n \"max_pred_locks_per_transaction\" in guc_table.c"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 12:40:07PM +0000, wangw.fnst@fujitsu.com wrote:\n> On Wed, Feb 22, 2023 at 8:37 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> So, even with your patch applied, I don't think the formulas are correct.\n>> I don't know if it's worth writing out the exact formula, though. It\n>> doesn't seem to be kept up-to-date, and I don't know if users would choose\n>> different values for max_locks_per_transaction if it _was_ updated.\n>> Perhaps max_connections is a good enough approximation of MaxBackends most\n>> of the time...\n> \n> Thanks very much for your careful review.\n> \n> Yes, you are right. I think the formulas in the v1 patch are all approximations.\n> I think the exact formula (see function InitializeMaxBackends) is:\n> ```\n> \tmax_locks_per_transaction * (max_connections + autovacuum_max_workers + 1 + \n> \t\t\t\t\t\t\t\t max_worker_processes + max_wal_senders +\n> \t\t\t\t\t\t\t\t max_prepared_transactions)\n> ```\n> \n> After some rethinking, I think users can easily get exact value according to\n> exact formula, and I think using accurate formula can help users adjust\n> max_locks_per_transaction or max_predicate_locks_per_transaction if needed. So,\n> I used the exact formulas in the attached v2 patch.\n\nIMHO this is too verbose. Perhaps it could be simplified to something like\n\n\tThe shared lock table is sized on the assumption that at most\n\tmax_locks_per_transaction objects per eligible process or prepared\n\ttransaction will need to be locked at any one time.\n\nBut if others disagree and think the full formula is appropriate, I'm fine\nwith it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 22 Feb 2023 09:07:37 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix the description of GUC \"max_locks_per_transaction\" and\n \"max_pred_locks_per_transaction\" in guc_table.c"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Wed, Feb 22, 2023 at 12:40:07PM +0000, wangw.fnst@fujitsu.com wrote:\n>> After some rethinking, I think users can easily get exact value according to\n>> exact formula, and I think using accurate formula can help users adjust\n>> max_locks_per_transaction or max_predicate_locks_per_transaction if needed. So,\n>> I used the exact formulas in the attached v2 patch.\n\n> IMHO this is too verbose.\n\nYeah, it's impossibly verbose. Even the current wording does not fit\nnicely in pg_settings output.\n\n> Perhaps it could be simplified to something like\n> \tThe shared lock table is sized on the assumption that at most\n> \tmax_locks_per_transaction objects per eligible process or prepared\n> \ttransaction will need to be locked at any one time.\n\nI like the \"per eligible process\" wording, at least for guc_tables.c;\nor maybe it could be \"per server process\"? That would be more\naccurate and not much longer than what we have now.\n\nI've got mixed emotions about trying to put the exact formulas into\nthe SGML docs either. Space isn't such a constraint there, but I\nthink the info would soon go out of date (indeed, I think the existing\nwording was once exactly accurate), and I'm not sure it's worth trying\nto maintain it precisely.\n\nOne reason that I'm not very excited about this is that in fact the\nformula seen in the source code is not exact either; it's a lower\nbound for how much space will be available. That's because we throw\nin 100K slop at the bottom of the shmem sizing calculation, and a\nlarge chunk of that remains available to be eaten by the lock table\nif necessary.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Apr 2023 11:47:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix the description of GUC \"max_locks_per_transaction\" and\n \"max_pred_locks_per_transaction\" in guc_table.c"
},
{
"msg_contents": "On Tues, Apr 4, 2023 at 23:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n> Nathan Bossart <nathandbossart@gmail.com> writes:\r\n> > On Wed, Feb 22, 2023 at 12:40:07PM +0000, wangw.fnst@fujitsu.com wrote:\r\n> >> After some rethinking, I think users can easily get exact value according to\r\n> >> exact formula, and I think using accurate formula can help users adjust\r\n> >> max_locks_per_transaction or max_predicate_locks_per_transaction if\r\n> needed. So,\r\n> >> I used the exact formulas in the attached v2 patch.\r\n> \r\n> > IMHO this is too verbose.\r\n> \r\n> Yeah, it's impossibly verbose. Even the current wording does not fit\r\n> nicely in pg_settings output.\r\n> \r\n> > Perhaps it could be simplified to something like\r\n> > \tThe shared lock table is sized on the assumption that at most\r\n> > \tmax_locks_per_transaction objects per eligible process or prepared\r\n> > \ttransaction will need to be locked at any one time.\r\n> \r\n> I like the \"per eligible process\" wording, at least for guc_tables.c;\r\n> or maybe it could be \"per server process\"? That would be more\r\n> accurate and not much longer than what we have now.\r\n> \r\n> I've got mixed emotions about trying to put the exact formulas into\r\n> the SGML docs either. Space isn't such a constraint there, but I\r\n> think the info would soon go out of date (indeed, I think the existing\r\n> wording was once exactly accurate), and I'm not sure it's worth trying\r\n> to maintain it precisely.\r\n\r\nThanks both for sharing your opinions.\r\nI agree that verbose descriptions make maintenance difficult.\r\nFor consistency, I unified the formulas in guc_tables.c and pg-doc into the same\r\nsuggested short formula. Attach the new patch.\r\n\r\n> One reason that I'm not very excited about this is that in fact the\r\n> formula seen in the source code is not exact either; it's a lower\r\n> bound for how much space will be available. 
That's because we throw\r\n> in 100K slop at the bottom of the shmem sizing calculation, and a\r\n> large chunk of that remains available to be eaten by the lock table\r\n> if necessary.\r\n\r\nThanks for sharing this.\r\nSince no one has reported related issues, I'm also fine to close this entry if\r\nthis related modification is not necessary.\r\n\r\nRegards,\r\nWang Wei",
"msg_date": "Fri, 7 Apr 2023 08:46:37 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Fix the description of GUC \"max_locks_per_transaction\" and\n \"max_pred_locks_per_transaction\" in guc_table.c"
},
{
"msg_contents": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com> writes:\n> On Tues, Apr 4, 2023 at 23:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I like the \"per eligible process\" wording, at least for guc_tables.c;\n>> or maybe it could be \"per server process\"? That would be more\n>> accurate and not much longer than what we have now.\n\n> Thanks both for sharing your opinions.\n> I agree that verbose descriptions make maintenance difficult.\n> For consistency, I unified the formulas in guc_tables.c and pg-doc into the same\n> suggested short formula. Attach the new patch.\n\nAfter studying this for awhile, I decided \"server process\" is probably\nthe better term --- people will have some idea what that means, while\n\"eligible process\" is not a term we use anywhere else. Pushed with\nthat change and some minor other wordsmithing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 Apr 2023 13:32:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix the description of GUC \"max_locks_per_transaction\" and\n \"max_pred_locks_per_transaction\" in guc_table.c"
},
{
"msg_contents": "On Sat, Apr 8, 2023 at 1:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n> \"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com> writes:\r\n> > On Tues, Apr 4, 2023 at 23:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n> >> I like the \"per eligible process\" wording, at least for guc_tables.c;\r\n> >> or maybe it could be \"per server process\"? That would be more\r\n> >> accurate and not much longer than what we have now.\r\n> \r\n> > Thanks both for sharing your opinions.\r\n> > I agree that verbose descriptions make maintenance difficult.\r\n> > For consistency, I unified the formulas in guc_tables.c and pg-doc into the same\r\n> > suggested short formula. Attach the new patch.\r\n> \r\n> After studying this for awhile, I decided \"server process\" is probably\r\n> the better term --- people will have some idea what that means, while\r\n> \"eligible process\" is not a term we use anywhere else. Pushed with\r\n> that change and some minor other wordsmithing.\r\n\r\nMake sense to me\r\nThanks for pushing.\r\n\r\nRegards,\r\nWang Wei\r\n",
"msg_date": "Mon, 10 Apr 2023 01:24:42 +0000",
"msg_from": "\"Wei Wang (Fujitsu)\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Fix the description of GUC \"max_locks_per_transaction\" and\n \"max_pred_locks_per_transaction\" in guc_table.c"
},
{
"msg_contents": "Hi,\n\nOn Fri, Apr 07, 2023 at 01:32:22PM -0400, Tom Lane wrote:\n> \"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com> writes:\n> > On Tues, Apr 4, 2023 at 23:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I like the \"per eligible process\" wording, at least for guc_tables.c;\n> >> or maybe it could be \"per server process\"? That would be more\n> >> accurate and not much longer than what we have now.\n> \n> > Thanks both for sharing your opinions.\n> > I agree that verbose descriptions make maintenance difficult.\n> > For consistency, I unified the formulas in guc_tables.c and pg-doc into the same\n> > suggested short formula. Attach the new patch.\n> \n> After studying this for awhile, I decided \"server process\" is probably\n> the better term --- people will have some idea what that means, while\n> \"eligible process\" is not a term we use anywhere else. Pushed with\n> that change and some minor other wordsmithing.\n\nI stumbled upon this change while looking at the documentation searching\nfor guidance and what max_locks_per_transactions should be set to (or\nrather, a pointer about max_locks_per_transactions not actually being\n\"per transaction\", but a shared pool of roughly\nmax_locks_per_transactions * max_connections).\n\nWhile I agree that the exact formula is too verbose, I find the current\nwording (\"per server process or prepared transaction\") to be misleading;\nI can see how somebody sees that as a dynamic limit based on the current\nnumber of running server processes or prepared transactions, not\nsomething that is allocated at server start based on some hardcoded\nGUCs.\n\nI don't have a good alternative wording for now, but I wanted to point\nout that currently the wording does not seem to imply\nmax_{connection,prepared_transactions} being at play at all. 
Probably\nthe GUC description cannot be made much clearer without making it too\nverbose, but I think the description in config.sgml has more leeway to\nget a mention of max_connections back.\n\n\nMichael\n\n\n",
"msg_date": "Fri, 26 Apr 2024 13:40:50 +0200",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix the description of GUC \"max_locks_per_transaction\" and\n \"max_pred_locks_per_transaction\" in guc_table.c"
}
] |
[
{
"msg_contents": "Hi All,\n\nI'd like to report what seems to be a missing optimization opportunity or understand why it is not possible to achieve.\n\nTLDR; additional index column B specified in CREATE INDEX .. (A) INCLUDE(B) is not used to filter rows in queries like WHERE B = $1 ORDER BY A during IndexScan. https://dbfiddle.uk/iehtq44L\n\n\nTake following database:\n\n\n CREATE TABLE t(\n a integer NOT NULL,\n b integer NOT NULL,\n d integer NOT NULL\n );\n\n CREATE UNIQUE INDEX t_a_include_b ON t (a) INCLUDE (b);\n -- I'd expect index above to behave as index below for the purpose\n -- of this query\n -- CREATE UNIQUE INDEX ON t(a,b);\n\n INSERT INTO t(\n SELECT random() * 100000000 as a,\n random() * 3 as b,\n generate_series as d FROM generate_series(1,200000)\n ) ON CONFLICT DO NOTHING;\n\n\nIf we filter on `a` and `b` columns while scanning index created as `(a) INCLUDE (b)` it seems to be fetching tuples from heap to check for condition `b = 4` despite both columns available in the index:\n\n SELECT * FROM t WHERE a > 1000000 and b = 4 ORDER BY a ASC LIMIT 10;\n\n\nHere is the plan (notice high \"shared hit\"):\n\n Limit (cost=0.42..10955.01 rows=1 width=12) (actual time=84.283..84.284 rows=0 loops=1)\n Output: a, b, d\n Buffers: shared hit=198307\n -> Index Scan using t_a_include_b on public.t (cost=0.42..10955.01 rows=1 width=12) (actual time=84.280..84.281 rows=0 loops=1)\n Output: a, b, d\n Index Cond: (t.a > 1000000)\n Filter: (t.b = 4)\n Rows Removed by Filter: 197805\n Buffers: shared hit=198307\n Planning:\n Buffers: shared hit=30\n Planning Time: 0.201 ms\n Execution Time: 84.303 ms\n\n\nAnd here is the plan with index on (a,b).\n\n Limit (cost=0.42..4447.90 rows=1 width=12) (actual time=6.883..6.884 rows=0 loops=1)\n Output: a, b, d\n Buffers: shared hit=613\n -> Index Scan using t_a_b_idx on public.t (cost=0.42..4447.90 rows=1 width=12) (actual time=6.880..6.881 rows=0 loops=1)\n Output: a, b, d\n Index Cond: ((t.a > 1000000) AND (t.b = 4))\n 
Buffers: shared hit=613\n Planning:\n Buffers: shared hit=41\n Planning Time: 0.314 ms\n Execution Time: 6.910 ms\n\n\nBecause query doesn't sort on `b`, only filters on it while sorting on `a`, I'd expect indexes `(a) INCLUDE (b)` and `(a,b)` behave exactly the same with this particular query.\n\nInterestingly, IndexOnlyScan is capable of using additional columns to filter rows without fetching them from the heap, but only for visible tuples:\n\n VACUUM FREEZE t;\n SELECT a,b FROM t WHERE a > 1000000 and b = 4 ORDER BY a ASC LIMIT 10;\n\n Limit (cost=0.42..6619.76 rows=1 width=8) (actual time=18.479..18.480 rows=0 loops=1)\n Output: a, b\n Buffers: shared hit=662\n -> Index Only Scan using t_a_include_b on public.t (cost=0.42..6619.76 rows=1 width=8) (actual time=18.477..18.477 rows=0 loops=1)\n Output: a, b\n Index Cond: (t.a > 1000000)\n Filter: (t.b = 4)\n Rows Removed by Filter: 197771\n Heap Fetches: 0\n Buffers: shared hit=662\n\nRemoving VACUUM makes it behave like IndexScan and fetch candidate tuples from heap all while returning zero rows in the result.\n\n\nTo make query plan comparable I had to force index scan on both with:\n\n SET enable_bitmapscan to off;\n SET enable_seqscan to off;\n SET max_parallel_workers_per_gather = 0;\n\nSelf contained fully reproducible example is in https://dbfiddle.uk/iehtq44L\n\nRegards,\nMaxim\n\n\n",
"msg_date": "Wed, 15 Feb 2023 08:57:31 +0000",
"msg_from": "Maxim Ivanov <hi@yamlcoder.me>",
"msg_from_op": true,
"msg_subject": "Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 2/15/23 09:57, Maxim Ivanov wrote:\n> Hi All,\n> \n> I'd like to report what seems to be a missing optimization \n> opportunity or understand why it is not possible to achieve.\n> \n> TLDR; additional index column B specified in CREATE INDEX .. (A) \n> INCLUDE(B) is not used to filter rows in queries like WHERE B = $1\n> ORDER BY A during IndexScan. https://dbfiddle.uk/iehtq44L\n> \n> ...\n> \n> Here is the plan (notice high \"shared hit\"):\n> \n> Limit (cost=0.42..10955.01 rows=1 width=12) (actual time=84.283..84.284 rows=0 loops=1)\n> Output: a, b, d\n> Buffers: shared hit=198307\n> -> Index Scan using t_a_include_b on public.t (cost=0.42..10955.01 rows=1 width=12) (actual time=84.280..84.281 rows=0 loops=1)\n> Output: a, b, d\n> Index Cond: (t.a > 1000000)\n> Filter: (t.b = 4)\n> Rows Removed by Filter: 197805\n> Buffers: shared hit=198307\n> Planning:\n> Buffers: shared hit=30\n> Planning Time: 0.201 ms\n> Execution Time: 84.303 ms\n> \n\nYeah. The reason for this behavior is pretty simple:\n\n1) When matching clauses to indexes in match_clause_to_index(), we only\n look at key columns (nkeycolumns). We'd need to check all columns\n (ncolumns) and remember if the clause matched a key or included one.\n\n2) index_getnext_slot would need to get \"candidate\" TIDs using\n conditions on keys, and then check the clauses on included\n columns.\n\nSeems doable, unless I'm missing some fatal issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 15 Feb 2023 14:48:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 2/15/23 09:57, Maxim Ivanov wrote:\n>> TLDR; additional index column B specified in CREATE INDEX .. (A) \n>> INCLUDE(B) is not used to filter rows in queries like WHERE B = $1\n>> ORDER BY A during IndexScan. https://dbfiddle.uk/iehtq44L\n\n> Seems doable, unless I'm missing some fatal issue.\n\nPartly this is lack of round tuits, but there's another significant\nissue: there very likely are index entries corresponding to dead heap\nrows. Applying random user-defined quals to values found in such rows\ncould produce semantic anomalies; for example, divide-by-zero failures\neven though you deleted all the rows having a zero in that column.\n\nThis isn't a problem for operators found in operator families, because\nwe trust those to not have undesirable side effects like raising\ndata-dependent errors. But it'd be an issue if we started to apply\nquals that aren't index quals directly to index rows before doing\nthe heap liveness check. (And, of course, once you've fetched the\nheap row there's no point in having a special path for columns\navailable from the index.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Feb 2023 10:18:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 2/15/23 16:18, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> On 2/15/23 09:57, Maxim Ivanov wrote:\n>>> TLDR; additional index column B specified in CREATE INDEX .. (A) \n>>> INCLUDE(B) is not used to filter rows in queries like WHERE B = $1\n>>> ORDER BY A during IndexScan. https://dbfiddle.uk/iehtq44L\n> \n>> Seems doable, unless I'm missing some fatal issue.\n> \n> Partly this is lack of round tuits, but there's another significant\n> issue: there very likely are index entries corresponding to dead heap\n> rows. Applying random user-defined quals to values found in such rows\n> could produce semantic anomalies; for example, divide-by-zero failures\n> even though you deleted all the rows having a zero in that column.\n> \n> This isn't a problem for operators found in operator families, because\n> we trust those to not have undesirable side effects like raising\n> data-dependent errors. But it'd be an issue if we started to apply\n> quals that aren't index quals directly to index rows before doing\n> the heap liveness check. (And, of course, once you've fetched the\n> heap row there's no point in having a special path for columns\n> available from the index.)\n\nSure, but we can do the same VM check as index-only scan, right?\n\nThat would save some of the I/O to fetch the heap tuple, as long as the\npage is all-visible and the filter eliminates the tuples. It makes the\ncosting a bit trickier, because it needs to consider both how many pages\nare all-visible and selectivity of the condition.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 15 Feb 2023 18:01:50 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "> This isn't a problem for operators found in operator families, because\n> we trust those to not have undesirable side effects like raising\n> data-dependent errors. But it'd be an issue if we started to apply\n> quals that aren't index quals directly to index rows before doing\n> the heap liveness check. (And, of course, once you've fetched the\n> heap row there's no point in having a special path for columns\n> available from the index.)\n\nAssuming operators are pure and don't have global side effects, is it possible to ignore any error during that check? If tuple is not visible it shouldn't matter, if it is visible then error will be reported by the same routine which does filtering now (ExecQual?).\n\n\nIf not, then limiting this optimization to builtin ops is something I can live with :)\n\n\n\n",
"msg_date": "Wed, 15 Feb 2023 17:15:50 +0000",
"msg_from": "Maxim Ivanov <hi+postgresql@yamlcoder.me>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "Hi,\n\nI took a stab at this and implemented the trick with the VM - during\nindex scan, we also extract the filters that only need the indexed\nattributes (just like in IOS). And then, during the execution we:\n\n 1) scan the index using the scan keys (as before)\n\n 2) if the heap page is all-visible, we check the new filters that can\n be evaluated on the index tuple\n\n 3) fetch the heap tuple and evaluate the filters\n\nThis is pretty much exactly the same thing we do for IOS, so I don't see\nwhy this would be incorrect while IOS is correct.\n\nThis also adds \"Index Filter\" to explain output, to show which filters\nare executed on the index tuple (at the moment the filters are a subset\nof \"Filter\"), so if the index tuple matches we'll execute them again on\nthe heap tuple. I guess that could be fixed by having two \"filter\"\nlists, depending on whether we were able to evaluate the index filters.\n\nMost of the patch is pretty mechanical - particularly the planning part\nis about identifying filters that can be evaluated on the index tuple,\nand that code was mostly shamelessly copied from index-only scan.\n\nThe matching of filters to index is done in check_index_filter(), and\nit's simpler than match_clause_to_indexcol() as it does not need to\nconsider operators etc. (I think). But maybe it should be careful about\nother things, not sure.\n\nThe actual magic happens in IndexNext (nodeIndexscan.c). As mentioned\nearlier, the idea is to check VM and evaluate the filters on the index\ntuple if possible, similar to index-only scans. Except that we then have\nto fetch the heap tuple. Unfortunately, this means the code can't use\nindex_getnext_slot() anymore. 
Perhaps we should invent a new variant\nthat'd allow evaluating the index filters in between.\n\n\nWith the patch applied, the query plan changes from:\n\n QUERY PLAN\n -------------------------------------------------------------------\n Limit (cost=0.42..10929.89 rows=1 width=12)\n (actual time=94.649..94.653 rows=0 loops=1)\n Buffers: shared hit=197575 read=661\n -> Index Scan using t_a_include_b on t\n (cost=0.42..10929.89 rows=1 width=12)\n (actual time=94.646..94.647 rows=0 loops=1)\n Index Cond: (a > 1000000)\n Filter: (b = 4)\n Rows Removed by Filter: 197780\n Buffers: shared hit=197575 read=661\n Planning Time: 0.091 ms\n Execution Time: 94.674 ms\n (9 rows)\n\nto\n\n QUERY PLAN\n -------------------------------------------------------------------\n Limit (cost=0.42..3662.15 rows=1 width=12)\n (actual time=13.663..13.667 rows=0 loops=1)\n Buffers: shared hit=544\n -> Index Scan using t_a_include_b on t\n (cost=0.42..3662.15 rows=1 width=12)\n (actual time=13.659..13.660 rows=0 loops=1)\n Index Cond: (a > 1000000)\n Index Filter: (b = 4)\n Rows Removed by Index Recheck: 197780\n Filter: (b = 4)\n Buffers: shared hit=544\n Planning Time: 0.105 ms\n Execution Time: 13.690 ms\n (10 rows)\n\nwhich is much closer to the \"best\" case:\n\n QUERY PLAN\n -------------------------------------------------------------------\n Limit (cost=0.42..4155.90 rows=1 width=12)\n (actual time=10.152..10.156 rows=0 loops=1)\n Buffers: shared read=543\n -> Index Scan using t_a_b_idx on t\n (cost=0.42..4155.90 rows=1 width=12)\n (actual time=10.148..10.150 rows=0 loops=1)\n Index Cond: ((a > 1000000) AND (b = 4))\n Buffers: shared read=543\n Planning Time: 0.089 ms\n Execution Time: 10.176 ms\n (7 rows)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 8 Jun 2023 19:34:22 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "Hello,\n\nI've cc'd Jeff Davis on this due to a conversation we had at PGCon\nabout applying filters on index tuples during index scans.\n\nI've also cc'd Andres Freund because I think this relates to his\nmusing in [1] that:\n> One thing I have been wondering around this is whether we should not have\n> split the code for IOS and plain indexscans...\n\nI think I remember Peter Geoghegan also wondering (I can't remember if\nthis was in conversation at PGCon about index skip scans or in a\nhackers thread) about how we compose these various index scan\noptimizations.\n\nTo be certain this is probably a thing to tackle as a follow-on to\nthis patch, but it does seem to me that what we are implicitly\nrealizing is that (unlike with bitmap scans, I think) it doesn't\nreally make a lot of conceptual sense to have index only scans be a\nseparate node from index scans. Instead it's likely better to consider\nit an optimization to index scans that can dynamically kick in when\nit's able to be of use. That would allow it to compose with e.g.\nprefetching in the aforelinked thread. At the very least we would need\npragmatic (e.g., cost of dynamically applying optimizations) rather\nthan conceptual reasons to argue they should continue to be separate.\n\nApologies for that lengthy preamble; on to the patch under discussion:\n\nOn Thu, Jun 8, 2023 at 1:34 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> I took a stab at this and implemented the trick with the VM - during\n> index scan, we also extract the filters that only need the indexed\n> attributes (just like in IOS). 
And then, during the execution we:\n>\n> 1) scan the index using the scan keys (as before)\n>\n> 2) if the heap page is all-visible, we check the new filters that can\n> be evaluated on the index tuple\n>\n> 3) fetch the heap tuple and evaluate the filters\n\nThanks for working on this; I'm excited about this class of work\n(along with index prefetching and other ideas I think there's a lot of\npotential for improving index scans).\n\n> This is pretty much exactly the same thing we do for IOS, so I don't see\n> why this would be incorrect while IOS is correct.\n>\n> This also adds \"Index Filter\" to explain output, to show which filters\n> are executed on the index tuple (at the moment the filters are a subset\n> of \"Filter\"), so if the index tuple matches we'll execute them again on\n> the heap tuple. I guess that could be fixed by having two \"filter\"\n> lists, depending on whether we were able to evaluate the index filters.\n\nGiven that we show index filters and heap filters separately it seems\nlike we might want to maintain separate instrumentation counts of how\nmany tuple were filtered by each set of filters.\n\n> Most of the patch is pretty mechanical - particularly the planning part\n> is about identifying filters that can be evaluated on the index tuple,\n> and that code was mostly shamelessly copied from index-only scan.\n>\n> The matching of filters to index is done in check_index_filter(), and\n> it's simpler than match_clause_to_indexcol() as it does not need to\n> consider operators etc. (I think). 
But maybe it should be careful about\n> other things, not sure.\n\nThis would end up requiring some refactoring of the existing index\nmatching code (or alternative caching on IndexOptInfo), but\nmatch_filter_to_index() calling check_index_filter() results in\nconstructs a bitmapset of index columns for every possible filter\nwhich seems wasteful (I recognize this is a bit of a proof-of-concept\nlevel v1).\n\n> The actual magic happens in IndexNext (nodeIndexscan.c). As mentioned\n> earlier, the idea is to check VM and evaluate the filters on the index\n> tuple if possible, similar to index-only scans. Except that we then have\n> to fetch the heap tuple. Unfortunately, this means the code can't use\n> index_getnext_slot() anymore. Perhaps we should invent a new variant\n> that'd allow evaluating the index filters in between.\n\nIt does seem there are some refactoring opportunities there.\n\n> With the patch applied, the query plan changes from:\n>\n> ...\n>\n> to\n>\n> QUERY PLAN\n> -------------------------------------------------------------------\n> Limit (cost=0.42..3662.15 rows=1 width=12)\n> (actual time=13.663..13.667 rows=0 loops=1)\n> Buffers: shared hit=544\n> -> Index Scan using t_a_include_b on t\n> (cost=0.42..3662.15 rows=1 width=12)\n> (actual time=13.659..13.660 rows=0 loops=1)\n> Index Cond: (a > 1000000)\n> Index Filter: (b = 4)\n> Rows Removed by Index Recheck: 197780\n> Filter: (b = 4)\n> Buffers: shared hit=544\n> Planning Time: 0.105 ms\n> Execution Time: 13.690 ms\n> (10 rows)\n>\n> ...\n\nI did also confirm that this properly identifies cases Jeff had\nmentioned to me like \"Index Filter: (((a * 2) > 500000) AND ((b % 10)\n= 4))\".\n\nI noticed also you still had questions/TODOs about handling index\nscans for join clauses.\n\nRegards,\nJames Coleman\n\n1: https://www.postgresql.org/message-id/20230609000600.syqy447e6metnvyj%40awork3.anarazel.de\n\n\n",
"msg_date": "Wed, 21 Jun 2023 08:45:19 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 6/21/23 14:45, James Coleman wrote:\n> Hello,\n> \n> I've cc'd Jeff Davis on this due to a conversation we had at PGCon\n> about applying filters on index tuples during index scans.\n> \n> I've also cc'd Andres Freund because I think this relates to his\n> musing in [1] that:\n>> One thing I have been wondering around this is whether we should not have\n>> split the code for IOS and plain indexscans...\n> \n> I think I remember Peter Geoghegan also wondering (I can't remember if\n> this was in conversation at PGCon about index skip scans or in a\n> hackers thread) about how we compose these various index scan\n> optimizations.\n> \n> To be certain this is probably a thing to tackle as a follow-on to\n> this patch, but it does seem to me that what we are implicitly\n> realizing is that (unlike with bitmap scans, I think) it doesn't\n> really make a lot of conceptual sense to have index only scans be a\n> separate node from index scans. Instead it's likely better to consider\n> it an optimization to index scans that can dynamically kick in when\n> it's able to be of use. That would allow it to compose with e.g.\n> prefetching in the aforelinked thread. At the very least we would need\n> pragmatic (e.g., cost of dynamically applying optimizations) rather\n> than conceptual reasons to argue they should continue to be separate.\n> \n\nI agree it seems a bit weird to have IOS as a separate node. In a way, I\nthink there are two dimensions for \"index-only\" scans - which pages can\nbe scanned like that, and which clauses can be evaluated with only the\nindex tuple. The current approach focuses on page visibility, but\nignores the other aspect entirely. 
Or more precisely, it disables IOS\nentirely as soon as there's a single condition requiring heap tuple.\n\nI agree it's probably better to see this as a single node with various\noptimizations that can be applied when possible / efficient (based on\nplanner and/or dynamically).\n\nI'm not sure I see a direct link to the prefetching patch, but it's true\nthat needs to deal with tids (instead of slots), just like IOS. So if\nthe node worked with tids, maybe the prefetching could be done at that\nlevel (which I now realize may be what Andres meant by doing prefetching\nin the executor).\n\n> Apologies for that lengthy preamble; on to the patch under discussion:\n> \n> On Thu, Jun 8, 2023 at 1:34 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hi,\n>>\n>> I took a stab at this and implemented the trick with the VM - during\n>> index scan, we also extract the filters that only need the indexed\n>> attributes (just like in IOS). And then, during the execution we:\n>>\n>> 1) scan the index using the scan keys (as before)\n>>\n>> 2) if the heap page is all-visible, we check the new filters that can\n>> be evaluated on the index tuple\n>>\n>> 3) fetch the heap tuple and evaluate the filters\n> \n> Thanks for working on this; I'm excited about this class of work\n> (along with index prefetching and other ideas I think there's a lot of\n> potential for improving index scans).\n> \n>> This is pretty much exactly the same thing we do for IOS, so I don't see\n>> why this would be incorrect while IOS is correct.\n>>\n>> This also adds \"Index Filter\" to explain output, to show which filters\n>> are executed on the index tuple (at the moment the filters are a subset\n>> of \"Filter\"), so if the index tuple matches we'll execute them again on\n>> the heap tuple. 
I guess that could be fixed by having two \"filter\"\n>> lists, depending on whether we were able to evaluate the index filters.\n> \n> Given that we show index filters and heap filters separately it seems\n> like we might want to maintain separate instrumentation counts of how\n> many tuple were filtered by each set of filters.\n> \n\nYeah, separate instrumentation counters would be useful. What I was\ntalking about was more about the conditions itself, because right now we\nre-evaluate the index-only clauses on the heap tuple.\n\nImagine an index on t(a) and a query that has WHERE (a = 1) AND (b = 2).\nthe patch splits this into two lists:\n\nindex-only clauses: (a=1)\nclauses: (a=1) AND (b=1)\n\nSo we evaluate (a=1) first, and then we fetch the heap tuple and check\n\"clauses\" again, which however includes the (a=1) again. For cheap\nclauses (or when (a=1) eliminated a lot of tuples using just the index),\nbut for expensive clauses it might hurt.\n\nIt's fixable, we'd just need to keep two versions of the \"clauses\" list,\none for IOS mode (when index-only clauses were checked) and a complete\none when we need to check all clauses.\n\n>> Most of the patch is pretty mechanical - particularly the planning part\n>> is about identifying filters that can be evaluated on the index tuple,\n>> and that code was mostly shamelessly copied from index-only scan.\n>>\n>> The matching of filters to index is done in check_index_filter(), and\n>> it's simpler than match_clause_to_indexcol() as it does not need to\n>> consider operators etc. (I think). 
But maybe it should be careful about\n>> other things, not sure.\n> \n> This would end up requiring some refactoring of the existing index\n> matching code (or alternative caching on IndexOptInfo), but\n> match_filter_to_index() calling check_index_filter() results in\n> constructs a bitmapset of index columns for every possible filter\n> which seems wasteful (I recognize this is a bit of a proof-of-concept\n> level v1).\n> \n\nProbably, I'm sure there's a couple other places where the current API\nwas a bit cumbersome and we could optimize.\n\n>> The actual magic happens in IndexNext (nodeIndexscan.c). As mentioned\n>> earlier, the idea is to check VM and evaluate the filters on the index\n>> tuple if possible, similar to index-only scans. Except that we then have\n>> to fetch the heap tuple. Unfortunately, this means the code can't use\n>> index_getnext_slot() anymore. Perhaps we should invent a new variant\n>> that'd allow evaluating the index filters in between.\n> \n> It does seem there are some refactoring opportunities there.\n> \n\nActually, I realized maybe we should switch this to index_getnext_tid()\nbecause of the prefetching patch. That would allow us to introduce a\n\"buffer\" of TIDs, populated by the index_getnext_tid(), and then do\nprefetching based on that. It's similar to what bitmap scans do, except\nthat intead of the tbm iterator we get items from index_getnext_tid().\n\nI haven't tried implementing this yet, but I kinda like the idea as it\nworks no matter what exactly the AM does (i.e. 
it'd work even for cases\nlike GiST with distance searches).\n\n\n>> With the patch applied, the query plan changes from:\n>>\n>> ...\n>>\n>> to\n>>\n>> QUERY PLAN\n>> -------------------------------------------------------------------\n>> Limit (cost=0.42..3662.15 rows=1 width=12)\n>> (actual time=13.663..13.667 rows=0 loops=1)\n>> Buffers: shared hit=544\n>> -> Index Scan using t_a_include_b on t\n>> (cost=0.42..3662.15 rows=1 width=12)\n>> (actual time=13.659..13.660 rows=0 loops=1)\n>> Index Cond: (a > 1000000)\n>> Index Filter: (b = 4)\n>> Rows Removed by Index Recheck: 197780\n>> Filter: (b = 4)\n>> Buffers: shared hit=544\n>> Planning Time: 0.105 ms\n>> Execution Time: 13.690 ms\n>> (10 rows)\n>>\n>> ...\n> \n> I did also confirm that this properly identifies cases Jeff had\n> mentioned to me like \"Index Filter: (((a * 2) > 500000) AND ((b % 10)\n> = 4))\".\n> \n\nGood!\n\n> I noticed also you still had questions/TODOs about handling index\n> scans for join clauses.\n> \n\nNot sure which questions/TODOs you refer to, but I don't recall any\nissues with join clauses. But maybe I just forgot.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 21 Jun 2023 17:28:05 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 11:28 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 6/21/23 14:45, James Coleman wrote:\n> > Hello,\n> >\n> > I've cc'd Jeff Davis on this due to a conversation we had at PGCon\n> > about applying filters on index tuples during index scans.\n> >\n> > I've also cc'd Andres Freund because I think this relates to his\n> > musing in [1] that:\n> >> One thing I have been wondering around this is whether we should not have\n> >> split the code for IOS and plain indexscans...\n> >\n> > I think I remember Peter Geoghegan also wondering (I can't remember if\n> > this was in conversation at PGCon about index skip scans or in a\n> > hackers thread) about how we compose these various index scan\n> > optimizations.\n> >\n> > To be certain this is probably a thing to tackle as a follow-on to\n> > this patch, but it does seem to me that what we are implicitly\n> > realizing is that (unlike with bitmap scans, I think) it doesn't\n> > really make a lot of conceptual sense to have index only scans be a\n> > separate node from index scans. Instead it's likely better to consider\n> > it an optimization to index scans that can dynamically kick in when\n> > it's able to be of use. That would allow it to compose with e.g.\n> > prefetching in the aforelinked thread. At the very least we would need\n> > pragmatic (e.g., cost of dynamically applying optimizations) rather\n> > than conceptual reasons to argue they should continue to be separate.\n> >\n>\n> I agree it seems a bit weird to have IOS as a separate node. In a way, I\n> think there are two dimensions for \"index-only\" scans - which pages can\n> be scanned like that, and which clauses can be evaluated with only the\n> index tuple. The current approach focuses on page visibility, but\n> ignores the other aspect entirely. 
Or more precisely, it disables IOS\n> entirely as soon as there's a single condition requiring heap tuple.\n>\n> I agree it's probably better to see this as a single node with various\n> optimizations that can be applied when possible / efficient (based on\n> planner and/or dynamically).\n>\n> I'm not sure I see a direct link to the prefetching patch, but it's true\n> that needs to deal with tids (instead of slots), just like IOS. So if\n> the node worked with tids, maybe the prefetching could be done at that\n> level (which I now realize may be what Andres meant by doing prefetching\n> in the executor).\n\nThe link to prefetching is that IOS (as a separate node) won't benefit\nfrom prefetching (I think) with your current prefetching patch (in the\ncase where the VM doesn't allow us to just use the index tuple),\nwhereas if the nodes were combined that would more naturally be\ncomposable.\n\n> > Apologies for that lengthy preamble; on to the patch under discussion:\n> >\n> > On Thu, Jun 8, 2023 at 1:34 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> I took a stab at this and implemented the trick with the VM - during\n> >> index scan, we also extract the filters that only need the indexed\n> >> attributes (just like in IOS). 
And then, during the execution we:\n> >>\n> >> 1) scan the index using the scan keys (as before)\n> >>\n> >> 2) if the heap page is all-visible, we check the new filters that can\n> >> be evaluated on the index tuple\n> >>\n> >> 3) fetch the heap tuple and evaluate the filters\n> >\n> > Thanks for working on this; I'm excited about this class of work\n> > (along with index prefetching and other ideas I think there's a lot of\n> > potential for improving index scans).\n> >\n> >> This is pretty much exactly the same thing we do for IOS, so I don't see\n> >> why this would be incorrect while IOS is correct.\n> >>\n> >> This also adds \"Index Filter\" to explain output, to show which filters\n> >> are executed on the index tuple (at the moment the filters are a subset\n> >> of \"Filter\"), so if the index tuple matches we'll execute them again on\n> >> the heap tuple. I guess that could be fixed by having two \"filter\"\n> >> lists, depending on whether we were able to evaluate the index filters.\n> >\n> > Given that we show index filters and heap filters separately it seems\n> > like we might want to maintain separate instrumentation counts of how\n> > many tuple were filtered by each set of filters.\n> >\n>\n> Yeah, separate instrumentation counters would be useful. What I was\n> talking about was more about the conditions itself, because right now we\n> re-evaluate the index-only clauses on the heap tuple.\n>\n> Imagine an index on t(a) and a query that has WHERE (a = 1) AND (b = 2).\n> the patch splits this into two lists:\n>\n> index-only clauses: (a=1)\n> clauses: (a=1) AND (b=1)\n>\n> So we evaluate (a=1) first, and then we fetch the heap tuple and check\n> \"clauses\" again, which however includes the (a=1) again. 
For cheap\n> clauses (or when (a=1) eliminated a lot of tuples using just the index),\n> but for expensive clauses it might hurt.\n>\n> It's fixable, we'd just need to keep two versions of the \"clauses\" list,\n> one for IOS mode (when index-only clauses were checked) and a complete\n> one when we need to check all clauses.\n\nIn some cases (where the VM doesn't allow us to use just the index\ntuple) we'd have to execute both lists against the heap tuple, right?\n\n> >> Most of the patch is pretty mechanical - particularly the planning part\n> >> is about identifying filters that can be evaluated on the index tuple,\n> >> and that code was mostly shamelessly copied from index-only scan.\n> >>\n> >> The matching of filters to index is done in check_index_filter(), and\n> >> it's simpler than match_clause_to_indexcol() as it does not need to\n> >> consider operators etc. (I think). But maybe it should be careful about\n> >> other things, not sure.\n> >\n> > This would end up requiring some refactoring of the existing index\n> > matching code (or alternative caching on IndexOptInfo), but\n> > match_filter_to_index() calling check_index_filter() results in\n> > constructs a bitmapset of index columns for every possible filter\n> > which seems wasteful (I recognize this is a bit of a proof-of-concept\n> > level v1).\n> >\n>\n> Probably, I'm sure there's a couple other places where the current API\n> was a bit cumbersome and we could optimize.\n>\n> >> The actual magic happens in IndexNext (nodeIndexscan.c). As mentioned\n> >> earlier, the idea is to check VM and evaluate the filters on the index\n> >> tuple if possible, similar to index-only scans. Except that we then have\n> >> to fetch the heap tuple. Unfortunately, this means the code can't use\n> >> index_getnext_slot() anymore. 
Perhaps we should invent a new variant\n> >> that'd allow evaluating the index filters in between.\n> >\n> > It does seem there are some refactoring opportunities there.\n> >\n>\n> Actually, I realized maybe we should switch this to index_getnext_tid()\n> because of the prefetching patch. That would allow us to introduce a\n> \"buffer\" of TIDs, populated by the index_getnext_tid(), and then do\n> prefetching based on that. It's similar to what bitmap scans do, except\n> that intead of the tbm iterator we get items from index_getnext_tid().\n>\n> I haven't tried implementing this yet, but I kinda like the idea as it\n> works no matter what exactly the AM does (i.e. it'd work even for cases\n> like GiST with distance searches).\n\nOh, interesting, I'll let you keep chewing on that then.\n\n> >> With the patch applied, the query plan changes from:\n> >>\n> >> ...\n> >>\n> >> to\n> >>\n> >> QUERY PLAN\n> >> -------------------------------------------------------------------\n> >> Limit (cost=0.42..3662.15 rows=1 width=12)\n> >> (actual time=13.663..13.667 rows=0 loops=1)\n> >> Buffers: shared hit=544\n> >> -> Index Scan using t_a_include_b on t\n> >> (cost=0.42..3662.15 rows=1 width=12)\n> >> (actual time=13.659..13.660 rows=0 loops=1)\n> >> Index Cond: (a > 1000000)\n> >> Index Filter: (b = 4)\n> >> Rows Removed by Index Recheck: 197780\n> >> Filter: (b = 4)\n> >> Buffers: shared hit=544\n> >> Planning Time: 0.105 ms\n> >> Execution Time: 13.690 ms\n> >> (10 rows)\n> >>\n> >> ...\n> >\n> > I did also confirm that this properly identifies cases Jeff had\n> > mentioned to me like \"Index Filter: (((a * 2) > 500000) AND ((b % 10)\n> > = 4))\".\n> >\n>\n> Good!\n>\n> > I noticed also you still had questions/TODOs about handling index\n> > scans for join clauses.\n> >\n>\n> Not sure which questions/TODOs you refer to, but I don't recall any\n> issues with join clauses. 
But maybe I just forgot.\n\nI was referring to the comment:\n\n+ * FIXME Maybe this should fill the filterset too?\n\nabove match_eclass_clauses_to_index()'s definition.\n\nRegards,\nJames\n\n\n",
"msg_date": "Wed, 21 Jun 2023 12:17:45 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 6/21/23 18:17, James Coleman wrote:\n> On Wed, Jun 21, 2023 at 11:28 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>>\n>>\n>> On 6/21/23 14:45, James Coleman wrote:\n>>> Hello,\n>>>\n>>> I've cc'd Jeff Davis on this due to a conversation we had at PGCon\n>>> about applying filters on index tuples during index scans.\n>>>\n>>> I've also cc'd Andres Freund because I think this relates to his\n>>> musing in [1] that:\n>>>> One thing I have been wondering around this is whether we should not have\n>>>> split the code for IOS and plain indexscans...\n>>>\n>>> I think I remember Peter Geoghegan also wondering (I can't remember if\n>>> this was in conversation at PGCon about index skip scans or in a\n>>> hackers thread) about how we compose these various index scan\n>>> optimizations.\n>>>\n>>> To be certain this is probably a thing to tackle as a follow-on to\n>>> this patch, but it does seem to me that what we are implicitly\n>>> realizing is that (unlike with bitmap scans, I think) it doesn't\n>>> really make a lot of conceptual sense to have index only scans be a\n>>> separate node from index scans. Instead it's likely better to consider\n>>> it an optimization to index scans that can dynamically kick in when\n>>> it's able to be of use. That would allow it to compose with e.g.\n>>> prefetching in the aforelinked thread. At the very least we would need\n>>> pragmatic (e.g., cost of dynamically applying optimizations) rather\n>>> than conceptual reasons to argue they should continue to be separate.\n>>>\n>>\n>> I agree it seems a bit weird to have IOS as a separate node. In a way, I\n>> think there are two dimensions for \"index-only\" scans - which pages can\n>> be scanned like that, and which clauses can be evaluated with only the\n>> index tuple. The current approach focuses on page visibility, but\n>> ignores the other aspect entirely. 
Or more precisely, it disables IOS\n>> entirely as soon as there's a single condition requiring heap tuple.\n>>\n>> I agree it's probably better to see this as a single node with various\n>> optimizations that can be applied when possible / efficient (based on\n>> planner and/or dynamically).\n>>\n>> I'm not sure I see a direct link to the prefetching patch, but it's true\n>> that needs to deal with tids (instead of slots), just like IOS. So if\n>> the node worked with tids, maybe the prefetching could be done at that\n>> level (which I now realize may be what Andres meant by doing prefetching\n>> in the executor).\n> \n> The link to prefetching is that IOS (as a separate node) won't benefit\n> from prefetching (I think) with your current prefetching patch (in the\n> case where the VM doesn't allow us to just use the index tuple),\n> whereas if the nodes were combined that would more naturally be\n> composable.\n> \n\nYeah, mostly. Although just unifying \"regular\" indexscans and IOS would\nnot allow prefetching for IOS.\n\nThe reason why the current design does not allow doing prefetching for\nIOS is that the prefetching happens deep in indexam.c, and down there we\ndon't know which TIDs are not from all-visible pages and would need\nprefetching. Which is another good reason to do the prefetching at the\nexecutor level, I believe.\n\n>>> Apologies for that lengthy preamble; on to the patch under discussion:\n>>>\n>>> On Thu, Jun 8, 2023 at 1:34 PM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> Hi,\n>>>>\n>>>> I took a stab at this and implemented the trick with the VM - during\n>>>> index scan, we also extract the filters that only need the indexed\n>>>> attributes (just like in IOS). 
And then, during the execution we:\n>>>>\n>>>> 1) scan the index using the scan keys (as before)\n>>>>\n>>>> 2) if the heap page is all-visible, we check the new filters that can\n>>>> be evaluated on the index tuple\n>>>>\n>>>> 3) fetch the heap tuple and evaluate the filters\n>>>\n>>> Thanks for working on this; I'm excited about this class of work\n>>> (along with index prefetching and other ideas I think there's a lot of\n>>> potential for improving index scans).\n>>>\n>>>> This is pretty much exactly the same thing we do for IOS, so I don't see\n>>>> why this would be incorrect while IOS is correct.\n>>>>\n>>>> This also adds \"Index Filter\" to explain output, to show which filters\n>>>> are executed on the index tuple (at the moment the filters are a subset\n>>>> of \"Filter\"), so if the index tuple matches we'll execute them again on\n>>>> the heap tuple. I guess that could be fixed by having two \"filter\"\n>>>> lists, depending on whether we were able to evaluate the index filters.\n>>>\n>>> Given that we show index filters and heap filters separately it seems\n>>> like we might want to maintain separate instrumentation counts of how\n>>> many tuple were filtered by each set of filters.\n>>>\n>>\n>> Yeah, separate instrumentation counters would be useful. What I was\n>> talking about was more about the conditions itself, because right now we\n>> re-evaluate the index-only clauses on the heap tuple.\n>>\n>> Imagine an index on t(a) and a query that has WHERE (a = 1) AND (b = 2).\n>> the patch splits this into two lists:\n>>\n>> index-only clauses: (a=1)\n>> clauses: (a=1) AND (b=1)\n>>\n>> So we evaluate (a=1) first, and then we fetch the heap tuple and check\n>> \"clauses\" again, which however includes the (a=1) again. 
For cheap\n>> clauses (or when (a=1) eliminated a lot of tuples using just the index),\n>> but for expensive clauses it might hurt.\n>>\n>> It's fixable, we'd just need to keep two versions of the \"clauses\" list,\n>> one for IOS mode (when index-only clauses were checked) and a complete\n>> one when we need to check all clauses.\n> \n> In some cases (where the VM doesn't allow us to use just the index\n> tuple) we'd have to execute both lists against the heap tuple, right?\n> \n\nNot quite. I suspect you imagine we'd have two lists\n\nL1: (a=1)\nL2: (b=1)\n\nand that for all-visible pages we'd check L1 on index, and then maybe L2\non heap. And for non-all-visible pages we'd check both on heap.\n\nBut that doesn't work, because L1 has references to attnums in the index\ntuple, while L2 has attnums to heap.\n\nSo we'd need\n\nL1: (a=1) -> against index tuple\nL2: (b=1) -> against heap tuple\nL3: (a=1) AND (b=1) -> against heap tuple\n\nAnd for non-all-visible pages we'd only use L3. (I wonder if we could\ncheck if the tuple is visible and then go back and check L1 on index\ntuple, but I doubt it'd be really more efficient.)\n\n\n>>>> Most of the patch is pretty mechanical - particularly the planning part\n>>>> is about identifying filters that can be evaluated on the index tuple,\n>>>> and that code was mostly shamelessly copied from index-only scan.\n>>>>\n>>>> The matching of filters to index is done in check_index_filter(), and\n>>>> it's simpler than match_clause_to_indexcol() as it does not need to\n>>>> consider operators etc. (I think). 
But maybe it should be careful about\n>>>> other things, not sure.\n>>>\n>>> This would end up requiring some refactoring of the existing index\n>>> matching code (or alternative caching on IndexOptInfo), but\n>>> match_filter_to_index() calling check_index_filter() results in\n>>> constructs a bitmapset of index columns for every possible filter\n>>> which seems wasteful (I recognize this is a bit of a proof-of-concept\n>>> level v1).\n>>>\n>>\n>> Probably, I'm sure there's a couple other places where the current API\n>> was a bit cumbersome and we could optimize.\n>>\n>>>> The actual magic happens in IndexNext (nodeIndexscan.c). As mentioned\n>>>> earlier, the idea is to check VM and evaluate the filters on the index\n>>>> tuple if possible, similar to index-only scans. Except that we then have\n>>>> to fetch the heap tuple. Unfortunately, this means the code can't use\n>>>> index_getnext_slot() anymore. Perhaps we should invent a new variant\n>>>> that'd allow evaluating the index filters in between.\n>>>\n>>> It does seem there are some refactoring opportunities there.\n>>>\n>>\n>> Actually, I realized maybe we should switch this to index_getnext_tid()\n>> because of the prefetching patch. That would allow us to introduce a\n>> \"buffer\" of TIDs, populated by the index_getnext_tid(), and then do\n>> prefetching based on that. It's similar to what bitmap scans do, except\n>> that intead of the tbm iterator we get items from index_getnext_tid().\n>>\n>> I haven't tried implementing this yet, but I kinda like the idea as it\n>> works no matter what exactly the AM does (i.e. 
it'd work even for cases\n>> like GiST with distance searches).\n> \n> Oh, interesting, I'll let you keep chewing on that then.\n> \n\nCool!\n\n>>>> With the patch applied, the query plan changes from:\n>>>>\n>>>> ...\n>>>>\n>>>> to\n>>>>\n>>>> QUERY PLAN\n>>>> -------------------------------------------------------------------\n>>>> Limit (cost=0.42..3662.15 rows=1 width=12)\n>>>> (actual time=13.663..13.667 rows=0 loops=1)\n>>>> Buffers: shared hit=544\n>>>> -> Index Scan using t_a_include_b on t\n>>>> (cost=0.42..3662.15 rows=1 width=12)\n>>>> (actual time=13.659..13.660 rows=0 loops=1)\n>>>> Index Cond: (a > 1000000)\n>>>> Index Filter: (b = 4)\n>>>> Rows Removed by Index Recheck: 197780\n>>>> Filter: (b = 4)\n>>>> Buffers: shared hit=544\n>>>> Planning Time: 0.105 ms\n>>>> Execution Time: 13.690 ms\n>>>> (10 rows)\n>>>>\n>>>> ...\n>>>\n>>> I did also confirm that this properly identifies cases Jeff had\n>>> mentioned to me like \"Index Filter: (((a * 2) > 500000) AND ((b % 10)\n>>> = 4))\".\n>>>\n>>\n>> Good!\n>>\n>>> I noticed also you still had questions/TODOs about handling index\n>>> scans for join clauses.\n>>>\n>>\n>> Not sure which questions/TODOs you refer to, but I don't recall any\n>> issues with join clauses. But maybe I just forgot.\n> \n> I was referring to the comment:\n> \n> + * FIXME Maybe this should fill the filterset too?\n> \n> above match_eclass_clauses_to_index()'s definition.\n> \n\nAh, yeah. I'm sure there are some loose ends in the matching code.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 21 Jun 2023 21:20:15 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "Hi,\n\nhere's a minor update of the patch, rebased to a current master and\naddressing a couple issues reported by cfbot. Most are minor tweaks, but\nthe last one (4) is a somewhat more serious issue.\n\n\n1) \"tid\" might have not been initialized in the IndexNext loop\n\n\n2) add enable_indexonlyfilter GUC to postgresql.conf.sample (which is\nchecked by one regression test)\n\n\n3) accepts a couple plan changes, either switching to index scan (thanks\nto the costing changes) or showing the extra index-only filters in the\nexplain output. The plan changes seem reasonable.\n\n\n4) problems with opcintype != opckeytype (name_ops)\n\nWhile running the tests, I ran into an issue with name_ops, causing\nfailures for \\dT and other catalog queries. The root cause is that\nname_ops has opcintype = name, but opckeytype = cstring. The index-only\nclauses are copied from the table, with Vars mutated to reference the\nINDEX_VAR. But the type is not, so when we get to evaluating the\nexpressions, CheckVarSlotCompatibility() fails because the Var has name,\nbut the iss_IndexSlot (created with index tuple descriptor) has cstring.\n\nThe rebased patch fixes this by explicitly adjusting types of the\ndescriptor in ExecInitIndexScan().\n\nHowever, maybe this indicates the very idea of evaluating expressions\nusing slot with index tuple descriptor is misguided. This made me look\nat regular index-only scan (nodeIndexonlyscan.c), and that uses a slot\nwith the \"table\" structure, and instead of evaluating the expression on\nthe index index tuple it expands the index tuple into the table slot.\nWhich is what StoreIndexTuple() does.\n\nSo maybe this should do what IOS does - expand the index tuple into\n\"table slot\" and evaluate the expression on that. 
That'd also make the\nINDEX_VAR tweak in createplan.c unnecessary - in fact, that seemed a bit\nstrange anyway, so ditching fix_indexfilter_mutator would be good.\n\n\nHowever, I wonder if the stuff StoreIndexTuple() is doing is actually\nsafe. I mean, it's essentially copying values from the index tuple into\nthe slot, ignoring the type difference. What if opcintype and opckeytype\nare not binary compatible? Is it possible to define an opclass with such\nopckeytype? I haven't noticed any check enforcing such compatibility ...\n\nAlso, it's a bit confusing that the SGML docs say opckeytype is not supported\nfor btree, but name_ops clearly does that. Later I found it's actually\nmentioned in pg_opclass.dat as a hack, to save space in catalogs.\n\nBut then btree also has amstorage=false ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 15 Jul 2023 16:20:28 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On 7/15/23 16:20, Tomas Vondra wrote:\n>\n> ...\n> \n> 4) problems with opcintype != opckeytype (name_ops)\n> \n> While running the tests, I ran into an issue with name_ops, causing\n> failures for \\dT and other catalog queries. The root cause is that\n> name_ops has opcintype = name, but opckeytype = cstring. The index-only\n> clauses are copied from the table, with Vars mutated to reference the\n> INDEX_VAR. But the type is not, so when we get to evaluating the\n> expressions, CheckVarSlotCompatibility() fails because the Var has name,\n> but the iss_IndexSlot (created with index tuple descriptor) has cstring.\n> \n> The rebased patch fixes this by explicitly adjusting types of the\n> descriptor in ExecInitIndexScan().\n> \n> However, maybe this indicates the very idea of evaluating expressions\n> using slot with index tuple descriptor is misguided. This made me look\n> at regular index-only scan (nodeIndexonlyscan.c), and that uses a slot\n> with the \"table\" structure, and instead of evaluating the expression on\n> the index index tuple it expands the index tuple into the table slot.\n> Which is what StoreIndexTuple() does.\n> \n> So maybe this should do what IOS does - expand the index tuple into\n> \"table slot\" and evaluate the expression on that. That'd also make the\n> INDEX_VAR tweak in createplan.c unnecessary - in fact, that seemed a bit\n> strange anyway, so ditching fix_indexfilter_mutator would be good.\n> \n\nThis kept bothering me, so I looked at it today, and reworked it to use\nthe IOS approach. It's a bit more complicated because for IOS both slots\nhave the same overall structure, except for the data types. But for\nregular index scans that's not the case - the code has to \"expand\" the\nindex tuple into the larger \"table slot\". 
This works, and in general I\nthink the result is much cleaner - in particular, it means we don't need\nto switch the Var nodes to reference the INDEX_VAR.\n\nWhile experimenting with this I realized again that we're not matching\nexpressions to IOS. So if you have an expression index on (a+b), that\ncan't be used even if the query only uses this particular expression.\nThe same limitation applies to index-only filters, of course. It's not\nthe fault of this patch, but perhaps it'd be an interesting improvement.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 16 Jul 2023 22:36:35 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "Hi,\n\n\nOn Sun, 2023-07-16 at 22:36 +0200, Tomas Vondra wrote:\n> This kept bothering me, so I looked at it today, and reworked it to\n> use\n> the IOS approach.\n\nInitial comments on patch 20230716:\n\n* check_index_filter() alredy looks at \"canreturn\", which should mean\nthat you don't need to later check for opcintype<>opckeytype. But\nthere's a comment in IndexNext() indicating that's a problem -- under\nwhat conditions is it a problem?\n\n* (may be a matter of taste) Recomputing the bitmapset from the\ncanreturn array in check_index_filter() for each call seems awkward. I\nwould just iterate through the bitmapset and check that all are set\ntrue in the amcanreturn array.\n\n* There are some tiny functions that don't seem to add much value or\nhave slightly weird APIs. For instance, match_filter_to_index() could\nprobably just return a boolean, and maybe doesn't even need to exist\nbecause it's such a thin wrapper over check_index_filter(). Similarly\nfor fix_indexfilter_clause(). I'm OK with tiny functions even if the\nonly value is a comment, but I didn't find these particularly helpful.\n\n* fix_indexfilter_references() could use a better comment. Perhaps\nrefactor so that you can share code with fix_indexqual_references()?\n\n* it looks like index filters are duplicated with ordinary filters, is\nthere a reason for that?\n\n* I'm confused about the relationship of an IOS to an index filter. It\nseems like the index filter only works for an ordinary index scan? Why\nis that?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 18 Jul 2023 13:21:31 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On 7/18/23 22:21, Jeff Davis wrote:\n> Hi,\n> \n> \n> On Sun, 2023-07-16 at 22:36 +0200, Tomas Vondra wrote:\n>> This kept bothering me, so I looked at it today, and reworked it to\n>> use\n>> the IOS approach.\n> \n> Initial comments on patch 20230716:\n> \n> * check_index_filter() alredy looks at \"canreturn\", which should mean\n> that you don't need to later check for opcintype<>opckeytype. But\n> there's a comment in IndexNext() indicating that's a problem -- under\n> what conditions is it a problem?\n> \n\nThe comment in IndexNext() is a bit obsolete. There was an issue when\nusing a slot matching the index, because then StoreIndexTuple might fail\nbecause of type mismatch (as explained in [1]). But that's no longer an\nissue, thanks to switching to the table slot in the last patch version.\n\n> * (may be a matter of taste) Recomputing the bitmapset from the\n> canreturn array in check_index_filter() for each call seems awkward. I\n> would just iterate through the bitmapset and check that all are set\n> true in the amcanreturn array.\n> \n\ncheck_index_filter() is a simplified version of check_index_only(), and\nthat calculates the bitmap this way.\n\n> * There are some tiny functions that don't seem to add much value or\n> have slightly weird APIs. For instance, match_filter_to_index() could\n> probably just return a boolean, and maybe doesn't even need to exist\n> because it's such a thin wrapper over check_index_filter(). Similarly\n> for fix_indexfilter_clause(). I'm OK with tiny functions even if the\n> only value is a comment, but I didn't find these particularly helpful.\n> \n\nYes, I agree some of this could be simplified. I only did the bare\nminimum to get this bit working.\n\n> * fix_indexfilter_references() could use a better comment. 
Perhaps\n> refactor so that you can share code with fix_indexqual_references()?\n> \n\nI don't think this can share code with fix_indexqual_references(),\nbecause that changes the Var nodes to point to the index (because it\nthen gets translated to scan keys). The filters don't need that.\n\n> * it looks like index filters are duplicated with ordinary filters, is\n> there a reason for that?\n> \n\nGood point. That used to be necessary, because the index-only filters\ncan be evaluated only on all-visible pages, and filters had Vars\nreferencing the index tuple. We'd have to maintain another list of\nclauses, which didn't seem worth it.\n\nBut now that the filters reference the heap tuple, we could simply leave\nthem out of the second list.\n\n> * I'm confused about the relationship of an IOS to an index filter. It\n> seems like the index filter only works for an ordinary index scan? Why\n> is that?\n\nWhat would it do for IOS? IOS evaluates all filters on the index tuple,\nand it does not need the heap tuple at all (assuming allvisible=true).\n\nIndex-only filters try to do something like that even for regular index\nscans, by evaluating as many expressions as possible on the index tuple, but may\nstill require fetching the heap tuple in the end.\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/97985ef2-ef9b-e62e-6fd4-e00a573d4ead@enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 19 Jul 2023 00:36:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Wed, 2023-07-19 at 00:36 +0200, Tomas Vondra wrote:\n> > * I'm confused about the relationship of an IOS to an index filter.\n> > It\n> > seems like the index filter only works for an ordinary index scan?\n> > Why\n> > is that?\n> \n> What would it do for IOS?\n\nThe way it's presented is slightly confusing. If you have table x with\nand index on column i, then:\n\n EXPLAIN (ANALYZE, BUFFERS)\n SELECT i, j FROM x WHERE i = 7 and (i % 1000 = 7);\n\n Index Scan using x_idx on x (cost=0.42..8.45 rows=1 width=8)\n(actual time=0.094..0.098 rows=1 loops=1)\n Index Cond: (i = 7)\n Index Filter: ((i % 1000) = 7)\n\nBut if you remove \"j\" from the target list, you get:\n\n EXPLAIN (ANALYZE, BUFFERS)\n SELECT i FROM x WHERE i = 7 and (i % 1000 = 7);\n\n Index Only Scan using x_idx on x (cost=0.42..4.45 rows=1 width=4)\n(actual time=0.085..0.088 rows=1 loops=1)\n Index Cond: (i = 7)\n Filter: ((i % 1000) = 7)\n\nThe confused me at first because the \"Filter\" in the second plan means\nthe same thing as the \"Index Filter\" in the first plan. Should we call\nthe filter in an IOS an \"Index Filter\" too? Or is that redundant?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 18 Jul 2023 16:22:17 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 7/19/23 01:22, Jeff Davis wrote:\n> On Wed, 2023-07-19 at 00:36 +0200, Tomas Vondra wrote:\n>>> * I'm confused about the relationship of an IOS to an index filter.\n>>> It\n>>> seems like the index filter only works for an ordinary index scan?\n>>> Why\n>>> is that?\n>>\n>> What would it do for IOS?\n> \n> The way it's presented is slightly confusing. If you have table x with\n> and index on column i, then:\n> \n> EXPLAIN (ANALYZE, BUFFERS)\n> SELECT i, j FROM x WHERE i = 7 and (i % 1000 = 7);\n> \n> Index Scan using x_idx on x (cost=0.42..8.45 rows=1 width=8)\n> (actual time=0.094..0.098 rows=1 loops=1)\n> Index Cond: (i = 7)\n> Index Filter: ((i % 1000) = 7)\n> \n> But if you remove \"j\" from the target list, you get:\n> \n> EXPLAIN (ANALYZE, BUFFERS)\n> SELECT i FROM x WHERE i = 7 and (i % 1000 = 7);\n> \n> Index Only Scan using x_idx on x (cost=0.42..4.45 rows=1 width=4)\n> (actual time=0.085..0.088 rows=1 loops=1)\n> Index Cond: (i = 7)\n> Filter: ((i % 1000) = 7)\n> \n> The confused me at first because the \"Filter\" in the second plan means\n> the same thing as the \"Index Filter\" in the first plan. Should we call\n> the filter in an IOS an \"Index Filter\" too? Or is that redundant?\n> \n\nI agree the naming in explain is a bit confusing.\n\nI wonder if Andres was right (in the index prefetch thread) that\nsplitting regular index scans and index-only scans may not be ideal. In\na way, this patch moves those nodes closer, both in capability and code\n(because now both use index_getnext_tid etc.).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 19 Jul 2023 11:16:44 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Wed, 2023-07-19 at 11:16 +0200, Tomas Vondra wrote:\n> I wonder if Andres was right (in the index prefetch thread) that\n> splitting regular index scans and index-only scans may not be ideal.\n> In\n> a way, this patch moves those nodes closer, both in capability and\n> code\n> (because now both use index_getnext_tid etc.).\n\nYeah. I could also imagine decomposing the index scan node into more\npieces, but I don't think it would work out to be a clean data flow.\nEither way, probably out of scope for this patch.\n\nFor this patch I think we should just tweak the EXPLAIN output so that\nit's a little more clear what parts are index-only (at least if VM bit\nis set) and what parts need to go to the heap.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 19 Jul 2023 10:17:12 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On 7/19/23 19:17, Jeff Davis wrote:\n> On Wed, 2023-07-19 at 11:16 +0200, Tomas Vondra wrote:\n>> I wonder if Andres was right (in the index prefetch thread) that\n>> splitting regular index scans and index-only scans may not be ideal.\n>> In\n>> a way, this patch moves those nodes closer, both in capability and\n>> code\n>> (because now both use index_getnext_tid etc.).\n> \n> Yeah. I could also imagine decomposing the index scan node into more\n> pieces, but I don't think it would work out to be a clean data flow.\n> Either way, probably out of scope for this patch.\n> \n\nOK\n\n> For this patch I think we should just tweak the EXPLAIN output so that\n> it's a little more clear what parts are index-only (at least if VM bit\n> is set) and what parts need to go to the heap.\n> \n\nMakes sense, I also need to think about maybe not having duplicate\nclauses in the two lists. What annoys me on that it partially prevents\nthe cost-based reordering done by order_qual_clauses(). So maybe we\nshould have three lists ... Also, some of the expressions count be\nfairly expensive.\n\nBTW could you double-check how I expanded the index_getnext_slot()? I\nrecall I wasn't entirely confident the result is correct, and I wanted\nto try getting rid of the \"while (true)\" loop.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 19 Jul 2023 20:03:43 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Wed, 2023-07-19 at 20:03 +0200, Tomas Vondra wrote:\n> Makes sense, I also need to think about maybe not having duplicate\n> clauses in the two lists. What annoys me on that it partially\n> prevents\n> the cost-based reordering done by order_qual_clauses(). So maybe we\n> should have three lists ... Also, some of the expressions count be\n> fairly expensive.\n\nCan we just calculate the costs of the pushdown and do it when it's a\nwin? If the random_page_cost savings exceed the costs from evaluating\nthe clause earlier, then push down.\n\n> BTW could you double-check how I expanded the index_getnext_slot()? I\n> recall I wasn't entirely confident the result is correct, and I\n> wanted\n> to try getting rid of the \"while (true)\" loop.\n\nI suggest refactoring slightly to have the two loops in different\nfunctions (rather than nested loops in the same function) to make\ncontrol flow a bit more clear. I'm not sure if the new function for the\ninner loop should be defined in nodeIndexscan.c or indexam.c; I suppose\nit depends on how clean the signature looks.\n\nAlso please expand the tests a bit to show more EXPLAIN plans that\nillustrate the different cases.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 19 Jul 2023 13:46:22 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 1:46 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Wed, 2023-07-19 at 20:03 +0200, Tomas Vondra wrote:\n> > Makes sense, I also need to think about maybe not having duplicate\n> > clauses in the two lists. What annoys me on that it partially\n> > prevents\n> > the cost-based reordering done by order_qual_clauses(). So maybe we\n> > should have three lists ... Also, some of the expressions count be\n> > fairly expensive.\n>\n> Can we just calculate the costs of the pushdown and do it when it's a\n> win? If the random_page_cost savings exceed the costs from evaluating\n> the clause earlier, then push down.\n\nMy patch that teaches nbtree to execute ScalarArrayOps intelligently\n(by dynamically choosing to not re-descend the btree to perform\nanother \"primitive index scan\" when the data we need is located on the\nsame leaf page as the current ScalarArrayOps arrays) took an\ninteresting turn recently -- one that seems related.\n\nI found that certain kinds of queries are dramatically faster once you\nteach the optimizer to accept that multi-column ScalarArrayOps can be\ntrusted to return tuples in logical/index order, at least under some\ncircumstances. For example:\n\npg@regression:5555 [583930]=# create index order_by_saop on\ntenk1(two,four,twenty);\nCREATE INDEX\n\npg@regression:5555 [583930]=# EXPLAIN (ANALYZE, BUFFERS)\nselect ctid, thousand from tenk1\nwhere two in (0,1) and four in (1,2) and twenty in (1,2)\norder by two, four, twenty limit 20;\n\nThis shows \"Buffers: shared hit=1377\" on HEAD, versus \"Buffers: shared\nhit=13\" with my patch. All because we can safely terminate the scan\nearly now. 
The vast majority of the buffer hits the patch will avoid\nare against heap pages, even though I started out with the intention\nof eliminating unnecessary repeat index page accesses.\n\nNote that build_index_paths() currently refuses to allow SAOP clauses\nto be recognized as ordered with a multi-column index and a query with\na clause for more than the leading column -- that is something that\nthe patch needs to address (to get this particular improvement, at\nleast). Allowing such an index path to have useful pathkeys is\ntypically safe (in the sense that it won't lead to wrong answers to\nqueries), but we still make a conservative assumption that they can\nlead to wrong answers. There are comments about \"equality constraints\"\nthat describe the restrictions right now.\n\nBut it's not just the question of basic correctness -- the optimizer\nis very hesitant to use multi-column SAOPs, even in cases that don't\ncare about ordering. So it's also (I think, implicitly) a question of\n*risk*. The risk of getting very inefficient SAOP execution in nbtree,\nwhen it turns out that a huge number of \"primitive index scans\" are\nneeded. But, if nbtree is taught to \"coalesce together\" primitive\nindex scans at runtime, that risk goes way down.\n\nAnyway, this seems related to what you're talking about because the\nrelationship between selectivity and ordering seems particularly\nimportant in this context. And because it suggests that there is at\nleast some scope for adding \"run time insurance\" to the executor,\nwhich is valuable in the optimizer if it bounds the potential\ndownside. 
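To put a number on the coalescing idea mentioned above, here is a deliberately crude toy model (Python, with an invented structure; the real nbtree machinery is nothing this simple): leaf pages are represented only by their high keys, and we count how many tree descents a ScalarArrayOp scan performs with and without runtime coalescing.

```python
# Toy model (hypothetical, not nbtree code): count btree descents for
# a ScalarArrayOp scan. leaf_bounds[i] is the highest key on sorted
# leaf page i. Without coalescing we re-descend once per array value;
# with coalescing we descend again only when the next value falls off
# the current leaf page.
import bisect

def descents(leaf_bounds, values, coalesce):
    count, cur_leaf = 0, None
    for v in sorted(values):
        leaf = bisect.bisect_left(leaf_bounds, v)   # page that holds v
        if not coalesce or leaf != cur_leaf:
            count += 1
            cur_leaf = leaf
    return count

if __name__ == '__main__':
    leaves = [100, 200, 300, 400]                  # four leaf pages
    vals = [10, 20, 30, 150, 160, 350]             # SAOP array values
    print(descents(leaves, vals, coalesce=False))  # 6: one per value
    print(descents(leaves, vals, coalesce=True))   # 3: one per leaf touched
```

The gap between the two counts is the kind of downside that shrinks once the scan can make the coalescing decision at runtime rather than at plan time.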
If you can be practically certain that there is essentially\nzero risk of serious problems when the costing miscalculates (for a\nlimited subset of cases), then life might be a lot easier -- clearly\nwe should be biased in one particular direction with a case that has\nthat kind of general profile.\n\nMy current understanding of the optimizer side of things --\nparticularly things like costing for \"filter quals/pqquals\" versus\ncosting for \"true index quals\" -- is rather basic. That will have to\nchange. Curious to hear your thoughts (if any) on how what you're\ndiscussing may relate to what I need to do with my patch. Right now my\npatch assumes that making SAOP clauses into proper index quals (that\nusually preserve index ordering) is an unalloyed good (when safe!).\nThis assumption is approximately true on average, as far as I can\ntell. But it's probably quite untrue in various specific cases, that\nsomebody is bound to care about.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 19 Jul 2023 14:38:25 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 7/19/23 23:38, Peter Geoghegan wrote:\n> On Wed, Jul 19, 2023 at 1:46 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>> On Wed, 2023-07-19 at 20:03 +0200, Tomas Vondra wrote:\n>>> Makes sense, I also need to think about maybe not having duplicate\n>>> clauses in the two lists. What annoys me on that it partially\n>>> prevents\n>>> the cost-based reordering done by order_qual_clauses(). So maybe we\n>>> should have three lists ... Also, some of the expressions count be\n>>> fairly expensive.\n>>\n>> Can we just calculate the costs of the pushdown and do it when it's a\n>> win? If the random_page_cost savings exceed the costs from evaluating\n>> the clause earlier, then push down.\n> \n> My patch that teaches nbtree to execute ScalarArrayOps intelligently\n> (by dynamically choosing to not re-descend the btree to perform\n> another \"primitive index scan\" when the data we need is located on the\n> same leaf page as the current ScalarArrayOps arrays) took an\n> interesting turn recently -- one that seems related.\n> \n> I found that certain kinds of queries are dramatically faster once you\n> teach the optimizer to accept that multi-column ScalarArrayOps can be\n> trusted to return tuples in logical/index order, at least under some\n> circumstances. For example:\n> \n> pg@regression:5555 [583930]=# create index order_by_saop on\n> tenk1(two,four,twenty);\n> CREATE INDEX\n> \n> pg@regression:5555 [583930]=# EXPLAIN (ANALYZE, BUFFERS)\n> select ctid, thousand from tenk1\n> where two in (0,1) and four in (1,2) and twenty in (1,2)\n> order by two, four, twenty limit 20;\n> \n> This shows \"Buffers: shared hit=1377\" on HEAD, versus \"Buffers: shared\n> hit=13\" with my patch. All because we can safely terminate the scan\n> early now. 
The vast majority of the buffer hits the patch will avoid\n> are against heap pages, even though I started out with the intention\n> of eliminating unnecessary repeat index page accesses.\n> \n> Note that build_index_paths() currently refuses to allow SAOP clauses\n> to be recognized as ordered with a multi-column index and a query with\n> a clause for more than the leading column -- that is something that\n> the patch needs to address (to get this particular improvement, at\n> least). Allowing such an index path to have useful pathkeys is\n> typically safe (in the sense that it won't lead to wrong answers to\n> queries), but we still make a conservative assumption that they can\n> lead to wrong answers. There are comments about \"equality constraints\"\n> that describe the restrictions right now.\n> \n> But it's not just the question of basic correctness -- the optimizer\n> is very hesitant to use multi-column SAOPs, even in cases that don't\n> care about ordering. So it's also (I think, implicitly) a question of\n> *risk*. The risk of getting very inefficient SAOP execution in nbtree,\n> when it turns out that a huge number of \"primitive index scans\" are\n> needed. But, if nbtree is taught to \"coalesce together\" primitive\n> index scans at runtime, that risk goes way down.\n> \n> Anyway, this seems related to what you're talking about because the\n> relationship between selectivity and ordering seems particularly\n> important in this context. And because it suggests that there is at\n> least some scope for adding \"run time insurance\" to the executor,\n> which is valuable in the optimizer if it bounds the potential\n> downside. 
If you can be practically certain that there is essentially\n> zero risk of serious problems when the costing miscalculates (for a\n> limited subset of cases), then life might be a lot easier -- clearly\n> we should be biased in one particular direction with a case that has\n> that kind of general profile.\n> \n> My current understanding of the optimizer side of things --\n> particularly things like costing for \"filter quals/pqquals\" versus\n> costing for \"true index quals\" -- is rather basic. That will have to\n> change. Curious to hear your thoughts (if any) on how what you're\n> discussing may relate to what I need to do with my patch. Right now my\n> patch assumes that making SAOP clauses into proper index quals (that\n> usually preserve index ordering) is an unalloyed good (when safe!).\n> This assumption is approximately true on average, as far as I can\n> tell. But it's probably quite untrue in various specific cases, that\n> somebody is bound to care about.\n> \n\nI think the SAOP patch may need to be much more careful about this, but\nfor this patch it's simpler because it doesn't really change any of the\nindex internals, or the indexscan in general.\n\nIf I simplify that a bit, we're just reordering the clauses in a way to\nmaybe eliminate the heap fetch. The main risk seems to be that this will\nforce an expensive qual to the front of the list, just because it can be\nevaluated on the index tuple. But the difference would need to be worse\nthan what we save by not doing the I/O - considering how expensive the\nI/O is, that seems unlikely. Could happen for expensive quals that don't\nreally eliminate many rows, I guess.\n\nAnyway, I see this as extension of what order_qual_clauses() does. The\nmain issue is that even order_qual_clauses() admits the estimates are\nsomewhat unreliable, so we can't expect to make perfect decisions.\n\n\nFWIW: While reading order_qual_clauses() I realized the code may need to\nbe more careful about leakproof stuff. 
Essentially, if any of the\nnon-index clauses is a security qual, we may only evaluate leakproof quals on the index tuple.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 20 Jul 2023 13:35:40 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 4:35 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> I think the SAOP patch may need to be much more careful about this, but\n> for this patch it's simpler because it doesn't really change any of the\n> index internals, or the indexscan in general.\n\nIt's true that the SAOP patch needs relatively complicated\ninfrastructure to assess whether or not the technique is safe with a\ngiven set of quals. You cannot safely get an ordered index scan for\nsomething like \"select * from table where a < 5 and b in (1,2,3) order\nby a, b\". With or without my patch. My patch isn't really changing all\nthat much about the behavior in nbtree, as these things go. It's\n*surprising* how little has to change about the high level structure\nof index scans, in fact.\n\n(Actually, I'm glossing over a lot. The MDAM paper describes\ntechniques that'd make even the really challenging cases safe, through\na process of converting quals from conjunctive normal form into\ndisjunctive normal form. This is more or less the form that the state\nmachine implemented by _bt_advance_array_keys() produces already,\ntoday. But even with all this practically all of the heavy lifting\ntakes place before the index scan even begins, during preprocessing --\nso you still require surprisingly few changes to index scans\nthemselves.)\n\n> If I simplify that a bit, we're just reordering the clauses in a way to\n> maybe eliminate the heap fetch. The main risk seems to be that this will\n> force an expensive qual to the front of the list, just because it can be\n> evaluated on the index tuple.\n\nMy example query might have been poorly chosen, because it involved a\nlimit. What I'm thinking of is more general than that.\n\n> But the difference would need to be worse\n> than what we save by not doing the I/O - considering how expensive the\n> I/O is, that seems unlikely. 
Could happen for expensive quals that don't\n> really eliminate many rows, I guess.\n\nThat sounds like the same principle that I was talking about. I think\nthat it can be pushed quite far, though. I am mostly talking about the\nworst case, and it seems like you might not be.\n\nYou can easily construct examples where some kind of skew causes big\nproblems with a multi-column index. I'm thinking of indexes whose\nleading columns are low cardinality, and queries where including the\nsecond column as an index qual looks kind of marginal to the\noptimizer. Each grouping represented in the most significant index\ncolumn might easily have its own unique characteristics; the\ndistribution of values in subsequent columns might be quite\ninconsistent across each grouping, in whatever way.\n\nSince nothing stops a qual on a lower order column having a wildly\ndifferent selectivity for one particular grouping, it might not make\nsense to say that a problem in this area is due to a bad selectivity\nestimate. Even if we have perfect estimates, what good can they do if\nthe optimal strategy is to vary our strategy at runtime, *within* an\nindividual index scan, as different parts of the key space (different\ngroupings) are traversed through? To skip or not to skip, say. This\nisn't about picking the cheapest plan, really.\n\nThat's another huge advantage of index quals -- they can (at least in\nprinciple) allow you skip over big parts of the index when it just\nends up making sense, in whatever way, for whatever reason. In the\nindex, and in the heap. Often both. You'd likely always prefer to err\nin the direction of having more index quals rather than fewer, when\ndoing so doesn't substantially change the plan itself. It could be\nvery cheap insurance, even without any limit. (It would probably also\nbe a lot faster, but it needn't be.)\n\n> Anyway, I see this as extension of what order_qual_clauses() does. 
The\n> main issue is that even order_qual_clauses() admits the estimates are\n> somewhat unreliable, so we can't expect to make perfect decisions.\n\nThe attribute value independence assumption is wishful thinking, in no\nsmall part -- it's quite surprising that it works as well as it does,\nreally.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 20 Jul 2023 20:32:33 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On 7/21/23 05:32, Peter Geoghegan wrote:\n> On Thu, Jul 20, 2023 at 4:35 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> I think the SAOP patch may need to be much more careful about this, but\n>> for this patch it's simpler because it doesn't really change any of the\n>> index internals, or the indexscan in general.\n> \n> It's true that the SAOP patch needs relatively complicated\n> infrastructure to assess whether or not the technique is safe with a\n> given set of quals. You cannot safely get an ordered index scan for\n> something like \"select * from table where a < 5 and b in (1,2,3) order\n> by a, b\". With or without my patch. My patch isn't really changing all\n> that much about the behavior in nbtree, as these things go. It's\n> *surprising* how little has to change about the high level structure\n> of index scans, in fact.\n> \n> (Actually, I'm glossing over a lot. The MDAM paper describes\n> techniques that'd make even the really challenging cases safe, through\n> a process of converting quals from conjunctive normal form into\n> disjunctive normal form. This is more or less the form that the state\n> machine implemented by _bt_advance_array_keys() produces already,\n> today. But even with all this practically all of the heavy lifting\n> takes place before the index scan even begins, during preprocessing --\n> so you still require surprisingly few changes to index scans\n> themselves.)\n> \n\nAh, OK. I was assuming the execution might be more complex. But I was\nthinking more about the costing part - if you convert the clauses in\nsome way, does that affect the reliability of estimates somehow? If the\nconversion from AND to OR makes the list of clauses more complex, that\nmight be an issue ...\n\nThe index-only filter does no such conversion, it just uses the same\nclauses as before.\n\n>> If I simplify that a bit, we're just reordering the clauses in a way to\n>> maybe eliminate the heap fetch. 
The main risk seems to be that this will\n>> force an expensive qual to the front of the list, just because it can be\n>> evaluated on the index tuple.\n> \n> My example query might have been poorly chosen, because it involved a\n> limit. What I'm thinking of is more general than that.\n> \n\nI wasn't really thinking about LIMIT, and I don't think it changes the\noverall behavior very much (sure, it's damn difficult to estimate for\nskewed data sets, but meh).\n\nThe case I had in mind is something like this:\n\nCREATE TABLE t (a int, b int, c int);\nCREATE INDEX ON t (a);\nINSERT INTO t SELECT i, i, i FROM generate_series(1,1000000) s(i);\n\nSELECT * FROM t WHERE bad_qual(a) AND b < 1 AND c < 1 ORDER BY a;\n\nwhere bad_qual is expensive and matches almost all rows. Without the\nindex-only filters, we'd evaluate the conditions in this order\n\n [b < 1], [c < 1], [bad_qual(a)]\n\nbut with naive index-only filters we do this:\n\n [bad_qual(a)], [b < 1], [c < 1]\n\nwhich is bad as it runs the expensive thing on every row.\n\nFWIW the \"ORDER BY\" is necessary, because otherwise we may not even\nbuild the index path (no index keys, no interesting pathkeys). It's just\nan opportunistic optimization - if already doing index scan, try doing\nthis too. I wonder if we should relax that ...\n\n>> But the difference would need to be worse\n>> than what we save by not doing the I/O - considering how expensive the\n>> I/O is, that seems unlikely. Could happen for expensive quals that don't\n>> really eliminate many rows, I guess.\n> \n> That sounds like the same principle that I was talking about. I think\n> that it can be pushed quite far, though. I am mostly talking about the\n> worst case, and it seems like you might not be.\n> \n\nYeah, I was proposing the usual costing approach, based on \"average\"\nbehavior. It has the usual weakness that if the estimates are far off,\nwe can have issues. 
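(To make the sketch above actually runnable: bad_qual is hypothetical, but any always-true SQL function with an inflated COST should behave as described, for example:)

```sql
-- Hypothetical stand-in for the expensive qual: matches every row,
-- and the inflated COST tells the planner it is expensive to evaluate,
-- so order_qual_clauses() would normally push it to the back.
CREATE FUNCTION bad_qual(x int) RETURNS boolean
AS $$ SELECT x >= 0 $$ LANGUAGE sql COST 1000000;
```
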
There have been various discussions about maybe\nconsidering how reliable the estimates are, to defend against that.\nWhich is tough to do.\n\nIn a way, focusing on the worst case does that by assuming the worst\ncombination - which is fine, although it may choose the slower (but\nsafer) approach in some cases.\n\n> You can easily construct examples where some kind of skew causes big\n> problems with a multi-column index. I'm thinking of indexes whose\n> leading columns are low cardinality, and queries where including the\n> second column as an index qual looks kind of marginal to the\n> optimizer. Each grouping represented in the most significant index\n> column might easily have its own unique characteristics; the\n> distribution of values in subsequent columns might be quite\n> inconsistent across each grouping, in whatever way.\n> \n> Since nothing stops a qual on a lower order column having a wildly\n> different selectivity for one particular grouping, it might not make\n> sense to say that a problem in this area is due to a bad selectivity\n> estimate. Even if we have perfect estimates, what good can they do if\n> the optimal strategy is to vary our strategy at runtime, *within* an\n> individual index scan, as different parts of the key space (different\n> groupings) are traversed through? To skip or not to skip, say. This\n> isn't about picking the cheapest plan, really.\n> \n\nWell, yeah. It's one thing to assign some cost estimate to the plan, and\nanother thing what happens at runtime. It would be nice to be able to\nreflect the expected runtime behavior in the cost (otherwise you get\nconfused users complaining that we pick the \"wrong\" plan).\n\n> That's another huge advantage of index quals -- they can (at least in\n> principle) allow you skip over big parts of the index when it just\n> ends up making sense, in whatever way, for whatever reason. In the\n> index, and in the heap. Often both. 
You'd likely always prefer to err\n> in the direction of having more index quals rather than fewer, when\n> doing so doesn't substantially change the plan itself. It could be\n> very cheap insurance, even without any limit. (It would probably also\n> be a lot faster, but it needn't be.)\n> \n\nTrue, although my patch doesn't change the number of index quals. It's\njust about the other quals.\n\n>> Anyway, I see this as extension of what order_qual_clauses() does. The\n>> main issue is that even order_qual_clauses() admits the estimates are\n>> somewhat unreliable, so we can't expect to make perfect decisions.\n> \n> The attribute value independence assumption is wishful thinking, in no\n> small part -- it's quite surprising that it works as well as it does,\n> really.\n> \n\nYeah. For OLTP it works pretty well, for OLAP not so much.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 21 Jul 2023 13:52:55 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Fri, Jul 21, 2023 at 4:52 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> > (Actually, I'm glossing over a lot. The MDAM paper describes\n> > techniques that'd make even the really challenging cases safe, through\n> > a process of converting quals from conjunctive normal form into\n> > disjunctive normal form. This is more or less the form that the state\n> > machine implemented by _bt_advance_array_keys() produces already,\n> > today. But even with all this practically all of the heavy lifting\n> > takes place before the index scan even begins, during preprocessing --\n> > so you still require surprisingly few changes to index scans\n> > themselves.)\n> >\n>\n> Ah, OK. I was assuming the execution might be more complex.\n\nSort of. Execution of individual \"primitive index scans\" effectively\nworks the same way as it does already -- the preprocessing is required\nto generate disjunctive primitive index scans that look like one big\nindex scan when combined (which is how native execution of SAOPs by\nnbtree works today).\n\nThe challenge during execution of index scans (execution proper, not\npreprocessing) comes from processing a \"flattened\" DNF representation\nof your original quals efficiently. If you have (say) 3 SAOPs, then\nthe total number of distinct DNF quals is the cartesian product of the\n3 arrays -- which is multiplicative. But, you can skip over the\nsingle-value DNF quals quickly when they have no matches. Which isn't\nall that hard.\n\nWe get some very useful invariants with these DNF quals: you can have\nat most one individual DNF qual as a match for any individual index\ntuple. Plus the quals are materialized in key space order, which is\nideally suited to processing by an ordered scan. So just as you can\nuse the array keys to skip over parts of the index when searching for\nan index tuple, you can use an index tuple to skip over the arrays\nwhen searching for the next relevant set of array keys. 
It works both\nways!\n\n> But I was\n> thinking more about the costing part - if you convert the clauses in\n> some way, does that affect the reliability of estimates somehow?\n\nObviously, it doesn't affect the selectivity at all. That seems most\nimportant (you kinda said the same thing yourself).\n\n> If the\n> conversion from AND to OR makes the list of clauses more complex, that\n> might be an issue ...\n\nThat's definitely a concern. Even still, the biggest problem by far in\nthis general area is selectivity estimation. Which, in a way, can be\nmade a lot easier by this general approach.\n\nLet's say we have the tenk1 table, with the same composite index as in\nmy example upthread (on \"(two,four,twenty)\"). Further suppose you have\na very simple query: \"select count(*) from tenk1\". On master (and with\nthe patch) that's going to give you an index-only scan on the\ncomposite index (assuming it's the only index), which is quite\nefficient despite being a full index scan -- 11 buffer hits.\n\nThis much more complicated count(*) query is where it gets interesting:\n\nselect\n count(*),\n two,\n four,\n twenty\nfrom\n tenk1_dyn_saop\nwhere\n two in (0, 1)\n and four in (1, 2, 3, 4)\n and twenty in (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15)\ngroup by\n two,\n four,\n twenty\norder by\n two,\n four,\n twenty;\n\nIt's inherently very difficult to predict how selective this query\nwill be using basic statistics. But maybe it doesn't need to matter so\nmuch, so often.\n\nThe patch can execute this with an index-only scan + GroupAggregate.\nWhat ends up happening is that the patch gets 9 buffer hits -- so\npretty close to 11. Master can use almost the same query plan (it uses\nquals but needs to hashagg+ sort). It ends up getting 245 buffer hits\n-- vastly more than what we see for the full index scan case (and\nnothing to do with the sort/an issue with a limit). That's nearly as\nmany hits as you'd get with a sequential scan. 
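(For anyone reproducing this: tenk1_dyn_saop is not otherwise defined in the thread; it can be assumed to be a copy of the regression suite's tenk1 table with the composite index from the earlier example, something like:)

```sql
-- Assumed setup: clone tenk1 and add the composite index on (two, four, twenty).
CREATE TABLE tenk1_dyn_saop AS SELECT * FROM tenk1;
CREATE INDEX ON tenk1_dyn_saop (two, four, twenty);
VACUUM ANALYZE tenk1_dyn_saop;  -- populate the visibility map for index-only scans
```
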
(BTW, I don't need to\ncoax the query planner to get this result on master.)\n\nWith the patch you can vary the predicate in whatever way, so that the\nselectivity shifts up or down. Occasionally you'll get maybe one extra\nbuffer access relative to the base full index scan case, but overall\nthe patch makes the worst case look very much like a full index scan\n(plus some relatively tiny CPU overhead). This is just common sense,\nin a way; selectivities are always between 0.0 and 1.0. Why shouldn't\nwe be able to think about it like that?\n\n> I wasn't really thinking about LIMIT, and I don't think it changes the\n> overall behavior very much (sure, it's damn difficult to estimate for\n> skewed data sets, but meh).\n>\n> The case I had in mind is something like this:\n>\n> CREATE TABLE t (a int, b int, c int);\n> CREATE INDEX ON t (a);\n> INSERT INTO t SELECT i, i, i FROM generate_series(1,1000000) s(i);\n>\n> SELECT * FROM t WHERE bad_qual(a) AND b < 1 AND c < 1 ORDER BY a;\n>\n> where bad_qual is expensive and matches almost all rows.\n\nYou must distinguish between quals that can become required scan keys\n(or can terminate the scan), and all other quals. This is really\nimportant for my pending SAOP patch, but I think it might be important\nhere too. I wonder if the best place to address the possibility of\nsuch a regression is in the index AM itself.\n\nLet's make your example a bit more concrete: let's assume that\nbad_qual is a very expensive integer comparison, against a column that\nhas only one possible value. So now your example becomes:\n\nCREATE TABLE t (a expensive_int, b int, c int);\nCREATE INDEX ON t (a);\nINSERT INTO t SELECT 42, i, i FROM generate_series(1,1000000) s(i);\nSELECT * FROM t WHERE a in (7, 42) AND b < 1 AND c < 1 ORDER BY a;\n\n(I'm using a SAOP here because the planner will more or less disregard\nthe ORDER BY if I make it \"= 42\" instead. 
Maybe that makes it\nsimpler.)\n\nSure, you're getting a full index scan here, and you get all these\nuseless comparisons on \"a\" -- that's a real risk. But AFAICT there is\nno real need for it. There is another nbtree patch that might help. A\npatch that teaches nbtree's _bt_readpage function to skip useless\ncomparisons like this:\n\nhttps://postgr.es/m/079c3f8e-3371-abe2-e93c-fc8a0ae3f571@garret.ru\n\nIn order for this kind of optimization to be possible at all, we must\nbe able to reason about \"a\" as a column whose values will always be in\nkey space order. That is, nbtree must recognize that \"a\" is the most\nsignificant key column, not (say) a low-order column from a composite\nindex -- it's a required column in both directions. If _bt_readpage\ncan determine that the first tuple on a leaf page has the value \"42\",\nand the high key has that same value, then we can skip all of the\ncomparisons of \"a\" for that page, right away (in fact we don't require\nany comparisons). Now it doesn't matter that they're super expensive\ncomparisons (or it hardly matters).\n\nIt's natural to think of things like this _bt_readpage optimization as\nsomething that makes existing types of plan shapes run faster. But you\ncan also think of them as things that make new and fundamentally\nbetter plan shapes feasible, by making risky things much less risky.\n\n> FWIW the \"ORDER BY\" is necessary, because otherwise we may not even\n> build the index path (no index keys, no interesting pathkeys). It's just\n> an opportunistic optimization - if already doing index scan, try doing\n> this too. I wonder if we should relax that ...\n\nI'm kinda doing the same thing with ordering in my own patch. In\ngeneral, even if the query really doesn't care about the index order,\nthere may be a lot of value in making the nbtree code understand that\nthis is an ordered index scan. 
That's what enables skipping, in all\nits forms (skipping individual comparisons, skipping whole subsections\nof the index, etc).\n\nI'm not saying that this is 100% problem free. But it seems like a\npromising high level direction.\n\n> In a way, focusing on the worst case does that by assuming the worst\n> combination - which is fine, although it may choose the slower (but\n> safer) approach in some cases.\n\nI don't think that it has to be slower on average (even by a tiny\nbit). It might just end up being slightly faster on average, and way\nfaster on occasion.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 21 Jul 2023 12:17:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 7/21/23 21:17, Peter Geoghegan wrote:\n> ...\n>> But I was\n>> thinking more about the costing part - if you convert the clauses in\n>> some way, does that affect the reliability of estimates somehow?\n> \n> Obviously, it doesn't affect the selectivity at all. That seems most\n> important (you kinda said the same thing yourself).\n> \n\nSorry, I think I meant 'cost estimates', not the selectivity estimates.\nIf you convert the original \"simple\" clauses into the more complex list,\npresumably we'd cost that differently, right? I may be entirely wrong,\nbut my intuition is that costing these tiny clauses will be much more\ndifficult than costing the original clauses.\n\n>> If the\n>> conversion from AND to OR makes the list of clauses more complex, that\n>> might be an issue ...\n> \n> That's definitely a concern. Even still, the biggest problem by far in\n> this general area is selectivity estimation. Which, in a way, can be\n> made a lot easier by this general approach.\n> \n> Let's say we have the tenk1 table, with the same composite index as in\n> my example upthread (on \"(two,four,twenty)\"). Further suppose you have\n> a very simple query: \"select count(*) from tenk1\". On master (and with\n> the patch) that's going to give you an index-only scan on the\n> composite index (assuming it's the only index), which is quite\n> efficient despite being a full index scan -- 11 buffer hits.\n> \n> This much more complicated count(*) query is where it gets interesting:\n> \n> select\n> count(*),\n> two,\n> four,\n> twenty\n> from\n> tenk1_dyn_saop\n> where\n> two in (0, 1)\n> and four in (1, 2, 3, 4)\n> and twenty in (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15)\n> group by\n> two,\n> four,\n> twenty\n> order by\n> two,\n> four,\n> twenty;\n> \n> It's inherently very difficult to predict how selective this query\n> will be using basic statistics. 
But maybe it doesn't need to matter so\n> much, so often.\n> \n> The patch can execute this with an index-only scan + GroupAggregate.\n> What ends up happening is that the patch gets 9 buffer hits -- so\n> pretty close to 11. Master can use almost the same query plan (it uses\n> quals but needs to hashagg+ sort). It ends up getting 245 buffer hits\n> -- vastly more than what we see for the full index scan case (and\n> nothing to do with the sort/an issue with a limit). That's nearly as\n> many hits as you'd get with a sequential scan. (BTW, I don't need to\n> coax the query planner to get this result on master.)\n> \n> With the patch you can vary the predicate in whatever way, so that the\n> selectivity shifts up or down. Occasionally you'll get maybe one extra\n> buffer access relative to the base full index scan case, but overall\n> the patch makes the worst case look very much like a full index scan\n> (plus some relatively tiny CPU overhead). This is just common sense,\n> in a way; selectivities are always between 0.0 and 1.0. Why shouldn't\n> we be able to think about it like that?\n> \n\nRight, I agree with this reasoning in principle.\n\nBut I'm getting a bit lost regarding what's the proposed costing\nstrategy. It's hard to follow threads spanning days, with various other\ndistractions, etc.\n\nIn principle, I think:\n\na) If we estimate the scan to return almost everything (or rather if we\nexpect it to visit almost the whole index), it makes perfect sense to\ncost it as a full index scan.\n\nb) What should we do if we expect to read only a fraction of the index?\nIf we're optimistic, and cost it according to the estimates, but then\nend up reading most of the index, how bad could it be (compared to the optimal\nplan choice)? 
Similarly, if we're pessimistic/defensive and cost it as\nfull index scan, how many \"good\" cases would we reject based on the\nartificially high cost estimate?\n\nI don't have a very good idea how sensitive the cost is to selectivity\nchanges, which I think is crucial for making judgments.\n\n>> I wasn't really thinking about LIMIT, and I don't think it changes the\n>> overall behavior very much (sure, it's damn difficult to estimate for\n>> skewed data sets, but meh).\n>>\n>> The case I had in mind is something like this:\n>>\n>> CREATE TABLE t (a int, b int, c int);\n>> CREATE INDEX ON t (a);\n>> INSERT INTO t SELECT i, i, i FROM generate_series(1,1000000) s(i);\n>>\n>> SELECT * FROM t WHERE bad_qual(a) AND b < 1 AND c < 1 ORDER BY a;\n>>\n>> where bad_qual is expensive and matches almost all rows.\n> \n> You must distinguish between quals that can become required scan keys\n> (or can terminate the scan), and all other quals. This is really\n> important for my pending SAOP patch, but I think it might be important\n> here too. I wonder if the best place to address the possibility of\n> such a regression is in the index AM itself.\n> \n> Let's make your example a bit more concrete: let's assume that\n> bad_qual is a very expensive integer comparison, against a column that\n> has only one possible value. So now your example becomes:\n> \n> CREATE TABLE t (a expensive_int, b int, c int);\n> CREATE INDEX ON t (a);\n> INSERT INTO t SELECT 42, i, i FROM generate_series(1,1000000) s(i);\n> SELECT * FROM t a in (7, 42) AND b < 1 AND c < 1 ORDER BY a;\n> \n> (I'm using a SAOP here because the planner will more or less disregard\n> the ORDER BY if I make it \"= 42\" instead. Maybe that makes it\n> simpler.)\n> \n> Sure, you're getting a full index scan here, and you get all these\n> useless comparisons on \"a\" -- that's a real risk. But AFAICT there is\n> no real need for it. There is another nbtree patch that might help. 
A\n> patch that teaches nbtree's _bt_readpage function to skip useless\n> comparisons like this:\n> \n> https://postgr.es/m/079c3f8e-3371-abe2-e93c-fc8a0ae3f571@garret.ru\n> \n> In order for this kind of optimization to be possible at all, we must\n> be able to reason about \"a\" as a column whose values will always be in\n> key space order. That is, nbtree must recognize that \"a\" is the most\n> significant key column, not (say) a low-order column from a composite\n> index -- it's a required column in both directions. If _bt_readpage\n> can determine that the first tuple on a leaf page has the value \"42\",\n> and the high key has that same value, then we can skip all of the\n> comparisons of \"a\" for that page, right away (in fact we don't require\n> any comparisons). Now it doesn't matter that they're super expensive\n> comparisons (or it hardly matters).\n> \n> It's natural to think of things like this _bt_readpage optimization as\n> something that makes existing types of plan shapes run faster. But you\n> can also think of them as things that make new and fundamentally\n> better plan shapes feasible, by making risky things much less risky.\n> \n\nThat'd be an interesting optimization, but I don't think that matters\nfor this patch, as it's not messing with index scan keys at all. I mean,\nit does not affect what scan keys get passed to the AM at all, or what\nscan keys are required. And it does not influence what the AM does. So\nall this seems interesting, but rather orthogonal to this patch.\n\n\n>> FWIW the \"ORDER BY\" is necessary, because otherwise we may not even\n>> build the index path (no index keys, no interesting pathkeys). It's just\n>> an opportunistic optimization - if already doing index scan, try doing\n>> this too. I wonder if we should relax that ...\n> \n> I'm kinda doing the same thing with ordering in my own patch. 
In\n> general, even if the query really doesn't care about the index order,\n> there may be a lot of value in making the nbtree code understand that\n> this is an ordered index scan. That's what enables skipping, in all\n> its forms (skipping individual comparisons, skipping whole subsections\n> of the index, etc).\n> \n> I'm not saying that this is 100% problem free. But it seems like a\n> promising high level direction.\n> \n\nI was rather thinking about maybe relaxing the rules about which index\npaths we create, to include indexes that might be interesting thanks to\nindex-only filters (unless already picked thanks to sorting).\n\n>> In a way, focusing on the worst case does that by assuming the worst\n>> combination - which is fine, although it may choose the slower (but\n>> safer) approach in some cases.\n> \n> I don't think that it has to be slower on average (even by a tiny\n> bit). It might just end up being slightly faster on average, and way\n> faster on occasion.\n> \n\nOK, that'd be nice. I don't have a very good intuition about behavior\nfor these queries, I'd need to play with & benchmark it the way I did\nfor the prefetching patch etc.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 23 Jul 2023 14:04:18 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Sun, Jul 23, 2023 at 5:04 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Sorry, I think I meant 'cost estimates', not the selectivity estimates.\n> If you convert the original \"simple\" clauses into the more complex list,\n> presumably we'd cost that differently, right? I may be entirely wrong,\n> but my intuition is that costing these tiny clauses will be much more\n> difficult than costing the original clauses.\n\nI think that that's definitely true (it is more difficult), but that\nthere may be a bigger picture.\n\n> Right, I agree with this reasoning in principle.\n>\n> But I'm getting a bit lost regarding what's the proposed costing\n> strategy.\n\nTo be clear, I don't have a clue how to better estimate the\ncardinality of multiple attributes from a composite index. The big and\nimmediate change to the SAOP costing with my patch is that\ngenericcostestimate/btcostestimate can safely assume that visiting\neach leaf page more than once is a physical impossibility. Because it\nis. It is no longer necessary to treat SAOPs similarly to a nested\nloop join during costing, which is how it works today.\n\nNow, whenever you add increasingly complicated clauses to a\nmulti-column SAOP query (like the ones I've shown you), it makes sense\nfor the cost to \"saturate\" at a certain point. That should be\nrepresentative of the physical reality, for both CPU costs and I/O\ncosts. Right now the worst case is really relevant to the average\ncase, since the risk of the costs just exploding at runtime is very\nreal.\n\nIf the only problem in this area was the repeated accesses to the same\nleaf pages (accesses that happen in very close succession anyway),\nthen all of this would be a nice win, but not much more. It certainly\nwouldn't be expected to change the way we think about stuff that isn't\ndirectly and obviously relevant. 
But, it's not just the index pages.\nOnce you start to consider the interactions with filter/qpquals, it\ngets much more interesting. Now you're talking about completely\navoiding physical I/Os for heap accesses, which has the potential to\nmake a dramatic difference to some types of queries, particularly in\nthe worst case.\n\n> It's hard to follow threads spanning days, with various other\n> distractions, etc.\n\nI have to admit that my thinking on this very high level stuff is\nrather unrefined. As much as anything else, I'm trying to invent (or\ndiscover) a shared vocabulary for discussing these issues. I might\nhave gone about it clumsily, at times. I appreciate being able to\nbounce this stuff off you.\n\n> I don't have a very good idea how sensitive the cost is to selectivity\n> changes, which I think is crucial for making judgments.\n\nI'm not trying to find a way for the optimizer to make better\njudgments about costing with a multi-column index. What I'm suggesting\n(rather tentatively) is to find a way for it to make fewer (even no)\njudgements at all.\n\nIf you can find a way of reducing the number of possible choices\nwithout any real downside -- in particular by just not producing index\npaths that cannot possibly be a win in some cases -- then you reduce\nthe number of bad choices. The challenge is making that kind of\napproach in the optimizer actually representative of the executor in\nthe real world. The executor has to have robust performance under a\nvariety of runtime conditions for any of this to make sense.\n\n> > It's natural to think of things like this _bt_readpage optimization as\n> > something that makes existing types of plan shapes run faster. 
But you\n> > can also think of them as things that make new and fundamentally\n> > better plan shapes feasible, by making risky things much less risky.\n> >\n>\n> That'd be an interesting optimization, but I don't think that matters\n> for this patch, as it's not messing with index scan keys at all. I mean,\n> it does not affect what scan keys get passed to the AM at all, or what\n> scan keys are required. And it does not influence what the AM does. So\n> all this seems interesting, but rather orthogonal to this patch.\n\nYour patch is approximately the opposite of what I'm talking about, in\nterms of its structure. The important commonality is that each patch\nadds \"superfluous\" quals that can be very useful at runtime, under the\nright circumstances -- which can be hard to predict. Another\nsimilarity is that both patches inspire some of the same kind of\nlingering doubts about extreme cases -- cases where (somehow) the\nusual/expected cost asymmetry that usually works in our favor doesn't\napply.\n\nMy current plan is to post v1 of my patch early next week. It would be\nbetter to discuss this on the thread that I create for that patch.\n\nYou're right that \"exploiting index ordering\" on the index AM side is\ntotally unrelated to your patch. Sorry about that.\n\n> I was rather thinking about maybe relaxing the rules about which index\n> paths we create, to include indexes that might be interesting thanks to\n> index-only filters (unless already picked thanks to sorting).\n\nThat seems like it would make sense. In general I think that we\noveruse and over rely on bitmap index scans -- we should try to chip\naway at the artificial advantages that bitmap index scans have, so\nthat we can get the benefit of index scans more often -- accessing the\nheap/VM inline does open up a lot of possibilities that bitmap scans\nwill never be able to match. 
(BTW, I'm hoping that your work on index\nprefetching will help with that.)\n\nI see that your patch has this diff change (which is 1 out of only 2\nor 3 plan changes needed by the patch):\n\n--- a/src/test/regress/expected/create_index.out\n+++ b/src/test/regress/expected/create_index.out\n@@ -1838,18 +1838,13 @@ DROP TABLE onek_with_null;\n EXPLAIN (COSTS OFF)\n SELECT * FROM tenk1\n WHERE thousand = 42 AND (tenthous = 1 OR tenthous = 3 OR tenthous = 42);\n- QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n- Bitmap Heap Scan on tenk1\n- Recheck Cond: (((thousand = 42) AND (tenthous = 1)) OR ((thousand\n= 42) AND (tenthous = 3)) OR ((thousand = 42) AND (tenthous = 42)))\n- -> BitmapOr\n- -> Bitmap Index Scan on tenk1_thous_tenthous\n- Index Cond: ((thousand = 42) AND (tenthous = 1))\n- -> Bitmap Index Scan on tenk1_thous_tenthous\n- Index Cond: ((thousand = 42) AND (tenthous = 3))\n- -> Bitmap Index Scan on tenk1_thous_tenthous\n- Index Cond: ((thousand = 42) AND (tenthous = 42))\n-(9 rows)\n+ QUERY PLAN\n+-----------------------------------------------------------------------\n+ Index Scan using tenk1_thous_tenthous on tenk1\n+ Index Cond: (thousand = 42)\n+ Index Filter: ((tenthous = 1) OR (tenthous = 3) OR (tenthous = 42))\n+ Filter: ((tenthous = 1) OR (tenthous = 3) OR (tenthous = 42))\n+(4 rows)\n\nThat does seem like an improvement to me. But, an even better plan is\npossible. Or would be possible once this SAOP-OR-transformation patch\nis in place (if it was then combined with my own SAOP patch):\n\nhttps://www.postgresql.org/message-id/flat/919bfbcb-f812-758d-d687-71f89f0d9a68%40postgrespro.ru#9d877caf48c4e331e507b5c63914228e\n\nThat could give us an index scan plan that is perfectly efficient.\nLike the new plan shown here, it would pin/lock a single leaf page\nfrom the tenk1_thous_tenthous index, once. 
But unlike the plan shown\nhere, it would be able to terminate as soon as the index scan reached\nan index tuple where thousand=42 and tenthous>42. That makes a\nsignificant difference if we have to do heap accesses for those extra\ntuples.\n\nPlus this hypothetical other plan of mine would be more robust: it\nwould tolerate misestimates. It happens to be the case that there just\naren't that many tuples with thousand=42 -- they take up only a\nfraction of one leaf page. But...why take a chance on that being true,\nif we don't have to? The old bitmap index scan plan has this same\nadvantage already -- it is robust in the very same way. Because it\nmanaged to pass down specific \"tenthous\" index quals. It would be nice\nto have both advantages, together. (To be clear, I'm certainly not\nsuggesting that the example argues against what you want to do here --\nit's just an example that jumped out at me.)\n\nPerhaps this example will make my confusion about the boundaries\nbetween each of our patches a bit more understandable. I was confused\n-- and I still am. I look forward to being less confused at some point\nin the future.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 23 Jul 2023 12:56:11 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Sun, 2023-07-23 at 14:04 +0200, Tomas Vondra wrote:\n> But I'm getting a bit lost regarding what's the proposed costing\n> strategy. It's hard to follow threads spanning days, with various\n> other\n> distractions, etc.\n\nI'm getting a bit lost in this discussion as well -- for the purposes\nof this feature, we only need to know whether to push down a clause as\nan Index Filter or not, right?\n\nCould we start out conservatively and push down as an Index Filter\nunless there is some other clause ahead of it that can't be pushed\ndown? That would allow users to have some control by writing clauses in\nthe desired order or wrapping them in functions with a declared cost.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 24 Jul 2023 10:36:25 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 10:36 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> I'm getting a bit lost in this discussion as well -- for the purposes\n> of this feature, we only need to know whether to push down a clause as\n> an Index Filter or not, right?\n\nI think so.\n\n> Could we start out conservatively and push down as an Index Filter\n> unless there is some other clause ahead of it that can't be pushed\n> down? That would allow users to have some control by writing clauses in\n> the desired order or wrapping them in functions with a declared cost.\n\nI'm a bit concerned about cases like the one I described from the\nregression tests.\n\nThe case in question shows a cheaper plan replacing a more expensive\nplan -- so it's a win by every conventional measure. But, the new plan\nis less robust in the sense that I described yesterday: it will be\nmuch slower than the current plan when there happens to be many more\n\"thousand = 42\" tuples than expected. We have a very high chance of a\nsmall benefit (we save repeated index page accesses), but a very low\nchance of a high cost (we incur many more heap accesses). Which seems\nless than ideal.\n\nOne obvious way of avoiding that problem (that's probably overly\nconservative) is to just focus on the original complaint from Maxim.\nThe original concern was limited to non-key columns from INCLUDE\nindexes. If you only apply the optimization there then you don't run\nthe risk of generating a path that \"out competes\" a more robust path\nin the sense that I've described. This is obviously true because there\ncan't possibly be index quals/scan keys for non-key columns within the\nindex AM.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 24 Jul 2023 10:54:31 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Mon, Jul 24, 2023 at 10:36 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>> Could we start out conservatively and push down as an Index Filter\n>> unless there is some other clause ahead of it that can't be pushed\n>> down? That would allow users to have some control by writing clauses in\n>> the desired order or wrapping them in functions with a declared cost.\n\n> I'm a bit concerned about cases like the one I described from the\n> regression tests.\n\nPlease do not put in any code that assumes that restriction clause\norder is preserved, or encourages users to think it is. There are\nalready cases where that's not so, eg equivalence clauses tend to\nget shuffled around.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Jul 2023 13:58:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Mon, 2023-07-24 at 13:58 -0400, Tom Lane wrote:\n> Please do not put in any code that assumes that restriction clause\n> order is preserved, or encourages users to think it is.\n\nAgreed. I didn't mean to add any extra guarantee of preserving clause\norder; just to follow the current way order_qual_clauses() works, which\nhas a comment saying:\n\n\"So we just order by security level then estimated per-tuple cost,\nbeing careful not to change the order when (as is often the case) the\nestimates are identical.\"\n\nI assumed that the reason for \"being careful\" above was to not\nunnecessarily override how the user writes the qual clauses, but\nperhaps there's another reason?\n\nRegardless, my point was just to make minimal changes now that are\nunlikely to cause regressions. If we come up with better ways of\nordering the clauses later, that could be part of a separate change. (I\nthink Peter G. is pointing out a complication with that idea, to which\nI'll respond separately.)\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 24 Jul 2023 11:23:32 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Mon, 2023-07-24 at 10:54 -0700, Peter Geoghegan wrote:\n> The case in question shows a cheaper plan replacing a more expensive\n> plan -- so it's a win by every conventional measure. But, the new\n> plan\n> is less robust in the sense that I described yesterday: it will be\n> much slower than the current plan when there happens to be many more\n> \"thousand = 42\" tuples than expected. We have a very high chance of a\n> small benefit (we save repeated index page accesses), but a very low\n> chance of a high cost (we incur many more heap accesses). Which seems\n> less than ideal.\n\nI see. You're concerned that lowering the cost of an index scan path\ntoo much, due to pushing down a clause as an Index Filter, could cause\nit to out-compete a more \"robust\" plan.\n\nThat might be true but I'm not sure what to do about that unless we\nincorporate some \"robustness\" measure into the costing. If every\nmeasure we have says one plan is better, don't we have to choose it?\n\n> The original concern was limited to non-key columns from INCLUDE\n> indexes. If you only apply the optimization there then you don't run\n> the risk of generating a path that \"out competes\" a more robust path\n> in the sense that I've described. This is obviously true because\n> there\n> can't possibly be index quals/scan keys for non-key columns within\n> the\n> index AM.\n\nIf I understand correctly, you mean we couldn't use an Index Filter on\na key column? That seems overly restrictive, there are plenty of\nclauses that might be useful as an Index Filter but cannot be an Index\nCond for one reason or another.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 24 Jul 2023 11:37:13 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Mon, 2023-07-24 at 13:58 -0400, Tom Lane wrote:\n>> Please do not put in any code that assumes that restriction clause\n>> order is preserved, or encourages users to think it is.\n\n> Agreed. I didn't mean to add any extra guarantee of preserving clause\n> order; just to follow the current way order_qual_clauses() works, which\n> has a comment saying:\n> \"So we just order by security level then estimated per-tuple cost,\n> being careful not to change the order when (as is often the case) the\n> estimates are identical.\"\n> I assumed that the reason for \"being careful\" above was to not\n> unnecessarily override how the user writes the qual clauses, but\n> perhaps there's another reason?\n\nI think the point was just to not make any unnecessary behavioral\nchanges from the way things were before we added that sorting logic.\nBut there are other places that will result in clause ordering changes,\nplus there's the whole business of possibly-intermixed restriction and\njoin clauses.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Jul 2023 14:42:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 11:37 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> I see. You're concerned that lowering the cost of an index scan path\n> too much, due to pushing down a clause as an Index Filter, could cause\n> it to out-compete a more \"robust\" plan.\n\nThe optimizer correctly determines that 3 index scans (plus a bitmap\nOR node) are more expensive than 1 very similar index scan. It's hard\nto argue with that.\n\n> That might be true but I'm not sure what to do about that unless we\n> incorporate some \"robustness\" measure into the costing. If every\n> measure we have says one plan is better, don't we have to choose it?\n\nI'm mostly concerned about the possibility itself -- it's not a matter\nof tuning the cost. I agree that that approach would probably be\nhopeless.\n\nThere is a principled (albeit fairly involved) way of addressing this.\nThe patch allows the optimizer to produce a plan that has 1 index\nscan, that treats the first column as an index qual, and the second\ncolumn as a filter condition. There is no fundamental reason why we\ncan't just have 1 index scan that makes both columns into index quals\n(instead of 3 highly duplicative variants of the same index scan).\nThat's what I'm working towards right now.\n\n> If I understand correctly, you mean we couldn't use an Index Filter on\n> a key column? That seems overly restrictive, there are plenty of\n> clauses that might be useful as an Index Filter but cannot be an Index\n> Cond for one reason or another.\n\nI think that you're probably right about it being overly restrictive\n-- that was just a starting point for discussion. Perhaps there is an\nidentifiable class of clauses that can benefit, but don't have the\ndownside that I'm concerned about.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 24 Jul 2023 11:59:32 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 11:59 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > That might be true but I'm not sure what to do about that unless we\n> > incorporate some \"robustness\" measure into the costing. If every\n> > measure we have says one plan is better, don't we have to choose it?\n>\n> I'm mostly concerned about the possibility itself -- it's not a matter\n> of tuning the cost. I agree that that approach would probably be\n> hopeless.\n\nThis seems related to the fact that EXPLAIN doesn't expose the\ndifference between what Markus Winand calls \"Access Predicates\" and\n\"Index Filter Predicates\", as explained here:\n\nhttps://use-the-index-luke.com/sql/explain-plan/postgresql/filter-predicates\n\nThat is, both \"Access Predicates\" and \"Index Filter Predicates\" are\nshown after an \"Index Cond: \" entry in Postgres EXPLAIN output, in\ngeneral. Even though these are two very different things. I believe\nthat the underlying problem for the implementation (the reason why we\ncan't easily break this out further in EXPLAIN output) is that we\ndon't actually know what kind of predicate it is ourselves -- at least\nnot until execution time. We wait until then to do nbtree\npreprocessing/scan setup. Though perhaps we should do more of this\nduring planning instead [1], for several reasons (fixing this is just\none of those reasons).\n\nThe risk to \"robustness\" for cases like the one I drew attention to on\nthis thread would probably have been obvious all along if EXPLAIN\noutput were more like what Markus would have us do -- he certainly has\na good point here, in general.\n\nBreaking things out in EXPLAIN output along these lines might also\ngive us a better general sense of when a similar plan shift like this\nwas actually okay -- even according to something like my\nnon-traditional \"query robustness\" criteria. 
It's much harder for me\nto argue that a shift in plans from what Markus calls an \"Index Filter\nPredicate\" to what the patch will show under \"Index Filter:\" is a\nnovel new risk. That would be a much less consequential difference,\nbecause those two things are fairly similar anyway.\n\nBesides, such a shift in plan would have to \"legitimately win\" for\nthings to work out like this. If we're essentially picking between two\ndifferent subtypes of \"Index Filter Predicate\", then there can't be\nthe same weird second order effects that we see when an \"Access\nPredicate\" is out-competed by an \"Index Filter Predicate\". It's\npossible that expression evaluation of a small-ish conjunctive\npredicate like \"Index Filter: ((tenthous = 1) OR (tenthous = 3) OR\n(tenthous = 42))\" will be faster than a natively executed SAOP. You\ncan't do that kind of expression evaluation in the index AM itself\n(assuming that there is an opclass for nbtree to use in the first\nplace, which there might not be in the case of any non-key INCLUDE\ncolumns). With the patch, you can do all this. And I think that you\ncan derisk it without resorting to the overly conservative approach of\nlimiting ourselves to non-key columns from INCLUDE indexes.\n\nTo summarize: As Markus says on the same page. \"Index filter\npredicates give a false sense of safety; even though an index is used,\nthe performance degrades rapidly on a growing data volume or system\nload\". That's essentially what I want to avoid here. I'm much less\nconcerned about competition between what are really \"Index Filter\nPredicate\" subtypes. Allowing that competition to take place is not\nentirely risk-free, of course, but it seems about as risky as anything\nelse in this area.\n\n[1] https://www.postgresql.org/message-id/2587523.1647982549@sss.pgh.pa.us\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 1 Aug 2023 17:50:14 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On 8/2/23 02:50, Peter Geoghegan wrote:\n> On Mon, Jul 24, 2023 at 11:59 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>>> That might be true but I'm not sure what to do about that unless we\n>>> incorporate some \"robustness\" measure into the costing. If every\n>>> measure we have says one plan is better, don't we have to choose it?\n>>\n>> I'm mostly concerned about the possibility itself -- it's not a matter\n>> of tuning the cost. I agree that that approach would probably be\n>> hopeless.\n> \n> This seems related to the fact that EXPLAIN doesn't expose the\n> difference between what Markus Winand calls \"Access Predicates\" and\n> \"Index Filter Predicates\", as explained here:\n> \n> https://use-the-index-luke.com/sql/explain-plan/postgresql/filter-predicates\n> \n> That is, both \"Access Predicates\" and \"Index Filter Predicates\" are\n> shown after an \"Index Cond: \" entry in Postgres EXPLAIN output, in\n> general. Even though these are two very different things. I believe\n> that the underlying problem for the implementation (the reason why we\n> can't easily break this out further in EXPLAIN output) is that we\n> don't actually know what kind of predicate it is ourselves -- at least\n> not until execution time. We wait until then to do nbtree\n> preprocessing/scan setup. Though perhaps we should do more of this\n> during planning instead [1], for several reasons (fixing this is just\n> one of those reasons).\n> \n\nHow come we don't know that until the execution time? Surely when\nbuilding the paths/plans, we match the clauses to the index keys, no? Or\nare you saying that just having a scan key is not enough for it to be\n\"access predicate\"?\n\nAnyway, this patch is mostly about \"Index Cond\" mixing two types of\npredicates. But the patch is really about \"Filter\" predicates - moving\nsome of them from table to index. 
So quite similar to the \"index filter\npredicates\" except that those are handled by the index AM.\n\n> The risk to \"robustness\" for cases like the one I drew attention to on\n> this thread would probably have been obvious all along if EXPLAIN\n> output were more like what Markus would have us do -- he certainly has\n> a good point here, in general.\n> \n> Breaking things out in EXPLAIN output along these lines might also\n> give us a better general sense of when a similar plan shift like this\n> was actually okay -- even according to something like my\n> non-traditional \"query robustness\" criteria. It's much harder for me\n> to argue that a shift in plans from what Markus calls an \"Index Filter\n> Predicate\" to what the patch will show under \"Index Filter:\" is a\n> novel new risk. That would be a much less consequential difference,\n> because those two things are fairly similar anyway.\n> \n\nBut differentiating between access and filter predicates (at the index\nAM level) seems rather independent of what this patch aims to do.\n\nFWIW I agree we should make the differences visible in the explain. That\nseems fairly useful for non-trivial index access paths, and it does not\nchange the execution at all. I think it'd be fine to do that only for\nVERBOSE mode, and only for EXPLAIN ANALYZE (if we only do this at\nexecution time for now).\n\n> Besides, such a shift in plan would have to \"legitimately win\" for\n> things to work out like this. If we're essentially picking between two\n> different subtypes of \"Index Filter Predicate\", then there can't be\n> the same weird second order effects that we see when an \"Access\n> Predicate\" is out-competed by an \"Index Filter Predicate\". It's\n> possible that expression evaluation of a small-ish conjunctive\n> predicate like \"Index Filter: ((tenthous = 1) OR (tenthous = 3) OR\n> (tenthous = 42))\" will be faster than a natively executed SAOP. 
You\n> can't do that kind of expression evaluation in the index AM itself\n> (assuming that there is an opclass for nbtree to use in the first\n> place, which there might not be in the case of any non-key INCLUDE\n> columns). With the patch, you can do all this. And I think that you\n> can derisk it without resorting to the overly conservative approach of\n> limiting ourselves to non-key columns from INCLUDE indexes.\n> \n\nI'm not following. Why couldn't there be some second-order effects?\nMaybe it's obvious / implied by something said earlier, but I think\nevery time we decide between multiple choices, there's a risk of picking\nwrong.\n\nAnyway, is this still about this patch or rather about your SAOP patch?\n\n> To summarize: As Markus says on the same page. \"Index filter\n> predicates give a false sense of safety; even though an index is used,\n> the performance degrades rapidly on a growing data volume or system\n> load\". That's essentially what I want to avoid here. I'm much less\n> concerned about competition between what are really \"Index Filter\n> Predicate\" subtypes. Allowing that competition to take place is not\n> entirely risk-free, of course, but it seems about as risky as anything\n> else in this area.\n> \n\nTrue. IMHO the danger of \"index filter\" predicates is that people assume\nindex conditions eliminate large parts of the index - which is not\nnecessarily the case here. If we can improve this, cool.\n\nBut again, this is not what this patch does, right? It's about moving\nstuff from \"table filter\" to \"index filter\". And those clauses were not\nmatched to the index AM at all, so it's not really relevant to the\ndiscussion about different subtypes of predicates.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 2 Aug 2023 15:48:28 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Wed, Aug 2, 2023 at 6:48 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> How come we don't know that until the execution time? Surely when\n> building the paths/plans, we match the clauses to the index keys, no? Or\n> are you saying that just having a scan key is not enough for it to be\n> \"access predicate\"?\n\nIn principle we can and probably should recognize the difference\nbetween \"Access Predicates\" and \"Index Filter Predicates\" much earlier\non, during planning. Probably when index paths are first built.\n\nIt seems quite likely that there will turn out to be at least 2 or 3\nreasons to do this. The EXPLAIN usability issue might be enough of a\nreason on its own, though.\n\n> Anyway, this patch is mostly about \"Index Cond\" mixing two types of\n> predicates. But the patch is really about \"Filter\" predicates - moving\n> some of them from table to index. So quite similar to the \"index filter\n> predicates\" except that those are handled by the index AM.\n\nI understand that that's how the patch is structured. It is\nnevertheless possible (as things stand) that the patch will make the\nplanner shift from a plan that uses \"Access Predicates\" to the maximum\nextent possible when scanning a composite index, to a similar plan\nthat has a similar index scan, for the same index, but with fewer\n\"Access Predicates\" in total. In effect, the patched planner will swap\none type of predicate for another type because doing so enables the\nexecutor to scan an index once instead of scanning it several times.\n\nI don't dispute the fact that this can only happen when the planner\nbelieves (with good reason) that the expected cost will be lower. But\nI maintain that there is a novel risk to be concerned about, which is\nmeaningfully distinct from the general risk of regressions that comes\nfrom making just about any change to the planner. 
The important\nprinciple here is that we should have a strong bias in the direction\nof making quals into true \"Access Predicates\" whenever practical.\n\nYeah, technically the patch doesn't directly disrupt how existing\nindex paths get generated. But it does have the potential to disrupt\nit indirectly, by providing an alternative very-similar index path\nthat's likely to outcompete the existing one in these cases. I think\nthat we should have just one index path that does everything well\ninstead.\n\n> But differentiating between access and filter predicates (at the index\n> AM level) seems rather independent of what this patch aims to do.\n\nMy concern is directly related to the question of \"access predicates\nversus filter predicates\", and the planner's current ignorance on\nwhich is which. The difference may not matter too much right now, but\nISTM that your patch makes it matter a lot more. And so in that\nindirect sense it does seem relevant.\n\nThe planner has always had a strong bias in the direction of making\nclauses that can be index quals into index quals, rather than filter\npredicates. It makes sense to do that even when they aren't very\nselective. This is a similar though distinct principle.\n\nIt's awkward to discuss the issue, since we don't really have official\nnames for these things just yet (I'm just going with what Markus calls\nthem for now). And because there is more than one type of \"index\nfilter predicate\" in play with the patch (namely those in the index\nAM, and those in the index scan executor node). But my concern boils\ndown to access predicates being strictly better than equivalent index\nfilter predicates.\n\n> I'm not following. Why couldn't there be some second-order effects?\n> Maybe it's obvious / implied by something said earlier, but I think\n> every time we decide between multiple choices, there's a risk of picking\n> wrong.\n\nNaturally, I agree that some amount of risk is inherent. 
I believe\nthat the risk I have identified is qualitatively different to the\nstandard, inherent kind of risk.\n\n> Anyway, is this still about this patch or rather about your SAOP patch?\n\nIt's mostly not about my SAOP patch. It's just that my SAOP patch will\ndo exactly the right thing in this particular case (at least once\ncombined with Alena Rybakina's OR-to-SAOP transformation patch). It is\ntherefore quite clear that a better plan is possible in principle.\nClearly we *can* have the benefit of only one single index scan (i.e.\nno BitmapOr node combining multiple similar index scans), without\naccepting any novel new risk to get that benefit. We should just have\none index path that does it all, while avoiding duplicative index\npaths that embody a distinction that really shouldn't exist -- it\nshould be dealt with at runtime instead.\n\n> True. IMHO the danger or \"index filter\" predicates is that people assume\n> index conditions eliminate large parts of the index - which is not\n> necessarily the case here. If we can improve this, cool.\n\nI think that we'd find a way to use the same information in the\nplanner, too. It's not just users that should care about the\ndifference. And I don't think that it's particularly likely to be\nlimited to SAOP/MDAM stuff. As Markus said, we're talking about an\nimportant general principle here.\n\n> But again, this is not what this patch does, right? It's about moving\n> stuff from \"table filter\" to \"index filter\". And those clauses were not\n> matched to the index AM at all, so it's not really relevant to the\n> discussion about different subtypes of predicates.\n\nI understand that what I've said is not particularly helpful. At least\nnot on its own. You quite naturally won't want to tie the fate of this\npatch to my SAOP patch, which is significantly more complicated. 
I do\nthink that my concern about this being a novel risk needs to be\ncarefully considered.\n\nMaybe it's possible to address my concern outside of the context of my\nown SAOP patch. That would definitely be preferable all around. But\n\"access predicates versus filter predicates\" seems important and\nrelevant, either way.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 2 Aug 2023 18:32:40 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Wed, Aug 2, 2023 at 6:32 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I don't dispute the fact that this can only happen when the planner\n> believes (with good reason) that the expected cost will be lower. But\n> I maintain that there is a novel risk to be concerned about, which is\n> meaningfully distinct from the general risk of regressions that comes\n> from making just about any change to the planner. The important\n> principle here is that we should have a strong bias in the direction\n> of making quals into true \"Access Predicates\" whenever practical.\n>\n> Yeah, technically the patch doesn't directly disrupt how existing\n> index paths get generated. But it does have the potential to disrupt\n> it indirectly, by providing an alternative very-similar index path\n> that's likely to outcompete the existing one in these cases. I think\n> that we should have just one index path that does everything well\n> instead.\n\nYou can see this for yourself, quite easily. Start by running the\nrelevant query from the regression tests, which is:\n\nSELECT * FROM tenk1 WHERE thousand = 42 AND (tenthous = 1 OR tenthous\n= 3 OR tenthous = 42);\n\nEXPLAIN (ANALYZE, BUFFERS) confirms that the patch makes the query\nslightly faster, as expected. I see 7 buffer hits for the bitmap index\nscan plan on master, versus only 4 buffer hits for the patch's index\nscan. Obviously, this is because we go from multiple index scans\n(multiple bitmap index scans) to only one.\n\nBut, if I run this insert statement and try the same thing again,\nthings look very different:\n\ninsert into tenk1 (thousand, tenthous) select 42, i from\ngenerate_series(43, 1000) i;\n\n(Bear in mind that we've inserted rows that don't actually need to be\nselected by the query in question.)\n\nNow the master branch's plan works in just the same way as before --\nit has exactly the same overhead (7 buffer hits). Whereas the patch\nstill gets the same risky plan -- which now blows up. 
The plan now\nloses by far more than it could ever really hope to win by: 336 buffer\nhits. (It could be a lot higher than this, even, but you get the\npicture.)\n\nSure, it's difficult to imagine a very general model that captures\nthis sort of risk, in the general case. But you don't need a degree in\nactuarial science to understand that it's inherently a bad idea to\njuggle chainsaws -- no matter what your general risk tolerance happens\nto be. Some things you just don't do.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 2 Aug 2023 22:26:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On 8/3/23 07:26, Peter Geoghegan wrote:\n> On Wed, Aug 2, 2023 at 6:32 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> I don't dispute the fact that this can only happen when the planner\n>> believes (with good reason) that the expected cost will be lower. But\n>> I maintain that there is a novel risk to be concerned about, which is\n>> meaningfully distinct from the general risk of regressions that comes\n>> from making just about any change to the planner. The important\n>> principle here is that we should have a strong bias in the direction\n>> of making quals into true \"Access Predicates\" whenever practical.\n>>\n\nOK, preference for access predicates sounds like a reasonable principle.\n\n>> Yeah, technically the patch doesn't directly disrupt how existing\n>> index paths get generated. But it does have the potential to disrupt\n>> it indirectly, by providing an alternative very-similar index path\n>> that's likely to outcompete the existing one in these cases. I think\n>> that we should have just one index path that does everything well\n>> instead.\n> \n> You can see this for yourself, quite easily. Start by running the\n> relevant query from the regression tests, which is:\n> \n> SELECT * FROM tenk1 WHERE thousand = 42 AND (tenthous = 1 OR tenthous\n> = 3 OR tenthous = 42);\n> \n> EXPLAIN (ANALYZE, BUFFERS) confirms that the patch makes the query\n> slightly faster, as expected. I see 7 buffer hits for the bitmap index\n> scan plan on master, versus only 4 buffer hits for the patch's index\n> scan. 
Obviously, this is because we go from multiple index scans\n> (multiple bitmap index scans) to only one.\n> \n> But, if I run this insert statement and try the same thing again,\n> things look very different:\n> \n> insert into tenk1 (thousand, tenthous) select 42, i from\n> generate_series(43, 1000) i;\n> \n> (Bear in mind that we've inserted rows that don't actually need to be\n> selected by the query in question.)\n> \n> Now the master branch's plan works in just the same way as before --\n> it has exactly the same overhead (7 buffer hits). Whereas the patch\n> still gets the same risky plan -- which now blows up. The plan now\n> loses by far more than it could ever really hope to win by: 336 buffer\n> hits. (It could be a lot higher than this, even, but you get the\n> picture.)\n\nAre you sure? Because if I try this on master (62e9af4c without any\npatches), I get this:\n\nregression=# explain (verbose, analyze, buffers) SELECT * FROM tenk1\nWHERE thousand = 42 AND (tenthous = 1 OR tenthous = 3 OR tenthous = 42);\n\n QUERY PLAN\n------------------------------------------------------------------------\n Index Scan using tenk1_thous_tenthous on public.tenk1\n(cost=0.29..1416.32 rows=1 width=244) (actual time=0.078..1.361 rows=1\nloops=1)\n Output: unique1, unique2, two, four, ten, twenty, hundred, thousand,\ntwothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4\n Index Cond: (tenk1.thousand = 42)\n Filter: ((tenk1.tenthous = 1) OR (tenk1.tenthous = 3) OR\n(tenk1.tenthous = 42))\n Rows Removed by Filter: 967\n Buffers: shared hit=335\n Planning Time: 0.225 ms\n Execution Time: 1.430 ms\n(8 rows)\n\nSo not sure about the claim that master works fine as before. 
OTOH, the\npatched branch (with \"my\" patch 2023/07/16, just to be clear) does this:\n\n QUERY PLAN\n------------------------------------------------------------------------\n Index Scan using tenk1_thous_tenthous on public.tenk1\n(cost=0.29..23.57 rows=1 width=244) (actual time=0.077..0.669 rows=1\nloops=1)\n Output: unique1, unique2, two, four, ten, twenty, hundred, thousand,\ntwothousand, fivethous, tenthous, odd, even, stringu1, stringu2, string4\n Index Cond: (tenk1.thousand = 42)\n Index Filter: ((tenk1.tenthous = 1) OR (tenk1.tenthous = 3) OR\n(tenk1.tenthous = 42))\n Rows Removed by Index Recheck: 967\n Filter: ((tenk1.tenthous = 1) OR (tenk1.tenthous = 3) OR\n(tenk1.tenthous = 42))\n Buffers: shared hit=7\n Planning Time: 0.211 ms\n Execution Time: 0.722 ms\n(9 rows)\n\nWhich is just the 7 buffers ...\n\nDid I do something wrong?\n\n> \n> Sure, it's difficult to imagine a very general model that captures\n> this sort of risk, in the general case. But you don't need a degree in\n> actuarial science to understand that it's inherently a bad idea to\n> juggle chainsaws -- no matter what your general risk tolerance happens\n> to be. Some things you just don't do.\n> \n\nOh come on. I've been juggling chainsaws (lit on fire!) since I was a\nlittle boy! There's nothing risky about it.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 3 Aug 2023 13:20:55 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 8/3/23 03:32, Peter Geoghegan wrote:\n> On Wed, Aug 2, 2023 at 6:48 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> How come we don't know that until the execution time? Surely when\n>> building the paths/plans, we match the clauses to the index keys, no? Or\n>> are you saying that just having a scan key is not enough for it to be\n>> \"access predicate\"?\n> \n> In principle we can and probably should recognize the difference\n> between \"Access Predicates\" and \"Index Filter Predicates\" much earlier\n> on, during planning. Probably when index paths are first built.\n> \n> It seems quite likely that there will turn out to be at least 2 or 3\n> reasons to do this. The EXPLAIN usability issue might be enough of a\n> reason on its own, though.\n> \n\nOK\n\n>> Anyway, this patch is mostly about \"Index Cond\" mixing two types of\n>> predicates. But the patch is really about \"Filter\" predicates - moving\n>> some of them from table to index. So quite similar to the \"index filter\n>> predicates\" except that those are handled by the index AM.\n> \n> I understand that that's how the patch is structured. It is\n> nevertheless possible (as things stand) that the patch will make the\n> planner shift from a plan that uses \"Access Predicates\" to the maximum\n> extent possible when scanning a composite index, to a similar plan\n> that has a similar index scan, for the same index, but with fewer\n> \"Access Predicates\" in total. 
In effect, the patched planner will swap\n> one type of predicate for another type because doing so enables the\n> executor to scan an index once instead of scanning it several times.\n> \n\nThat seems very much like something the costing is meant to handle, no?\nI mean, surely \"access predicate\" and \"filter\" should affect the cost\ndifferently, with \"filter\" being more expensive (and table filter being\nmore expensive than index filter).\n\nI was not sure why the patch would affect the plan choice, as it does\nnot really affect the index predicates (passed to the AM) at all. But I\nthink I get the point now - the thing is in having two different index\npaths, like:\n\n PATH #1: access predicates (A,B)\n PATH #2: access predicate A, index filter B\n\nAFAICS the assumption is that path #1 should be better, as it has two\nproper access predicates. But maybe if we add another condition C, it\nmight end up like this:\n\n PATH #1: access predicates (A,B), table filter C\n PATH #2: access predicate A, index filter (B,C)\n\nand #2 will end up winning.\n\nI still think this seems more like a costing issue (and I'd guess we may\nalready have similar cases for index-only scans).\n\nIMO we can either consider the different predicate types during costing.\nSure, then we have the usual costing risks, but that's expected.\n\nOr we could just ignore this during costing entirely (and ditch the\ncosting from the patch). Then the cost doesn't change, and we don't have\nany new risks.\n\n> I don't dispute the fact that this can only happen when the planner\n> believes (with good reason) that the expected cost will be lower. But\n> I maintain that there is a novel risk to be concerned about, which is\n> meaningfully distinct from the general risk of regressions that comes\n> from making just about any change to the planner. 
The important\n> principle here is that we should have a strong bias in the direction\n> of making quals into true \"Access Predicates\" whenever practical.\n> \n> Yeah, technically the patch doesn't directly disrupt how existing\n> index paths get generated. But it does have the potential to disrupt\n> it indirectly, by providing an alternative very-similar index path\n> that's likely to outcompete the existing one in these cases. I think\n> that we should have just one index path that does everything well\n> instead.\n> \n\nYeah, I think that's the scenario I described above ...\n\n>> But differentiating between access and filter predicates (at the index\n>> AM level) seems rather independent of what this patch aims to do.\n> \n> My concern is directly related to the question of \"access predicates\n> versus filter predicates\", and the planner's current ignorance on\n> which is which. The difference may not matter too much right now, but\n> ISTM that your patch makes it matter a lot more. And so in that\n> indirect sense it does seem relevant.\n> \n\nI'm not sure my patch makes it matter a lot more. Yes, moving a filter\nfrom the table to the index may lower the scan cost, but that can happen\nfor a lot of other reasons ...\n\n> The planner has always had a strong bias in the direction of making\n> clauses that can be index quals into index quals, rather than filter\n> predicates. It makes sense to do that even when they aren't very\n> selective. This is a similar though distinct principle.\n> \n> It's awkward to discuss the issue, since we don't really have official\n> names for these things just yet (I'm just going with what Markus calls\n> them for now). And because there is more than one type of \"index\n> filter predicate\" in play with the patch (namely those in the index\n> AM, and those in the index scan executor node). 
But my concern boils\n> down to access predicates being strictly better than equivalent index\n> filter predicates.\n> \n\nI like the discussion, but it feels a bit abstract (and distant from\nwhat the patch aimed to do) and I have trouble turning it into something\nactionable.\n\n>> I'm not following. Why couldn't there be some second-order effects?\n>> Maybe it's obvious / implied by something said earlier, but I think\n>> every time we decide between multiple choices, there's a risk of picking\n>> wrong.\n> \n> Naturally, I agree that some amount of risk is inherent. I believe\n> that the risk I have identified is qualitatively different to the\n> standard, inherent kind of risk.\n> \n>> Anyway, is this still about this patch or rather about your SAOP patch?\n> \n> It's mostly not about my SAOP patch. It's just that my SAOP patch will\n> do exactly the right thing in this particular case (at least once\n> combined with Alena Rybakina's OR-to-SAOP transformation patch). It is\n> therefore quite clear that a better plan is possible in principle.\n> Clearly we *can* have the benefit of only one single index scan (i.e.\n> no BitmapOr node combining multiple similar index scans), without\n> accepting any novel new risk to get that benefit. We should just have\n> one index path that does it all, while avoiding duplicative index\n> paths that embody a distinction that really shouldn't exist -- it\n> should be dealt with at runtime instead.\n> \n\nDoes this apply to the index scan vs. index-only scans too? That is, do\nyou think we should have just one index-scan path, doing index-only\nstuff when possible?\n\n>> True. IMHO the danger of \"index filter\" predicates is that people assume\n>> index conditions eliminate large parts of the index - which is not\n>> necessarily the case here. If we can improve this, cool.\n> \n> I think that we'd find a way to use the same information in the\n> planner, too. It's not just users that should care about the\n> difference. 
And I don't think that it's particularly likely to be\n> limited to SAOP/MDAM stuff. As Markus said, we're talking about an\n> important general principle here.\n> \n\nIf we want/need to consider this during costing, this seems necessary.\n\n>> But again, this is not what this patch does, right? It's about moving\n>> stuff from \"table filter\" to \"index filter\". And those clauses were not\n>> matched to the index AM at all, so it's not really relevant to the\n>> discussion about different subtypes of predicates.\n> \n> I understand that what I've said is not particularly helpful. At least\n> not on its own. You quite naturally won't want to tie the fate of this\n> patch to my SAOP patch, which is significantly more complicated. I do\n> think that my concern about this being a novel risk needs to be\n> carefully considered.\n> \n> Maybe it's possible to address my concern outside of the context of my\n> own SAOP patch. That would definitely be preferable all around. But\n> \"access predicates versus filter predicates\" seems important and\n> relevant, either way.\n> \n\nIf we can form some sort of plan what needs to be done (both for my\npatch and for the SAOP patch), I'm willing to work on it ... But it's\nnot quite clear to me what the requirements are.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 3 Aug 2023 13:57:10 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Thu, Aug 3, 2023 at 4:20 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Which is just the 7 buffers ...\n>\n> Did I do something wrong?\n\nI think that it might have something to do with your autovacuum\nsettings. Note that the plan that you've shown for the master branch\nisn't the same one that appears in\nsrc/test/regress/expected/create_index.out for the master branch --\nthat plan (the BitmapOr plan) was my baseline case for master.\n\nThat said, I am a little surprised that you could ever get the plan\nthat you showed for master (without somehow unnaturally forcing it).\nIt's almost the same plan that your patch gets, but much worse. Your\npatch can use an index filter, but master uses a table filter instead.\n\nWhile the plan used by the patch is risky in the way that I described,\nthe plan you saw for master is just horrible. I mean, it's not even\nrisky -- it seems almost certain to lose. Whereas at least the plan\nfrom the patch really is cheaper than the BitmapOr plan (the master\nbranch plan from create_index.out) on average.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 3 Aug 2023 09:47:51 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On 8/3/23 18:47, Peter Geoghegan wrote:\n> On Thu, Aug 3, 2023 at 4:20 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> Which is just the 7 buffers ...\n>>\n>> Did I do something wrong?\n> \n> I think that it might have something to do with your autovacuum\n> settings. Note that the plan that you've shown for the master branch\n> isn't the same one that appears in\n> src/test/regress/expected/create_index.out for the master branch --\n> that plan (the BitmapOr plan) was my baseline case for master.\n> \n> That said, I am a little surprised that you could ever get the plan\n> that you showed for master (without somehow unnaturally forcing it).\n> It's almost the same plan that your patch gets, but much worse. Your\n> patch can use an index filter, but master uses a table filter instead.\n> \n\nWell I did force it - I thought we're talking about regular index scans,\nso I disabled bitmap scans. Without doing that I get the BitmapOr plan\nlike you.\n\nHowever, with the patch I get this behavior (starting from a \"fresh\"\nstate right after \"make installcheck\")\n\n QUERY PLAN\n------------------------------------------------------------------------\n Index Scan using tenk1_thous_tenthous on tenk1\n (cost=0.29..8.38 rows=1 width=244)\n (actual time=0.033..0.036 rows=1 loops=1)\n Index Cond: (thousand = 42)\n Index Filter: ((tenthous = 1) OR (tenthous = 3) OR (tenthous = 42))\n Rows Removed by Index Recheck: 9\n Filter: ((tenthous = 1) OR (tenthous = 3) OR (tenthous = 42))\n Buffers: shared read=4\n Planning:\n Buffers: shared hit=119 read=32\n Planning Time: 0.673 ms\n Execution Time: 0.116 ms\n(10 rows)\n\ninsert into tenk1 (thousand, tenthous) select 42, i from\ngenerate_series(43, 1000) i;\n\n QUERY PLAN\n------------------------------------------------------------------------\n Index Scan using tenk1_thous_tenthous on tenk1\n (cost=0.29..8.38 rows=1 width=244)\n (actual time=0.038..0.605 rows=1 loops=1)\n Index Cond: (thousand = 42)\n 
Index Filter: ((tenthous = 1) OR (tenthous = 3) OR (tenthous = 42))\n Rows Removed by Index Recheck: 967\n Filter: ((tenthous = 1) OR (tenthous = 3) OR (tenthous = 42))\n Buffers: shared hit=336\n Planning Time: 0.114 ms\n Execution Time: 0.632 ms\n(8 rows)\n\nanalyze tenk1;\n\n QUERY PLAN\n------------------------------------------------------------------------\n Bitmap Heap Scan on tenk1 (cost=12.89..16.91 rows=1 width=244)\n (actual time=0.016..0.019 rows=1 loops=1)\n Recheck Cond: (((thousand = 42) AND (tenthous = 1)) OR\n ((thousand = 42) AND (tenthous = 3)) OR\n ((thousand = 42) AND (tenthous = 42)))\n Heap Blocks: exact=1\n Buffers: shared hit=7\n -> BitmapOr ...\n Buffers: shared hit=6\n -> Bitmap Index Scan on tenk1_thous_tenthous ...\n Index Cond: ((thousand = 42) AND (tenthous = 1))\n Buffers: shared hit=2\n -> Bitmap Index Scan on tenk1_thous_tenthous ...\n Index Cond: ((thousand = 42) AND (tenthous = 3))\n Buffers: shared hit=2\n -> Bitmap Index Scan on tenk1_thous_tenthous ...\n Index Cond: ((thousand = 42) AND (tenthous = 42))\n Buffers: shared hit=2\n Planning Time: 0.344 ms\n Execution Time: 0.044 ms\n(19 rows)\n\nvacuum analyze tenk1;\n\n QUERY PLAN\n------------------------------------------------------------------------\n Bitmap Heap Scan on tenk1 (cost=12.89..16.91 rows=1 width=244)\n (actual time=0.017..0.019 rows=1 loops=1)\n Recheck Cond: (((thousand = 42) AND (tenthous = 1)) OR\n ((thousand = 42) AND (tenthous = 3)) OR\n ((thousand = 42) AND (tenthous = 42)))\n Heap Blocks: exact=1\n Buffers: shared hit=7\n -> BitmapOr ...\n Buffers: shared hit=6\n -> Bitmap Index Scan on tenk1_thous_tenthous ...\n Index Cond: ((thousand = 42) AND (tenthous = 1))\n Buffers: shared hit=2\n -> Bitmap Index Scan on tenk1_thous_tenthous ...\n Index Cond: ((thousand = 42) AND (tenthous = 3))\n Buffers: shared hit=2\n -> Bitmap Index Scan on tenk1_thous_tenthous ...\n Index Cond: ((thousand = 42) AND (tenthous = 42))\n Buffers: shared hit=2\n Planning Time: 0.277 
ms\n Execution Time: 0.046 ms\n(19 rows)\n\nset enable_bitmapscan = off;\n\n QUERY PLAN\n------------------------------------------------------------------------\n Index Scan using tenk1_thous_tenthous on tenk1\n (cost=0.29..23.57 rows=1 width=244)\n (actual time=0.042..0.235 rows=1 loops=1)\n Index Cond: (thousand = 42)\n Index Filter: ((tenthous = 1) OR (tenthous = 3) OR (tenthous = 42))\n Rows Removed by Index Recheck: 967\n Filter: ((tenthous = 1) OR (tenthous = 3) OR (tenthous = 42))\n Buffers: shared hit=7\n Planning Time: 0.119 ms\n Execution Time: 0.261 ms\n(8 rows)\n\nSo yeah, it gets\n\n Buffers: shared hit=336\n\nright after the insert, but it seems to be mostly about visibility map\n(and having to fetch heap tuples), as it disappears after vacuum.\n\nThere seems to be some increase in cost, so we switch back to the bitmap\nplan. I haven't looked into that, but I guess there's either some thinko\nin the costing change, or maybe it's due to correlation.\n\n> While the plan used by the patch is risky in the way that I described,\n> the plan you saw for master is just horrible. I mean, it's not even\n> risky -- it seems almost certain to lose. Whereas at least the plan\n> from the patch really is cheaper than the BitmapOr plan (the master\n> branch plan from create_index.out) on average.\n> \n\nNot sure. I'm a bit confused about what exactly is so risky on the plan\nproduced with the patch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 3 Aug 2023 20:17:05 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Thu, Aug 3, 2023 at 4:57 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> > I understand that that's how the patch is structured. It is\n> > nevertheless possible (as things stand) that the patch will make the\n> > planner shift from a plan that uses \"Access Predicates\" to the maximum\n> > extent possible when scanning a composite index, to a similar plan\n> > that has a similar index scan, for the same index, but with fewer\n> > \"Access Predicates\" in total. In effect, the patched planner will swap\n> > one type of predicate for another type because doing so enables the\n> > executor to scan an index once instead of scanning it several times.\n> >\n>\n> That seems very much like something the costing is meant to handle, no?\n> I mean, surely \"access predicate\" and \"filter\" should affect the cost\n> differently, with \"filter\" being more expensive (and table filter being\n> more expensive than index filter).\n\nI'm not 100% sure that it's not a costing issue, but intuitively it\ndoesn't seem like one.\n\nAs Goetz Graefe once said, \"choice is confusion\". It seems desirable\nto have fewer, better index paths. This is possible whenever there is\na way to avoid the index paths that couldn't possibly be better in the\nfirst place. Though we must also make sure that there is no real\ndownside -- possibly by teaching the executor to behave adaptively\ninstead of needlessly trusting what the planner says. Turning a plan\ntime decision into a runtime decision seems strictly better.\n\nObviously the planner will always need to be trusted to a significant\ndegree (especially for basic things like join order), but why not\navoid it when we can avoid it without any real downsides? 
Having lots\nof slightly different index paths with slightly different types of\nlogically equivalent predicates seems highly undesirable, and quite\navoidable.\n\nISTM that it should be possible to avoid generating some of these\nindex paths based on static rules that assume that:\n\n1. An \"access predicate\" is always strictly better than an equivalent\n\"index filter predicate\" (for any definition of \"index filter\npredicate\" you can think of).\n2. An \"Index Filter: \" is always strictly better than an equivalent\n\"Filter: \" (i.e. table filter).\n\nThe first item is what I've been going on about, of course. The second\nitem is the important principle behind your patch -- and one that I\nalso agree with. I don't see any contradictions here -- these two\nprinciples are compatible. I think that we can have it both ways.\n\n> AFAICS the assumption is that path #1 should be better, as it has two\n> proper access predicates. But maybe if we add another condition C, it\n> might end up like this:\n>\n> PATH #1: access predicates (A,B), table filter C\n> PATH #2: access predicate A, index filter (B,C)\n>\n> and #2 will end up winning.\n\nWhy wouldn't we expect there to also be this path:\n\nPATH #3: access predicates (A,B), index filter C\n\nAnd why wouldn't we also expect this other path to always be better?\nSo much better that we don't even need to bother generating PATH #1\nand PATH #2 in the first place, even?\n\nRight now there are weird reasons why it might not be so -- strange\ninteractions with things like BitmapOr nodes that could make either\nPATH #1 or PATH #2 look slightly cheaper. But that doesn't seem\nparticularly fundamental to me. 
We should probably just avoid those\nplan shapes that have the potential to make PATH #1 and PATH #2\nslightly cheaper, due only to perverse interactions.\n\n> I like the discussion, but it feels a bit abstract (and distant from\n> what the patch aimed to do) and I have trouble turning it into something\n> actionable.\n\nI think that I have gotten a lot out of this discussion -- it has made\nmy thinking about this stuff more rigorous. I really appreciate that.\n\n> Does this apply to the index scan vs. index-only scans too? That is, do\n> you think we should have just one index-scan path, doing index-only\n> stuff when possible?\n\nI think so, yes. But index-only scans don't appear under BitmapOr\nnodes, so right now I can't think of an obvious way of demonstrating\nthat this is true. Maybe it accidentally doesn't come up with\nindex-only scans in practice, but the same underlying principles\nshould be just as true.\n\n> If we can form some sort of plan what needs to be done (both for my\n> patch and for the SAOP patch), I'm willing to work on it ... But it's\n> not quite clear to me what the requirements are.\n\nI do hope to have more concrete proposals soon. Thanks for being patient.\n\nFor what it's worth, I actually think that there is a good chance that\nI'll end up relying on what you've done here to make certain things I\nwant to do with the SAOP patch okay. It would be rather convenient to\nbe able to handle some of the SAOP safety issues without needing any\ntable filters (just index filters), in some corner cases. I think that\nwhat you're doing here makes a lot of sense. FWIW, I am already\npersonally invested in the success of your patch.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 3 Aug 2023 11:50:41 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Thu, Aug 3, 2023 at 11:17 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Not sure. I'm a bit confused about what exactly is so risky on the plan\n> produced with the patch.\n\nIt's all about the worst case. In the scenarios that I'm concerned\nabout, we can be quite sure that the saving from not using a BitmapOr\nwill be fairly low -- the cost of not having to repeat the same index\npage accesses across several similar index scans is, at best, some\nsmall multiple of the would-be number of index scans that the BitmapOr\nplan gets. We can be certain that the possible benefits are fixed and\nlow. This is always true; presumably the would-be BitmapOr plan can\nnever have all that many index scans. And we always know how many\nindex scans a BitmapOr plan would use up front.\n\nOn the other hand, the possible downsides have no obvious limit. So\neven if we're almost certain to win on average, we only have to be\nunlucky once to lose all we gained before that point. As a general\nrule, we want the index AM to have all the context required to\nterminate its scan at the earliest possible opportunity. This is\nenormously important in the worst case.\n\nIt's easier for me to make this argument because I know that we don't\nreally need to make any trade-off here. But even if that wasn't the\ncase, I'd probably arrive at the same general conclusion.\n\nImportantly, it isn't possible to make a similar argument that works\nin the opposite direction -- IMV that's the difference between this\nflavor of riskiness, and the inevitable riskiness that comes with any\nplanner change. In other words, your patch isn't going to win by an\nunpredictably high amount. Not in the specific scenarios that I'm\nfocussed on here, with a BitmapOr + multiple index scans getting\ndisplaced.\n\nThe certainty about the upside is just as important as the uncertainty\nabout the downside. The huge asymmetry matters, and is fairly\natypical. 
If, somehow, there was less certainty about the possible\nupside, then my argument wouldn't really work.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 3 Aug 2023 12:21:33 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 8/3/23 20:50, Peter Geoghegan wrote:\n> On Thu, Aug 3, 2023 at 4:57 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>> I understand that that's how the patch is structured. It is\n>>> nevertheless possible (as things stand) that the patch will make the\n>>> planner shift from a plan that uses \"Access Predicates\" to the maximum\n>>> extent possible when scanning a composite index, to a similar plan\n>>> that has a similar index scan, for the same index, but with fewer\n>>> \"Access Predicates\" in total. In effect, the patched planner will swap\n>>> one type of predicate for another type because doing so enables the\n>>> executor to scan an index once instead of scanning it several times.\n>>>\n>>\n>> That seems very much like something the costing is meant to handle, no?\n>> I mean, surely \"access predicate\" and \"filter\" should affect the cost\n>> differently, with \"filter\" being more expensive (and table filter being\n>> more expensive than index filter).\n> \n> I'm not 100% sure that it's not a costing issue, but intuitively it\n> doesn't seem like one.\n> \n> As Goetz Graefe once said, \"choice is confusion\". It seems desirable\n> to have fewer, better index paths. This is possible whenever there is\n> a way to avoid the index paths that couldn't possibly be better in the\n> first place. Though we must also make sure that there is no real\n> downside -- possibly by teaching the executor to behave adaptively\n> instead of needlessly trusting what the planner says. Turning a plan\n> time decision into a runtime decision seems strictly better.\n> \n\nSure, having more choices means a risk of making mistakes. But does\nsimply postponing the choices to runtime actually solve this? Even if\nyou're able to make a perfect choice at that point, it only works for\nthat particular path type (e.g. index scan). 
You still need to cost it\nsomehow, to decide which path type to pick ...\n\nPerhaps my mental model of what you intend to do is wrong?\n\n> Obviously the planner will always need to be trusted to a significant\n> degree (especially for basic things like join order), but why not\n> avoid it when we can avoid it without any real downsides? Having lots\n> of slightly different index paths with slightly different types of\n> logically equivalent predicates seems highly undesirable, and quite\n> avoidable.\n> \n> ISTM that it should be possible to avoid generating some of these\n> index paths based on static rules that assume that:\n> \n> 1. An \"access predicate\" is always strictly better than an equivalent\n> \"index filter predicate\" (for any definition of \"index filter\n> predicate\" you can think of).\n\nYes, probably.\n\n> 2. An \"Index Filter: \" is always strictly better than an equivalent\n> \"Filter: \" (i.e. table filter).\n\nNot sure about this. As I explained earlier, I think it needs to\nconsider the cost/selectivity of the predicate, and fraction of\nallvisible pages. But yes, it's a static decision.\n\n> \n> The first item is what I've been going on about, of course. The second\n> item is the important principle behind your patch -- and one that I\n> also agree with. I don't see any contradictions here -- these two\n> principles are compatible. I think that we can have it both ways.\n> \n>> AFAICS the assumption is that path #1 should be better, as it has two\n>> proper access predicates. But maybe if we add another condition C, it\n>> might end up like this:\n>>\n>> PATH #1: access predicates (A,B), table filter C\n>> PATH #2: access predicate A, index filter (B,C)\n>>\n>> and #2 will end up winning.\n> \n> Why wouldn't we expect there to also be this path:\n> \n> PATH #3: access predicates (A,B), index filter C\n> \n\nWhy? 
Maybe the index doesn't have all the columns needed for condition\nC, in which case it has to be evaluated as a table filter.\n\n(I didn't say it explicitly, but this assumes those paths are not for\nthe same index. If they were, then PATH #3 would have to exist too.)\n\n> And why wouldn't we also expect this other path to always be better?\n> So much better that we don't even need to bother generating PATH #1\n> and PATH #2 in the first place, even?\n> \n\nYes, I agree that path would likely be superior to the other paths.\n\n> Right now there are weird reasons why it might not be so -- strange\n> interactions with things like BitmapOr nodes that could make either\n> PATH #1 or PATH #2 look slightly cheaper. But that doesn't seem\n> particularly fundamental to me. We should probably just avoid those\n> plan shapes that have the potential to make PATH #1 and PATH #2\n> slightly cheaper, due only to perverse interactions.\n> \n>> I like the discussion, but it feels a bit abstract (and distant from\n>> what the patch aimed to do) and I have trouble turning it into something\n>> actionable.\n> \n> I think that I have gotten a lot out of this discussion -- it has made\n> my thinking about this stuff more rigorous. I really appreciate that.\n> \n\nI feel a bit like the rubber duck from [1], but I'm OK with that ;-)\n\n>> Does this apply to the index scan vs. index-only scans too? That is, do\n>> you think we should we have just one index-scan path, doing index-only\n>> stuff when possible?\n> \n> I think so, yes. But index-only scans don't appear under BitmapOr\n> nodes, so right now I can't think of an obvious way of demonstrating\n> that this is true. Maybe it accidentally doesn't come up with\n> index-only scans in practice, but the same underlying principles\n> should be just as true.\n> \n>> If we can form some sort of plan what needs to be done (both for my\n>> patch and for the SAOP patch), I'm willing to work on it ... 
But it's\n>> not quite clear to me what the requirements are.\n> \n> I do hope to have more concrete proposals soon. Thanks for being patient.\n> \n> For what it's worth, I actually think that there is a good chance that\n> I'll end up relying on what you've done here to make certain things I\n> want to do with the SAOP patch okay. It would be rather convenient to\n> be able to handle some of the SAOP safety issues without needing any\n> table filters (just index filters), in some corner cases. I think that\n> what you're doing here makes a lot of sense. FWIW, I am already\n> personally invested in the success of your patch.\n> \n\nGreat! I'm happy those bits are likely useful for what you're doing.\n\nregards\n\n\n[1] https://en.wikipedia.org/wiki/Rubber_duck_debugging\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 3 Aug 2023 23:46:17 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 8/3/23 21:21, Peter Geoghegan wrote:\n> On Thu, Aug 3, 2023 at 11:17 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> Not sure. I'm a bit confused about what exactly is so risky on the plan\n>> produced with the patch.\n> \n> It's all about the worst case. In the scenarios that I'm concerned\n> about, we can be quite sure that the saving from not using a BitmapOr\n> will be fairly low -- the cost of not having to repeat the same index\n> page accesses across several similar index scans is, at best, some\n> small multiple of the would-be number of index scans that the BitmapOr\n> plan gets. We can be certain that the possible benefits are fixed and\n> low. This is always true; presumably the would-be BitmapOr plan can\n> never have all that many index scans. And we always know how many\n> index scans a BitmapOr plan would use up front.\n> \n\nWhen you say \"index page accesses\" do you mean accesses to index pages,\nor accesses to heap pages from the index scan?\n\nBecause my patch is all about reducing the heap pages, which are usually\nthe expensive part of the index scan. But you're right the \"index scan\"\nwith index filter may access more index pages, because it has fewer\n\"access predicates\".\n\nI don't quite see that with the tenk1 query we've been discussing (the\nextra buffers were due to non-allvisible heap pages), but I guess that's\npossible.\n\n> On the other hand, the possible downsides have no obvious limit. So\n> even if we're almost certain to win on average, we only have to be\n> unlucky once to lose all we gained before that point. As a general\n> rule, we want the index AM to have all the context required to\n> terminate its scan at the earliest possible opportunity. This is\n> enormously important in the worst case.\n> \n\nYeah, I agree there's some asymmetry in the risk/benefit. It's not\nunlike e.g. seqscan vs. 
index scan, where the index scan can't really\nsave more than what the seqscan costs, but it can get (almost)\narbitrarily expensive.\n\n> It's easier for me to make this argument because I know that we don't\n> really need to make any trade-off here. But even if that wasn't the\n> case, I'd probably arrive at the same general conclusion.\n> \n> Importantly, it isn't possible to make a similar argument that works\n> in the opposite direction -- IMV that's the difference between this\n> flavor of riskiness, and the inevitable riskiness that comes with any\n> planner change. In other words, your patch isn't going to win by an\n> unpredictably high amount. Not in the specific scenarios that I'm\n> focussed on here, with a BitmapOr + multiple index scans getting\n> displaced.\n> \n\nTrue. It probably can't beat BitmapOr plan if it means moving access\npredicate into index filter (or even worse a table filter).\n\n> The certainty about the upside is just as important as the uncertainty\n> about the downside. The huge asymmetry matters, and is fairly\n> atypical. If, somehow, there was less certainty about the possible\n> upside, then my argument wouldn't really work.\n> \n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 4 Aug 2023 00:04:35 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Thu, Aug 3, 2023 at 2:46 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Sure, having more choices means a risk of making mistakes. But does\n> simply postponing the choices to runtime actually solves this?\n\nIt solves that one problem, yes. This is particularly important in\ncases where we would otherwise get truly pathological performance --\nnot just mediocre or bad performance. Most of the time, mediocre\nperformance isn't such a big deal.\n\nHaving a uniform execution strategy for certain kinds of index scans\nis literally guaranteed to beat a static strategy in some cases. For\nexample, with some SAOP scans (with my patch), we'll have to skip lots\nof the index, and then scan lots of the index -- just because of a\nbunch of idiosyncratic details that are almost impossible to predict\nusing statistics. Such an index scan really shouldn't be considered\n\"moderately skippy\". It is not the average of two opposite things --\nit is more like two separate things that are opposites.\n\nIt's true that even this \"moderately skippy\" case needs to be costed.\nBut if we can entirely eliminate variation that really looks like\nnoise, it should be more likely that we'll get the cheapest plan.\nCosting may not be any easier, but getting the cheapest plan might be.\n\n> > 1. An \"access predicate\" is always strictly better than an equivalent\n> > \"index filter predicate\" (for any definition of \"index filter\n> > predicate\" you can think of).\n>\n> Yes, probably.\n>\n> > 2. An \"Index Filter: \" is always strictly better than an equivalent\n> > \"Filter: \" (i.e. table filter).\n>\n> Not sure about this. As I explained earlier, I think it needs to\n> consider the cost/selectivity of the predicate, and fraction of\n> allvisible pages. But yes, it's a static decision.\n\nWhat I said is limited to \"equivalent\" predicates. If it's just not\npossible to get an \"access predicate\" at all, then my point 1 doesn't\napply. 
Similarly, if it just isn't possible to get an \"Index Filter\"\n(only a table filter), then my point #2 doesn't apply.\n\nThis does mean that there could still be competition between multiple\nindex paths for the same composite index, but I have no objections to\nthat -- it makes sense to me because it isn't duplicative in the way\nthat I'm concerned about. It just isn't possible to delay anything\nuntil run time in this scenario, so nothing that I've said should\napply.\n\n> (I didn't say it explicitly, but this assumes those paths are not for\n> the same index. If they were, then PATH #3 would have to exist too.)\n\nThat changes everything, then. If they're completely different indexes\nthen nothing I've said should apply. I can't think of a way to avoid\nmaking an up-front commitment to that in the planner (I'm thinking of\nfar more basic things than that).\n\n> I feel a bit like the rubber duck from [1], but I'm OK with that ;-)\n\nNot from my point of view. Besides, even when somebody says that they\njust don't understand what I'm saying at all (which wasn't ever fully\nthe case here), that is often useful feedback in itself.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 3 Aug 2023 16:29:31 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Thu, Aug 3, 2023 at 3:04 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> When you say \"index page accesses\" do you mean accesses to index pages,\n> or accesses to heap pages from the index scan?\n\nYes, that's exactly what I mean. Note that that's the dominant cost\nfor the original BitmapOr plan.\n\nAs I said upthread, the original BitmapOr plan has 7 buffer hits. The\nbreakdown is 1 single heap page access, 3 root page accesses, and 3\nleaf page accesses. There is only 1 heap page access because only 1\nout of the 3 index scans that feed into the BitmapOr actually end up\nfinding any matching rows in the index.\n\nIn short, the dominant cost here is index page accesses. It's a\nparticularly good case for my SAOP patch!\n\n> Because my patch is all about reducing the heap pages, which are usually\n> the expensive part of the index scan. But you're right the \"index scan\"\n> with index filter may access more index pages, because it has fewer\n> \"access predicates\".\n\nThe fact is that your patch correctly picks the cheapest plan, which\nis kinda like a risky version of the plan that my SAOP patch would\npick -- it is cheaper for the very same reason. I understand that\nthat's not your intention at all, but this is clearly what happened.\nThat's what I meant by \"weird second order effects\".\n\nTo me, it really does kinda look like your patch accidentally\ndiscovered a plan that's fairly similar to the plan that my SAOP patch\nwould have found by design! Perhaps I should have been clearer on this\npoint earlier. (If you're only now seeing this for yourself for the\nfirst time, then...oops. No wonder you were confused about which\npatch it was I was going on about!)\n\n> I don't quite see that with the tenk1 query we've been discussing (the\n> extra buffers were due to non-allvisible heap pages), but I guess that's\n> possible.\n\nThe extra buffer hits occur because I made them occur by inserting new\ntuples where thousand = 42. 
Obviously, I did it that way because I had\na point that I wanted to make. Obviously, there wouldn't have been any\nnotable regression from your patch at all if I had (say) inserted\ntuples where thousand = 43 instead. (Not for the original \"42\" query,\nat least.)\n\nThat's part of the problem, as I see it. Local changes like that can\nhave outsized impact on individual queries, even though there is no\ninherent reason to expect it. How can statistics reliably guide the\nplanner here? Statistics are supposed to be a summary of the whole\nattribute, that allow us to make various generalizations during\nplanning. But this plan leaves us sensitive to relatively small\nchanges in one particular \"thousand\" grouping, with potentially\noutsized impact. And, this can happen very suddenly, because it's so\n\"local\".\n\nMaking this plan perform robustly just doesn't seem to be one of the\nthings that statistics can be expected to help us with very much.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 3 Aug 2023 17:08:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Thu, Aug 3, 2023 at 3:04 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Because my patch is all about reducing the heap pages, which are usually\n> the expensive part of the index scan. But you're right the \"index scan\"\n> with index filter may access more index pages, because it has fewer\n> \"access predicates\".\n\nIt's not so much the unnecessary index page accesses that bother me.\nAt least I didn't push that aspect very far when I constructed my\nadversarial test case -- index pages were only a small part of the\noverall problem. (I mean the problem at runtime, in the executor. The\nplanner expected to save a small number of leaf page accesses, which\nwas kinda, sorta the problem there -- though the planner might have\ntechnically still been correct about that, and can't have been too far\nwrong in any case.)\n\nThe real problem that my adversarial case seemed to highlight was a\nproblem of extra heap page accesses. The tenk1 table's VM is less than\none page in size, so how could it have been VM buffer hits? Sure,\nthere were no \"table filters\" involved -- only \"index filters\". But\neven \"index filters\" require heap access when the page isn't marked\nall-visible in the VM.\n\nThat problem just cannot happen with a similar plan that eliminates\nthe same index tuples within the index AM proper (the index quals\ndon't even have to be \"access predicates\" for this to apply, either).\nOf course, we never need to check the visibility of index tuples just\nto be able to consider eliminating them via nbtree search scan\nkeys/index quals -- and so there is never any question of heap/VM\naccess for tuples that don't pass index quals. 
Not so for \"index\nfilters\", where there is at least some chance of accessing the heap\nproper just to be able to eliminate non-matches.\n\nWhile I think that it makes sense to assume that \"index filters\" are\nstrictly better than \"table filters\" (assuming they're directly\nequivalent in that they contain the same clause), they're not\n*reliably* any better. So \"index filters\" are far from being anywhere\nnear as good as an equivalent index qual (AFAICT we should always\nassume that that's true). This is true of index quals generally --\nthis advantage is *not* limited to \"access predicate index quals\". (It\nis most definitely not okay for \"index filters\" to displace equivalent\n\"access predicate index quals\", but it's also not really okay to allow\nthem to displace equivalent \"index filter predicate index quals\" --\nthe latter case is less bad, but AFAICT they both basically aren't\nacceptable \"substitutions\".)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 3 Aug 2023 19:07:29 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 8/4/23 02:08, Peter Geoghegan wrote:\n> On Thu, Aug 3, 2023 at 3:04 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> When you say \"index page accesses\" do you mean accesses to index pages,\n>> or accesses to heap pages from the index scan?\n> \n> Yes, that's exactly what I mean. Note that that's the dominant cost\n> for the original BitmapOr plan.\n> \n\nWell, I presented multiple options, so \"yes\" doesn't really clarify\nwhich of them applies. But my understanding is you meant the index pages\naccesses.\n\n> As I said upthread, the original BitmapOr plan has 7 buffer hits. The\n> breakdown is 1 single heap page access, 3 root page accesses, and 3\n> leaf page accesses. There is only 1 heap page access because only 1\n> out of the 3 index scans that feed into the BitmapOr actually end up\n> finding any matching rows in the index.\n> \n> In short, the dominant cost here is index page accesses. It's a\n> particularly good case for my SAOP patch!\n> \n\nUnderstood. It makes sense considering the SAOP patch is all about\noptimizing the index walk / processing fewer pages.\n\n>> Because my patch is all about reducing the heap pages, which are usually\n>> the expensive part of the index scan. But you're right the \"index scan\"\n>> with index filter may access more index pages, because it has fewer\n>> \"access predicates\".\n> \n> The fact is that your patch correctly picks the cheapest plan, which\n> is kinda like a risky version of the plan that my SAOP patch would\n> pick -- it is cheaper for the very same reason. I understand that\n> that's not your intention at all, but this is clearly what happened.\n> That's what I meant by \"weird second order effects\".\n> \n> To me, it really does kinda look like your patch accidentally\n> discovered a plan that's fairly similar to the plan that my SAOP patch\n> would have found by design! Perhaps I should have been clearer on this\n> point earlier. 
(If you're only now seeing this for yourself for the\n> first time, then...oops. No wonder you were confused about which\n> patch it was I was going on about!)\n> \n\nThanks. I think I now see the relationship between the plan with my\npatch and your SAOP patch. It's effectively very similar, except that\nthe responsibilities are split a bit differently. With my patch the OR\nclause happens outside AM, while the SAOP patch would do that in the AM\nand also use that to walk the index more efficiently.\n\n>> I don't quite see that with the tenk1 query we've been discussing (the\n>> extra buffers were due to non-allvisible heap pages), but I guess that's\n>> possible.\n> \n> The extra buffer hits occur because I made them occur by inserting new\n> tuples where thousand = 42. Obviously, I did it that way because I had\n> a point that I wanted to make. Obviously, there wouldn't have been any\n> notable regression from your patch at all if I had (say) inserted\n> tuples where thousand = 43 instead. (Not for the original \"42\" query,\n> at least.)\n> \n\nRight. I know where the heap accesses come from, but clearly we're able\nto skip those when allvisible=true, and we don't need to scan more index\npages either. But I guess we could make the data more complex to make\nthis part worse (for my patch, while the SAOP would not be affected).\n\n> That's part of the problem, as I see it. Local changes like that can\n> have outsized impact on individual queries, even though there is no\n> inherent reason to expect it. How can statistics reliably guide the\n> planner here? Statistics are supposed to be a summary of the whole\n> attribute, that allow us to make various generalizations during\n> planning. But this plan leaves us sensitive to relatively small\n> changes in one particular \"thousand\" grouping, with potentially\n> outsized impact. 
And, this can happen very suddenly, because it's so\n> \"local\".\n> \n> Making this plan perform robustly just doesn't seem to be one of the\n> things that statistics can be expected to help us with very much.\n> \n\nI agree there certainly are cases where the estimates will be off. This\nis not that different from correlated columns, in fact it's exactly the\nsame issue, I think. But it's also not a novel/unique issue - we should\nprobably do the \"usual\" thing, i.e. plan based on estimates (maybe with\nsome safety margin), and have some fallback strategy at runtime.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 4 Aug 2023 13:47:11 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Fri, Aug 4, 2023 at 4:47 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Well, I presented multiple options, so \"yes\" doesn't really clarify\n> which of them applies. But my understanding is you meant the index pages\n> accesses.\n\nSorry. Your understanding of what I must have meant before was correct\n-- your patch picked that plan because it reduced the number of index\npage accesses significantly. Just like my SAOP patch would have.\n\n> > In short, the dominant cost here is index page accesses. It's a\n> > particularly good case for my SAOP patch!\n> >\n>\n> Understood. It makes sense considering the SAOP patch is all about\n> optimizing the index walk / processing fewer pages.\n\nActually, some of the most compelling cases for my SAOP patch are\nthose involving heap page access savings, which come from the planner\nchanges. Basically, the nbtree/executor changes make certain index\naccess patterns much more efficient. Which is useful in itself, but\noften much more useful as an indirect enabler of avoiding heap page\naccesses by altering other related aspects of a plan in the planner.\nSometimes by replacing table filter quals with index quals (the nbtree\nchanges make nbtree a strictly better place for the quals). You know,\nsomewhat like your patch.\n\nThat's really why I'm so interested in your patch, and its\nrelationship with my own patch, and the BitmapOr issue. If your patch\nenables some tricks that are really quite similar to the tricks that\nmy own patch enables, then delineating which patch does which exact\ntrick when is surely important for both patches.\n\nI actually started out just thinking about index page accesses, before\neventually coming to understand that heap page accesses were also very\nrelevant. Whereas you started out just thinking about heap page\naccesses, and now see some impact from saving on index page accesses.\n\n> Thanks. 
I think I now see the relationship between the plan with my\n> patch and your SAOP patch. It's effectively very similar, except that\n> the responsibilities are split a bit differently. With my patch the OR\n> clause happens outside AM, while the SAOP patch would do that in the AM\n> and also use that to walk the index more efficiently.\n\nRight. That's where my idea of structuring things so that there is\nonly one best choice really comes from.\n\n> I agree there certainly are cases where the estimates will be off. This\n> is not that different from correlated columns, in fact it's exactly the\n> same issue, I think.\n\nIn one way I think you're right that it's the same issue -- if you\njust focus on that one executor node, then it's the same issue. But I\ndon't think it's the same issue in a deeper sense, since this is one\ncase where you simply don't have to accept any risk. We really should\nbe able to just not ever do this, for a limited though important\nsubset of cases involving ORs + indexable operators.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 4 Aug 2023 10:01:29 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On 8/4/23 04:07, Peter Geoghegan wrote:\n> On Thu, Aug 3, 2023 at 3:04 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> Because my patch is all about reducing the heap pages, which are usually\n>> the expensive part of the index scan. But you're right the \"index scan\"\n>> with index filter may access more index pages, because it has fewer\n>> \"access predicates\".\n> \n> It's not so much the unnecessary index page accesses that bother me.\n> At least I didn't push that aspect very far when I constructed my\n> adversarial test case -- index pages were only a small part of the\n> overall problem. (I mean the problem at runtime, in the executor. The\n> planner expected to save a small number of leaf page accesses, which\n> was kinda, sorta the problem there -- though the planner might have\n> technically still been correct about that, and can't have been too far\n> wrong in any case.)\n> \n\nThanks for the clarification, I think I understand better now.\n\nLet me briefly sum my understanding for the two patches:\n\n- The SAOP patch eliminates those heap accesses because it manages to\nevaluate all clauses in the AM, including clauses that would previously\nbe treated as \"table filters\" and evaluated on the heap row.\n\n- My patch achieves a similar result by evaluating the clauses as index\nfilters (i.e. on the index tuple). That's probably not as good as proper\naccess predicates, so it can't help with the index page accesses, but\nbetter than what we had before.\n\nThere's a couple more related thoughts later in my reply.\n\n> The real problem that my adversarial case seemed to highlight was a\n> problem of extra heap page accesses. The tenk1 table's VM is less than\n> one page in size, so how could it have been VM buffer hits? Sure,\n> there were no \"table filters\" involved -- only \"index filters\". 
But\n> even \"index filters\" require heap access when the page isn't marked\n> all-visible in the VM.\n> \n\nNo, the extra accesses were not because of VM buffer hits - it was\nbecause of having to actually fetch the heap tuple for pages that are\nnot fully visible, which is what happens right after the insert.\n\nThe patch does what we index-only scans do - before evaluating the\nfilters on an index tuple, we check if the page is fully visible. If\nnot, we fetch the heap tuple and evaluate the filters on it.\n\nThis means even an index-only scan would behave like this too. And it\ngoes away as the table gets vacuumed, at which point we can eliminate\nthe rows using only the index tuple again.\n\n> That problem just cannot happen with a similar plan that eliminates\n> the same index tuples within the index AM proper (the index quals\n> don't even have to be \"access predicates\" for this to apply, either).\n> Of course, we never need to check the visibility of index tuples just\n> to be able to consider eliminating them via nbtree search scan\n> keys/index quals -- and so there is never any question of heap/VM\n> access for tuples that don't pass index quals. Not so for \"index\n> filters\", where there is at least some chance of accessing the heap\n> proper just to be able to eliminate non-matches.\n> \n\nRight. This however begs a question - why would we actually need to\ncheck the visibility map before evaluating the index filter, when the\nindex tuple alone is clearly good enough for the bitmapOr plan?\n\nBecause if we didn't need to do that VM check, this issue with extra\nheap accesses would disappear.\n\nI copied this from the IOS somewhat blindly, but now I'm starting to\nthink it was misguided. I thought it's a protection against processing\n\"invalid\" tuples - not tuples broken after a crash (as that would be\nfixed by crash recovery), but e.g. tuples with schema changes from an\naborted transaction.\n\nBut can this actually happen for indexes? 
For heap it's certainly\npossible (BEGIN - ALTER - INSERT - ROLLBACK will leave behind tuples\nlike that). But we don't support changing indexes like this, right?\n\nAnd if we had this issue, how come the bitmapOr plan (which ultimately\ndoes the same thing, although entirely in the AM) does not need to do\nthese VM checks / heap accesses too? It's evaluating essentially the\nsame conditions on the index tuple ...\n\nSo I'm starting to think this is just my misunderstanding of why IOS\ndoes this VM check - it's purely to determine visibility of the result.\nWhen it sees a pointer to a page that is not all-visible, it decides\nit'll need to check visibility on a heap tuple anyway, and just fetches\nthe heap tuple right away. Which however ignores that the filters may\neliminate many of those tuples, so IOS could also make such unnecessary\nheap accesses. It might be better to check the filters first, and only\nthen maybe fetch the heap tuple ...\n\n> While I think that it makes sense to assume that \"index filters\" are\n> strictly better than \"table filters\" (assuming they're directly\n> equivalent in that they contain the same clause), they're not\n> *reliably* any better. So \"index filters\" are far from being anywhere\n> near as good as an equivalent index qual (AFAICT we should always\n> assume that that's true). This is true of index quals generally --\n> this advantage is *not* limited to \"access predicate index quals\". (It\n> is most definitely not okay for \"index filters\" to displace equivalent\n> \"access predicate index quals\", but it's also not really okay to allow\n> them to displace equivalent \"index filter predicate index quals\" --\n> the latter case is less bad, but AFAICT they both basically aren't\n> acceptable \"substitutions\".)\n> \n\nI'm not quite sure what the differences are between \"index qual\" vs.\n\"access predicate index qual\" vs. \"index filter predicate index quals\",\nor what \"displacing\" would mean exactly. 
But I agree there's a hierarchy\nof qual types, and some \"promotions\" are likely guaranteed to be good.\n\n\nFWIW this also reminds me that this whole discussion mostly focused on\nSAOP clauses (and how they may be translated into access predicates\netc.). My patch is however way more general - it applies to all clauses,\nnot just SAOP ones, including clauses that have no chance of being\nevaluated at the AM level.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 5 Aug 2023 01:34:04 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Fri, Aug 4, 2023 at 4:34 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Thanks for the clarification, I think I understand better now.\n\nHonestly, it's gratifying to be understood at all in a discussion like\nthis one. Figuring out how to articulate some of my more subtle ideas\n(without leaving out important nuances) can be a real struggle for me.\n\n> Let me briefly sum my understanding for the two patches:\n>\n> - The SAOP patch eliminates those heap accesses because it manages to\n> evaluate all clauses in the AM, including clauses that would previously\n> be treated as \"table filters\" and evaluated on the heap row.\n\nYes, exactly.\n\n> - My patch achieves a similar result by evaluating the clauses as index\n> filters (i.e. on the index tuple). That's probably not as good as proper\n> access predicates, so it can't help with the index page accesses, but\n> better than what we had before.\n\nYes, exactly.\n\n> No, the extra accesses were not because of VM buffer hits - it was\n> because of having to actually fetch the heap tuple for pages that are\n> not fully visible, which is what happens right after the insert.\n\nYes, exactly.\n\n> The patch does what we index-only scans do - before evaluating the\n> filters on an index tuple, we check if the page is fully visible. If\n> not, we fetch the heap tuple and evaluate the filters on it.\n>\n> This means even an index-only scan would behave like this too. And it\n> goes away as the table gets vacuumed, at which point we can eliminate\n> the rows using only the index tuple again.\n\nYes, exactly.\n\n> Right. 
This however begs a question - why would we actually need to\n> check the visibility map before evaluating the index filter, when the\n> index tuple alone is clearly good enough for the bitmapOr plan?\n>\n> Because if we didn't need to do that VM check, this issue with extra\n> heap accesses would disappear.\n\nThe index AM is entitled to make certain assumptions of opclass\nmembers -- assumptions that cannot be made during expression\nevaluation. The classic example is division-by-zero during evaluation\nof a qual, for a tuple that wasn't visible anyway. Our assumption is\nthat stuff like that just cannot happen with index quals -- users\nshouldn't ever encounter sanity check errors caused by\ninvisible-to-their-MVCC-snapshot tuples.\n\nI think that that's the main difficulty, as far as avoiding heap\naccess for index filters is concerned. Of course, you could just limit\nyourself to those cases where the index AM assumptions were safe. But\nat that point, why not simply make sure to generate true index quals,\nand be done with it?\n\n> I copied this from the IOS somewhat blindly, but now I'm starting to\n> think it was misguided. I thought it's a protection against processing\n> \"invalid\" tuples - not tuples broken after a crash (as that would be\n> fixed by crash recovery), but e.g. tuples with schema changes from an\n> aborted transaction.\n\nI agree that schema changes for indexes shouldn't be an issue, though.\n\n> I'm not quite sure what the differences are between \"index qual\" vs.\n> \"access predicate index qual\" vs. \"index filter predicate index quals\",\n> or what \"displacing\" would mean exactly.\n\nFor the purposes of this point about \"a hierarchy of quals\", it\ndoesn't really matter -- that was the point I was trying to make.\n\nIn other words, \"index quals\" are strictly better than equivalent\n\"index filters\", which are themselves strictly better than equivalent\n\"table filters\". 
While it is also true that you can meaningfully\nclassify \"index quals\" into their own hierarchy (namely access\npredicates vs index filter predicates), that doesn't necessarily need\nto be discussed when discussing the hierarchy from a planner point of\nview, since it is (at least for now) internal to the nbtree index AM.\n\nOn second thought, I tend to doubt that your patch needs to worry\nabout each type of index qual directly. It probably needs to worry\nabout index quals in general.\n\nIt is always better to make what could be an \"index filter\" into an\nindex qual. Of course, the current practical problem for you is\nfiguring out how to deal with that in cases like the BitmapOr case.\nSince it's not as if the planner is wrong, exactly...it really is the\ncheapest plan, so the planner is at least right on its own terms. I am\ninterested in finding a solution to that problem.\n\n> FWIW this also reminds me that this whole discussion mostly focused on\n> SAOP clauses (and how they may be translated into access predicates\n> etc.). My patch is however way more general - it applies to all clauses,\n> not just SAOP ones, including clauses with no chance of evaluating at\n> the AM level.\n\nI agree that your patch is way more general, in one way. But the SAOP\npatch is also much more general than you might think.\n\nFor one thing the whole BitmapOr plan issue actually required that you\ncompared your patch to a combination of my SAOP patch and Alena\nRybakina's OR-to-SAOP transformation patch -- you needed both patches.\nHer patch effectively made my own patch much more general. But there\nare all kinds of other transformations that might further extend the\napplicability of nbtree executor changes from my patch -- the MDAM\ntechniques are certainly not limited to ORs/SAOPs. 
For example, it's\neasy to imagine inventing a new type of SAOP-style clause that\nrepresented \"BETWEEN x AND y\" expressions, that would make range\npredicates into \"first class indexable operators\" --\nScalarRangeOprExpr, or something. With multi-column indexes, these\nScalarRangeOprExpr clauses could be composed beside ScalarArrayOpExpr\nclauses, as well as simpler clauses -- all while preserving index\norder on output. So there are quite a few plan shapes that aren't\npossible at all right now, that become possible, many of which don't\neven have SAOPs/ORs.\n\nOf course it won't ever be possible to create a transformation that\ndoesn't ultimately flatten everything into MDAM style \"single value\"\nDNF predicates, which have to use simple B-Tree opclass operators --\nobviously there are fundamental limits to it. So even in a perfect\nworld, with every possible MDAM-ish transformation implemented, we'll\nstill have a significant need for your patch.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 4 Aug 2023 17:53:39 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On 8/5/23 02:53, Peter Geoghegan wrote:\n> ...\n> \n>> Right. This however begs a question - why would we actually need to\n>> check the visibility map before evaluating the index filter, when the\n>> index tuple alone is clearly good enough for the bitmapOr plan?\n>>\n>> Because if we didn't need to do that VM check, this issue with extra\n>> heap accesses would disappear.\n> \n> The index AM is entitled to make certain assumptions of opclass\n> members -- assumptions that cannot be made during expression\n> evaluation. The classic example is division-by-zero during evaluation\n> of a qual, for a tuple that wasn't visible anyway. Our assumption is\n> that stuff like that just cannot happen with index quals -- users\n> shouldn't ever encounter sanity check errors caused by\n> invisible-to-their-MVCC-snapshot tuples.\n> \n\nThanks for reminding me, I keep forgetting about this.\n\n> I think that that's the main difficulty, as far as avoiding heap\n> access for index filters is concerned. Of course, you could just limit\n> yourself to those cases where the index AM assumptions were safe. But\n> at that point, why not simply make sure to generate true index quals,\n> and be done with it?\n> \n\nYeah, if it's possible to generate true index quals, it'd be stupid not\nto do that. But clearly there are cases where that's not possible (we\nmay not have the code doing that, or maybe it's just not possible in\nprinciple).\n\nWould it be possible to inspect the expression and determine if it's\n\"safe\" to be treated almost like an index qual? Say, given a complex\nexpression, we might check if it consists only of expressions that could\nbe treated as an index qual. So a bit like leak-proof, which may\nactually be relevant here too I guess (e.g. int4div is not leak-proof,\nfor example, so e.g. 
the division-by-zero would not allow index-qual\ntreatment).\n\nI now recall there probably was a past discussion about how leak-proof\nrelates to this, but IIRC the conclusion was it's not quite the same\nthing. But maybe I just remember things wrong.\n\nAnyway, I think there are always going to be clauses that would be safe\nto evaluate on the index, but the index AM does not know to handle them\nfor some reason. For example it might require extending the AM to handle\ngeneric expressions, which doesn't seem quite desirable.\n\nSo I think I see three points where we could evaluate expressions:\n\n1) in the AM, as access predicates / index quals (doing this more often\n is kinda what the SAOP patches aim to do)\n\n2) in the index scan, before checking VM / visibility (if the qual is\n safe to be evaluated early)\n\n3) in the index scan, after checking VM / visibility (if the expression\n does unsafe things)\n\n\n\n>> I copied this from the IOS somewhat blindly, but now I'm starting to\n>> think it was misguided. I thought it's a protection against processing\n>> \"invalid\" tuples - not tuples broken after a crash (as that would be\n>> fixed by crash recovery), but e.g. tuples with schema changes from an\n>> aborted transaction.\n> \n> I agree that schema changes for indexes shouldn't be an issue, though.\n> \n>> I'm not quite sure what the differences are between \"index qual\" vs.\n>> \"access predicate index qual\" vs. \"index filter predicate index quals\",\n>> or what \"displacing\" would mean exactly.\n> \n> For the purposes of this point about \"a hierarchy of quals\", it\n> doesn't really matter -- that was the point I was trying to make.\n> \n> In other words, \"index quals\" are strictly better than equivalent\n> \"index filters\", which are themselves strictly better than equivalent\n> \"table filters\". 
While it is also true that you can meaningfully\n> classify \"index quals\" into their own hierarchy (namely access\n> predicates vs index filter predicates), that doesn't necessarily need\n> to be discussed when discussing the hierarchy from a planner point of\n> view, since it is (at least for now) internal to the nbtree index AM.\n> \n> On second thought, I tend to doubt that your patch needs to worry\n> about each type of index qual directly. It probably needs to worry\n> about index quals in general.\n> \n\nI agree. That seems like a discussion relevant to the general topic of\n\"upgrading\" clauses. If anything, the patch may need to worry about\nmoving table filters to index filters, that's the only thing it does.\n\n> It is always better to make what could be an \"index filter\" into an\n> index qual. Of course, the current practical problem for you is\n> figuring out how to deal with that in cases like the BitmapOr case.\n> Since it's not as if the planner is wrong, exactly...it really is the\n> cheapest plan, so the planner is at least right on its own terms. I am\n> interested in finding a solution to that problem.\n> \n\nWell, if the planner is not wrong, what solution are we looking for? ;-)\n\nFWIW if the problem is the patch may make the new plan look cheaper than\nsome \"actually better\" plan (e.g. the BitmapOr one). In that case, we\ncould just keep the old costing (kinda assuming the worst case, but as\nyou said, the benefits are limited, while the risks are arbitrary).\nThat's the only idea I have.\n\n>> FWIW this also reminds me that this whole discussion mostly focused on\n>> SAOP clauses (and how they may be translated into access predicates\n>> etc.). My patch is however way more general - it applies to all clauses,\n>> not just SAOP ones, including clauses with no chance of evaluating at\n>> the AM level.\n> \n> I agree that your patch is way more general, in one way. 
But the SAOP\n> patch is also much more general than you might think.\n> \n> For one thing the whole BitmapOr plan issue actually required that you\n> compared your patch to a combination of my SAOP patch and Alena\n> Rybakina's OR-to-SAOP transformation patch -- you needed both patches.\n> Her patch effectively made my own patch much more general. But there\n> are all kinds of other transformations that might further extend the\n> applicability of nbtree executor changes from my patch -- the MDAM\n> techniques are certainly not limited to ORs/SAOPs. For example, it's\n> easy to imagine inventing a new type of SAOP-style clause that\n> represented \"BETWEEN x AND y\" expressions, that would make range\n> predicates into \"first class indexable operators\" --\n> ScalarRangeOprExpr, or something. With multi-column indexes, these\n> ScalarRangeOprExpr clauses could be composed beside ScalarArrayOpExpr\n> clauses, as well as simpler clauses -- all while preserving index\n> order on output. So there are quite a few plan shapes that aren't\n> possible at all right now, that become possible, many of which don't\n> even have SAOPs/ORs.\n> \n> Of course it won't ever be possible to create a transformation that\n> doesn't ultimately flatten everything into MDAM style \"single value\"\n> DNF predicates, which have to use simple B-Tree opclass operators --\n> obviously there are fundamental limits to it. So even in a perfect\n> world, with every possible MDAM-ish transformation implemented, we'll\n> still have a significant need for your patch.\n> \n\nYup, that's exactly my point.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 6 Aug 2023 19:23:19 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Sun, Aug 6, 2023 at 10:23 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> > The index AM is entitled to make certain assumptions of opclass\n> > members -- assumptions that cannot be made during expression\n> > evaluation.\n\n> Thanks for reminding me, I keep forgetting about this.\n\nI was almost certain that you already knew that, actually. It's easy\nto forget such details in a discussion like this one, where the focus\nzooms out and then zooms back in, again and again.\n\n> Would it be possible to inspect the expression and determine if it's\n> \"safe\" to be treated almost like an index qual? Say, given a complex\n> expression, we might check if it consists only of expressions that could\n> be treated as an index qual. So a bit like leak-proof, which may\n> actually be relevant here too I guess (e.g. int4div is not leak-proof,\n> for example, so e.g. the division-by-zero would not allow index-qual\n> treatment).\n\nClearly you're talking about a distinct set of guarantees to the ones\nthat B-Tree opclasses make about not throwing errors when scanning\nmaybe-not-visible index tuples. The B-Tree opclass guarantees might\nnot even be written down anywhere -- they seem like common sense,\nalmost.\n\nWhat you've described definitely seems like it could be very useful,\nbut I don't think that it solves the fundamental problem with cases\nlike the BitmapOr plan. Even if you do things in a way that precludes\nthe possibility of extra heap page accesses (when the VM bit isn't\nset), you still have the problem of \"access predicates vs index filter\npredicates\". Which is a big problem, in and of itself.\n\n> Anyway, I think there are always going to be clauses that would be safe\n> to evaluate on the index, but the index AM does not know to handle them\n> for some reason. 
For example it might require extending the AM to handle\n> generic expressions, which doesn't seem quite desirable.\n\nActually, I mostly don't think we'd need to teach nbtree or other\nindex AMs anything about simplifying expressions. Structurally, we\nshould try to make things like ScalarArrayOpExpr into \"just another\nindexable operator\", which has little to no difference with any other\nindexable operator at runtime.\n\nThere probably are several good reasons why \"normalizing to CNF\" in\nthe planner is a good idea (to some extent we do this already). Alena\nRybakina's OR-to-SAOP transformation patch was written well before\nanybody knew about the MDAM/SAOP patch I'm working on. The original\nmotivation was to lower expression evaluation overhead.\n\nYou could probably find a third or even a fourth reason to do that\nspecific transformation, if you thought about it for a while. Top-down\nplanner designs such as Cascades really have to spend a lot of time on\nthis kind of normalization process. For very general reasons -- many\nof which are bound to apply in our own bottom-up planner design.\n\n> So I think I see three points where we could evaluate expressions:\n>\n> 1) in the AM, as access predicates / index quals (doing this more often\n> is kinda what the SAOP patches aim to do)\n>\n> 2) in the index scan, before checking VM / visibility (if the qual is\n> safe to be evaluated early)\n>\n> 3) in the index scan, after checking VM / visibility (if the expression\n> does unsafe things)\n\nAgreed.\n\nPresumably there would also be a class of expressions that the patch\nshould make into index filters rather than table filters, despite\nbeing unable to determine whether they're safe to evaluate early. 
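To make that concrete, here is a purely hypothetical sketch of such an expression, reusing the int4div division-by-zero example from earlier in the thread (the exact qual is my own illustration, not taken from either patch):

```sql
-- Hypothetical example of a qual that is useful as an "index filter"
-- but is NOT safe to evaluate before the VM/visibility check:
-- int4div is not leakproof, so evaluating this on an index tuple whose
-- heap tuple is invisible to our snapshot could raise a spurious
-- division-by-zero error.
select *
from tenk1
where thousand = 42
  and 1000 / (tenthous - 1) > 0;
```

Even so, it remains cheaper as an index filter (evaluated after the visibility check) than as a table filter.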
Even\nif there is only a small chance of it helping at runtime, there is no\nreason (or an infinitesimally small reason) not to just prefer\nindex filters where possible -- so it's desirable to always prefer\nindex filters, regardless of opclass/type restrictions on early\nevaluation. Right?\n\nAssuming the answer is yes, then I think that you still need all of\nthe index-only scan stuff that can \"fallback to heap access\", just to\nbe able to cover this other class of expression. I don't think that\nthis class that I've described will be rarely encountered, or\nanything.\n\n> I agree. That seems like a discussion relevant to the general topic of\n> \"upgrading\" clauses. If anything, the patch may need to worry about\n> moving table filters to index filters, that's the only thing it does.\n\nObviously that will have indirect consequences due to the changes in\nthe costing.\n\n> > It is always better to make what could be an \"index filter\" into an\n> > index qual. Of course, the current practical problem for you is\n> > figuring out how to deal with that in cases like the BitmapOr case.\n> > Since it's not as if the planner is wrong, exactly...it really is the\n> > cheapest plan, so the planner is at least right on its own terms. I am\n> > interested in finding a solution to that problem.\n> >\n>\n> Well, if the planner is not wrong, what solution are we looking for? ;-)\n\nI imagine that you really don't want to have to rely on some\nwishy-washy philosophical argument about the planner's expectation\nbeing the only reasonable basis for choosing a plan. Just as I don't\nwant to have to rely on some similarly hand-wavy argument about risk.\nThe ideal outcome is one that doesn't require any of that, from either\nof us.\n\nI believe that the patch that I'm working on can allow us to totally\navoid it. I hesitate to say this, since it might sound like I'm\nimposing conditions in a self-interested way. 
AFAICT it really does\nprovide us with a practical way of just not having to go down the road\nthat nobody wants to go down. So I am just about ready to say that I\nbelieve that that will end up being the solution we use. It just seems\nto make sense.\n\nBy normalizing to CNF, the planner is given the ability to work with\nhigher-level index paths, that abstract-away inessential \"physical\nplan\" differences. Suppose, for example, we're building index paths\nfor a scan that comes from an SAOP that was generated through OR\ntransformation. But, it's an index AM that lacks native support for\nSAOPs -- not nbtree. That index path will still end up using a\nBitmapOr, in the end, since it'll ultimately have to compensate for\nthe lack of index AM infrastructure. So the final \"physical plan\" will\nbe exactly the same as today -- the OR transformation will actually\nhave changed nothing about the physical plan in these sorts of cases.\n\nThe CNF transformation process just puts what was already true (\"these\ntwo plans are logically equivalent\") on a more formal footing -- and\nso implicitly avoids \"risky plans\" like the one we've discussed. We'll\nonly be relying on the nbtree work to get those small efficiencies\nthat come from avoiding duplicative primitive index scans. Since that\nwas never actually a goal of your patch to begin with, limiting that\nbenefit to nbtree scans (where we can do it without novel risks) seems\nmore than acceptable.\n\nSince you're not relying on the nbtree work at all here, really (just\non the transformation process itself), the strategic risk that this\nadds to your project isn't too great. It's not like this ties the\nsuccess of your patch to the success of my own patch. At most it ties\nthe success of your patch to something like Alena Rybakina's\nOR-to-SAOP transformation patch, which seems manageable to me. 
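To illustrate the kind of normalization I have in mind with the queries from this thread (only a sketch -- the transformation itself would happen inside the planner, not in SQL):

```sql
-- Spelled as ORs: without any normalization, this tends to be planned
-- as a BitmapOr of several separate index scans.
select * from tenk1
where thousand = 42
  and (tenthous = 1 or tenthous = 3 or tenthous = 42);

-- Logically equivalent SAOP spelling: nbtree can execute this as a
-- single index scan, with both columns usable as index quals.
select * from tenk1
where thousand = 42
  and tenthous in (1, 3, 42);
```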
(To be\nclear, I'm not relying on that work in the same way myself -- for my\npatch the transformation process is just a nice-to-have.)\n\n> FWIW if the problem is the patch may make the new plan look cheaper than\n> some \"actually better\" plan (e.g. the BitmapOr one). In that case, we\n> could just keep the old costing (kinda assuming the worst case, but as\n> you said, the benefits are limited, while the risks are arbitrary).\n> That's the only idea I have.\n\nThat's almost ideal for my patch. I should be able to pursue my patch\nwithout concern about your patch interfering too much -- presumably my\npatch will be able to generate the cheapest plan in cases like the\nBitmapOr case. It'll at least be slightly cheaper in physical reality,\nand now you'll artificially penalize the paths that your patch\ngenerates -- so cases like the BitmapOr case should do the right thing\nby seeing the SAOP path as cheaper during planning.\n\nBut is that approach really ideal for your patch? I doubt it. Why\nwouldn't we want to give the index paths with index filters credit for\nbeing cheaper in almost all cases? That's why this doesn't feel like a\ncosting issue to me (it's more of a \"just don't do that\" issue, I\nthink). Your patch seems too important to nerf like this, even if it's\nconvenient in some ways.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 6 Aug 2023 13:13:08 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Sun, Aug 6, 2023 at 1:13 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Since you're not relying on the nbtree work at all here, really (just\n> on the transformation process itself), the strategic risk that this\n> adds to your project isn't too great. It's not like this ties the\n> success of your patch to the success of my own patch. At most it ties\n> the success of your patch to something like Alena Rybakina's\n> OR-to-SAOP transformation patch, which seems manageable to me. (To be\n> clear, I'm not relying on that work in the same way myself -- for my\n> patch the transformation process is just a nice-to-have.)\n\nI decided to verify my understanding by checking what would happen\nwhen I ran the OR-heavy tenk1 regression test query against a\ncombination of your patch, and v7 of the OR-to-SAOP transformation\npatch. (To be clear, this is without my patch.)\n\nI found that the problem that I saw with the OR-heavy tenk1 regression\ntest goes away (though only when I \"set or_transform_limit=0\"). That\nis, we'll get an index scan plan that uses a SAOP. This index scan\nplan is comparable to the master branch's BitmapOr scan. In\nparticular, both plans get 7 buffer hits. More importantly, the new\nplan is (like the master branch plan) not risky in the way I've been\ngoing on about.\n\nThis does mean that your patch gets a *slightly* slower plan, due to\nthe issue of added index page accesses. Improving that should be a job\nfor my patch -- it's not your problem, since there is no regression.\n\nI'm not sure if it's somehow still possible that SAOP expression\nevaluation is able to \"do the risky thing\" in the same way that your\npatch's \"Index Filter: ((tenk1.tenthous = 1) OR (tenk1.tenthous = 3)\nOR (tenk1.tenthous = 42))\" plan. 
But it certainly looks like it can't.\nIncreasingly, the problem here appears to me to be a problem of\nlacking useful CNF transformations/normalization -- nothing more.\nStructuring things so that we reliably use \"the native representation\nof ORs\" via normalization seems likely to be all you really need.\n\nWe may currently be over relying on a similar process that happens\nindirectly, via BitmapOr paths. I suspect that you're right to be\nconcerned about how this might already be affecting index-only scans.\nOnce we have CNF normalization/transformation in place, we will of\ncourse continue to use some BitmapOr plans that may look a little like\nthe ones I'm focussed on, and some plans that use \"index filters\" (due\nto your patch) that are also a little like that. But there's nothing\nobjectionable about those cases IMV (quite the opposite), since there\nis no question of displacing/out-competing very similar plans that can\nuse index quals. (You might also find a way to avoid ever requiring\nheap access/visibility checks for a subset of the \"index filter\" cases\nwhere it is determined to be safe up front, but that's just a bonus.)\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 6 Aug 2023 15:28:51 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Sun, Aug 6, 2023 at 3:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I decided to verify my understanding by checking what would happen\n> when I ran the OR-heavy tenk1 regression test query against a\n> combination of your patch, and v7 of the OR-to-SAOP transformation\n> patch. (To be clear, this is without my patch.)\n\nI also spotted what looks like it might be a problem with your patch\nwhen looking at this query (hard to be sure if it's truly a bug,\nthough).\n\nI manually SAOP-ify the OR-heavy tenk1 regression test query like so:\n\nselect\n *\nfrom\n tenk1\nwhere\n thousand = 42\n and tenthous in (1, 3, 42);\n\nSure enough, I continue to get 7 buffer hits with this query. Just\nlike with the BitmapOr plan (and exactly like the original query with\nthe OR-to-SAOP transformation patch in place).\n\nAs I continue to add SAOP constants to the original \"tenthous\" IN(),\neventually the planner switches over to not using index quals on the\n\"tenthous\" low order index column (they're only used on the high order\n\"thousand\" index column). Here's where the switch to only using the\nleading column from the index happens for me:\n\nselect\n *\nfrom\n tenk1\nwhere\n thousand = 42\n and\n tenthous in (1, 3, 42, 43, 44, 45, 46, 47, 48, 49, 50);\n\nThis plan switchover isn't surprising in itself -- it's one of the most\nimportant issues addressed by my SAOP patch. However, it *is* a little\nsurprising that your patch doesn't even manage to use \"Index Filter\"\nquals. It appears that it is only capable of using table filter quals.\nObviously, the index has all the information that expression\nevaluation needs, and yet I see \"Filter: (tenk1.tenthous = ANY\n('{1,3,42,43,44,45,46,47,48,49,50}'::integer[]))\". So no improvement\nover master here.\n\nInterestingly enough, your patch only has this problem with SAOPs, at\nleast that I know of -- the spelling/style matters. 
If I add many\nadditional \"tenthous\" constants to the original version of the query\nfrom the regression tests in the same way, but using the \"longform\"\n(tenthous = 1 or tenthous = 3 ...) spelling, then your patch does\nindeed use index filters/expression evaluation. Just like the original\n\"risky\" plan (it's just a much bigger expression, with many more ORs).\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 6 Aug 2023 17:38:26 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 8/7/23 02:38, Peter Geoghegan wrote:\n> On Sun, Aug 6, 2023 at 3:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> I decided to verify my understanding by checking what would happen\n>> when I ran the OR-heavy tenk1 regression test query against a\n>> combination of your patch, and v7 of the OR-to-SAOP transformation\n>> patch. (To be clear, this is without my patch.)\n> \n> I also spotted what looks like it might be a problem with your patch\n> when looking at this query (hard to be sure if it's truly a bug,\n> though).\n> \n> I manually SAOP-ify the OR-heavy tenk1 regression test query like so:\n> \n> select\n> *\n> from\n> tenk1\n> where\n> thousand = 42\n> and tenthous in (1, 3, 42);\n> \n> Sure enough, I continue to get 7 buffer hits with this query. Just\n> like with the BitmapOr plan (and exactly like the original query with\n> the OR-to-SAOP transformation patch in place).\n> \n> As I continue to add SAOP constants to the original \"tenthous\" IN(),\n> eventually the planner switches over to not using index quals on the\n> \"tenthous\" low order index column (they're only used on the high order\n> \"thousand\" index column). Here's where the switch to only using the\n> leading column from the index happens for me:\n> \n> select\n> *\n> from\n> tenk1\n> where\n> thousand = 42\n> and\n> tenthous in (1, 3, 42, 43, 44, 45, 46, 47, 48, 49, 50);\n> \n> This plan switchover isn't surprising in itself -- it's one of the most\n> important issues addressed by my SAOP patch. However, it *is* a little\n> surprising that your patch doesn't even manage to use \"Index Filter\"\n> quals. It appears that it is only capable of using table filter quals.\n> Obviously, the index has all the information that expression\n> evaluation needs, and yet I see \"Filter: (tenk1.tenthous = ANY\n> ('{1,3,42,43,44,45,46,47,48,49,50}'::integer[]))\". 
So no improvement\n> over master here.\n> \n> Interestingly enough, your patch only has this problem with SAOPs, at\n> least that I know of -- the spelling/style matters. If I add many\n> additional \"tenthous\" constants to the original version of the query\n> from the regression tests in the same way, but using the \"longform\"\n> (tenthous = 1 or tenthous = 3 ...) spelling, then your patch does\n> indeed use index filters/expression evaluation. Just like the original\n> \"risky\" plan (it's just a much bigger expression, with many more ORs).\n> \n\nRight. This happens because the matching of SAOP to indexes happens in\nmultiple places. Firstly, create_index_paths() matches the clauses to\nthe index by calling\n\n match_restriction_clauses_to_index\n -> match_clauses_to_index\n -> match_clause_to_index\n\nWhich is where we also decide which *unmatched* clauses can be filters.\nAnd this *does* match the SAOP to the index key, hence no index filter.\n\nBut then we call get_index_paths/build_index_path a little bit later,\nand that decides to skip \"lower SAOP\" (which seems a bit strange,\nbecause the column is \"after\" the equality, but meh). Anyway, at this\npoint we already decided what's a filter, ignoring the index clauses,\nand not expecting any backsies.\n\nThe simplest fix seems to be to add these skipped SAOP clauses as\nfilters. We know they can be evaluated on the index ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 7 Aug 2023 21:34:09 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 12:34 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> But then we call get_index_paths/build_index_path a little bit later,\n> and that decides to skip \"lower SAOP\" (which seems a bit strange,\n> because the column is \"after\" the equality, but meh). Anyway, at this\n> point we already decided what's a filter, ignoring the index clauses,\n> and not expecting any backsies.\n\nI'm not surprised that it's due to the issue around \"lower SAOP\"\nclauses within get_index_paths/build_index_path. That whole approach\nseems rather ad-hoc to me. As you probably realize already, my own\npatch has to deal with lots of issues in the same area.\n\n> The simples fix seems to be to add these skipped SAOP clauses as\n> filters. We know it can be evaluated on the index ...\n\nRight. Obviously, my preferred solution to the problem at hand is to\nmake everything into index quals via an approach like the one from my\npatch -- that works sensibly, no matter the length of the SAOP arrays.\nBut even if you're willing to assume that that work will be in place\nfor 17, there are still certain remaining gaps, that also seem\nimportant.\n\nEven my patch cannot always make SAOP clauses into index quals. There\nare specific remaining gaps that I hope that your patch will still\ncover. The simplest example is a similar NOT IN() inequality, like\nthis:\n\nselect\n ctid, *\nfrom\n tenk1\nwhere\n thousand = 42\n and\n tenthous not in (1, 3, 42, 43, 44, 45, 46, 47, 48, 49, 50);\n\nThere is no way that my patch can handle this case. Where your patch\nseems to be unable to do better than master here, either -- just like\nwith the \"tenthous in ( )\" variant. 
Once again, the inequality SAOP\nalso ends up as table filter quals, not index filter quals.\n\nIt would also be nice if we found a way of doing this, while still\nreliably avoiding all visibility checks (just like \"real index quals\"\nwill) -- since that should be safe in this specific case.\n\nThe MDAM paper describes a scheme for converting NOT IN() clauses into\nDNF single value predicates. But that's not going to happen for 17,\nand doesn't seem all that helpful with a query like this in any case.\nBut it does suggest an argument in favor of visibility checks not\nbeing truly required for SAOP inequalities like this one (when they\nappear in index filters). I'm not sure if that idea is too particular\nto SAOP inequalities to be interesting -- just a suggestion.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 7 Aug 2023 15:18:51 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 3:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Even my patch cannot always make SAOP clauses into index quals. There\n> are specific remaining gaps that I hope that your patch will still\n> cover. The simplest example is a similar NOT IN() inequality, like\n> this:\n>\n> select\n> ctid, *\n> from\n> tenk1\n> where\n> thousand = 42\n> and\n> tenthous not in (1, 3, 42, 43, 44, 45, 46, 47, 48, 49, 50);\n>\n> There is no way that my patch can handle this case. Where your patch\n> seems to be unable to do better than master here, either -- just like\n> with the \"tenthous in ( )\" variant. Once again, the inequality SAOP\n> also ends up as table filter quals, not index filter quals.\n>\n> It would also be nice if we found a way of doing this, while still\n> reliably avoiding all visibility checks (just like \"real index quals\"\n> will) -- since that should be safe in this specific case.\n\nActually, this isn't limited to SAOP inequalities. It appears as if\n*any* simple inequality has the same limitation. So, for example, the\nfollowing query can only use table filters with the patch (never index\nfilters):\n\nselect\n ctid, *\nfrom\n tenk1\nwhere\n thousand = 42 and tenthous != 1;\n\nThis variant will use index filters, as expected (though with some\nrisk of heap accesses when VM bits aren't set):\n\nselect\n ctid, *\nfrom\n tenk1\nwhere\n thousand = 42 and tenthous is distinct from 1;\n\nOffhand I suspect that it's a similar issue to the one you described for SAOPs.\n\nI see that get_op_btree_interpretation() will treat != as a kind of\nhonorary member of an opfamily whose = operator has our != operator as\nits negator. Perhaps we should be finding a way to pass != quals into\nthe index AM so that they become true index quals (obviously they\nwould only be index filter predicates, never access predicates). 
That\nhas the advantage of working in a way that's analogous to the way that\nindex quals already avoid visibility checks.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 7 Aug 2023 21:21:06 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 8/8/23 00:18, Peter Geoghegan wrote:\n> On Mon, Aug 7, 2023 at 12:34 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> But then we call get_index_paths/build_index_path a little bit later,\n>> and that decides to skip \"lower SAOP\" (which seems a bit strange,\n>> because the column is \"after\" the equality, but meh). Anyway, at this\n>> point we already decided what's a filter, ignoring the index clauses,\n>> and not expecting any backsies.\n> \n> I'm not surprised that it's due to the issue around \"lower SAOP\"\n> clauses within get_index_paths/build_index_path. That whole approach\n> seems rather ad-hoc to me. As you probably realize already, my own\n> patch has to deal with lots of issues in the same area.\n> \n\nYeah. It's be much easier if the decision was done in one place, without\nthen changing it later.\n\n>> The simples fix seems to be to add these skipped SAOP clauses as\n>> filters. We know it can be evaluated on the index ...\n> \n> Right. Obviously, my preferred solution to the problem at hand is to\n> make everything into index quals via an approach like the one from my\n> patch -- that works sensibly, no matter the length of the SAOP arrays.\n> But even if you're willing to assume that that work will be in place\n> for 17, there are still certain remaining gaps, that also seem\n> important.\n> \n\nAgreed.\n\n> Even my patch cannot always make SAOP clauses into index quals. There\n> are specific remaining gaps that I hope that your patch will still\n> cover. The simplest example is a similar NOT IN() inequality, like\n> this:\n> \n> select\n> ctid, *\n> from\n> tenk1\n> where\n> thousand = 42\n> and\n> tenthous not in (1, 3, 42, 43, 44, 45, 46, 47, 48, 49, 50);\n> \n> There is no way that my patch can handle this case. Where your patch\n> seems to be unable to do better than master here, either -- just like\n> with the \"tenthous in ( )\" variant. 
Once again, the inequality SAOP\n> also ends up as table filter quals, not index filter quals.\n> \n\nAre you sure? Because if I try with the 20230716 patch, I get this plan\n(after disabling bitmapscan):\n\n QUERY PLAN\n-------------------------------------------------------------------\n Index Scan using tenk1_thous_tenthous on tenk1 (cost=0.31..44.54\nrows=10 width=250)\n Index Cond: (thousand = 42)\n Index Filter: (tenthous <> ALL\n('{1,3,42,43,44,45,46,47,48,49,50}'::integer[]))\n Filter: (tenthous <> ALL ('{1,3,42,43,44,45,46,47,48,49,50}'::integer[]))\n(4 rows)\n\nSo the condition is recognized as index filter. Or did you mean\nsomething different?\n\n> It would also be nice if we found a way of doing this, while still\n> reliably avoiding all visibility checks (just like \"real index quals\"\n> will) -- since that should be safe in this specific case.\n> \n> The MDAM paper describes a scheme for converting NOT IN() clauses into\n> DNF single value predicates. But that's not going to happen for 17,\n> and doesn't seem all that helpful with a query like this in any case.\n> But it does suggest an argument in favor of visibility checks not\n> being truly required for SAOP inequalities like this one (when they\n> appear in index filters). I'm not sure if that idea is too particular\n> to SAOP inequalities to be interesting -- just a suggestion.\n> \n\nNot sure. A couple messages back I suggested that maybe there is a way\nto check which expression would be safe to evaluate before checking the\nvisibility. This seems similar, although what you're suggesting really\napplies to the \"transformed\" SAOP, and I'm not sure it can be extended\nto the original SAOP.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 8 Aug 2023 16:28:53 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On 8/8/23 06:21, Peter Geoghegan wrote:\n> On Mon, Aug 7, 2023 at 3:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> Even my patch cannot always make SAOP clauses into index quals. There\n>> are specific remaining gaps that I hope that your patch will still\n>> cover. The simplest example is a similar NOT IN() inequality, like\n>> this:\n>>\n>> select\n>> ctid, *\n>> from\n>> tenk1\n>> where\n>> thousand = 42\n>> and\n>> tenthous not in (1, 3, 42, 43, 44, 45, 46, 47, 48, 49, 50);\n>>\n>> There is no way that my patch can handle this case. Where your patch\n>> seems to be unable to do better than master here, either -- just like\n>> with the \"tenthous in ( )\" variant. Once again, the inequality SAOP\n>> also ends up as table filter quals, not index filter quals.\n>>\n>> It would also be nice if we found a way of doing this, while still\n>> reliably avoiding all visibility checks (just like \"real index quals\"\n>> will) -- since that should be safe in this specific case.\n> \n> Actually, this isn't limited to SAOP inequalities. It appears as if\n> *any* simple inequality has the same limitation. So, for example, the\n> following query can only use table filters with the patch (never index\n> filters):\n> \n> select\n> ctid, *\n> from\n> tenk1\n> where\n> thousand = 42 and tenthous != 1;\n> \n> This variant will use index filters, as expected (though with some\n> risk of heap accesses when VM bits aren't set):\n> \n> select\n> ctid, *\n> from\n> tenk1\n> where\n> thousand = 42 and tenthous is distinct from 1;\n> \n> Offhand I suspect that it's a similar issue to the one you described for SAOPs.\n> \n> I see that get_op_btree_interpretation() will treat != as a kind of\n> honorary member of an opfamily whose = operator has our != operator as\n> its negator. Perhaps we should be finding a way to pass != quals into\n> the index AM so that they become true index quals (obviously they\n> would only be index filter predicates, never access predicates). 
That\n> has the advantage of working in a way that's analogous to the way that\n> index quals already avoid visibility checks.\n> \n\nAre you sure you're using the right build? Because I get this plan:\n\n QUERY PLAN\n-------------------------------------------------------------------\n Index Scan using tenk1_thous_tenthous on tenk1 (cost=0.29..44.48\nrows=10 width=250)\n Index Cond: (thousand = 42)\n Index Filter: (tenthous <> 1)\n Filter: (tenthous <> 1)\n(4 rows)\n\nAgain, the inequality is clearly recognized as index filter.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 8 Aug 2023 16:31:00 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Tue, Aug 8, 2023 at 7:28 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Are you sure? Because if I try with the 20230716 patch, I get this plan\n> (after disabling bitmapscan):\n\n> So the condition is recognized as index filter. Or did you mean\n> something different?\n\nNo, I was just wrong about this (and about inequalities in general). I\nnow see why the planner preferred a bitmap scan, which makes sense.\nApologies.\n\n> Not sure. A couple messages back I suggested that maybe there is a way\n> to check which expression would be safe to evaluate before checking the\n> visibility. This seems similar, although what you're suggesting really\n> applies to the \"transformed\" SAOP, and I'm not sure it can be extended\n> to the original SAOP.\n\nThe transformation doesn't necessarily have to happen in order for it\nto be possible in principle (and correct). My point was that there are\na handful of important types of expressions (SAOPs among them, but\npossibly also RowCompareExpr and IS NULL tests) that are \"index quals\nin spirit\". These expressions therefore don't seem to need visibility\nchecks at all -- the index qual guarantees \"apply transitively\".\n\nIt's possible that an approach that focuses on leakproof quals won't\nhave any problems, and will be strictly better than \"extending the\nindex qual guarantees to index-qual-like expressions\". Really not sure\nabout that.\n\nIn any case I see a huge amount of value in differentiating between\ncases that need visibility checks (if only via the VM) and those that\ndo not, ever. I'm speaking very generally here -- nothing to do with\nmy adversarial tenk1 test case. It's weird that index quals have such\na massive advantage over even simple index filters -- that feels\nartificial. I suggest that you focus on that aspect, since it has the\npotential to make what is already a compelling patch into a much more\ncompelling patch.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 8 Aug 2023 09:24:29 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 9:21 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I see that get_op_btree_interpretation() will treat != as a kind of\n> honorary member of an opfamily whose = operator has our != operator as\n> its negator. Perhaps we should be finding a way to pass != quals into\n> the index AM so that they become true index quals (obviously they\n> would only be index filter predicates, never access predicates). That\n> has the advantage of working in a way that's analogous to the way that\n> index quals already avoid visibility checks.\n\nThe approach in your patch can only really work with index scans (and\nindex-only scans). So while it is more general than true index quals\nin some ways, it's also less general in other ways: it cannot help\nbitmap index scans.\n\nWhile I accept that the inability of bitmap index scans to use index\nfilters in this way is, to some degree, a natural and inevitable\ndownside of bitmap index scans, that isn't always true. For example,\nit doesn't seem to be the case with simple inequalities. Bitmap index\nscans argue for making cases involving quals that are \"index quals in\nspirit\" into actual index quals. Even if you can reliably avoid extra\nheap accesses for plain index scans using expression evaluation, I\ncan't see that working for bitmap index scans.\n\nMore concretely, if we have an index on \"tenk1 (four, two)\", then we\nmiss out on the opportunity to eliminate heap accesses for a query\nlike this one:\n\nselect\n ctid, *\nfrom\n tenk1\nwhere\n four = 1 and two != 1;\n\nThis will get a bitmap index scan plan (that uses our composite\nindex), which makes sense overall. But the details beyond that make no\nsense -- since we're using table filter quals for \"two\". It turns out\nthat the bitmap heap scan will access every single heap page in the\ntenk1 table as a result, even though we could have literally avoided\nall heap accesses had we been able to push down the != as an index\nqual. 
This is a difference in \"buffers hit\" that is close to 2 orders\nof magnitude.\n\nI'd be okay with treating these cases as out of scope for this patch,\nbut we should probably agree on the parameters. The patch certainly\nshouldn't make it any harder to fix cases such as this.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 8 Aug 2023 10:43:10 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 8/8/23 19:43, Peter Geoghegan wrote:\n> On Mon, Aug 7, 2023 at 9:21 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> I see that get_op_btree_interpretation() will treat != as a kind of\n>> honorary member of an opfamily whose = operator has our != operator as\n>> its negator. Perhaps we should be finding a way to pass != quals into\n>> the index AM so that they become true index quals (obviously they\n>> would only be index filter predicates, never access predicates). That\n>> has the advantage of working in a way that's analogous to the way that\n>> index quals already avoid visibility checks.\n> \n> The approach in your patch can only really work with index scans (and\n> index-only scans). So while it is more general than true index quals\n> in some ways, it's also less general in other ways: it cannot help\n> bitmap index scans.\n> \n> While I accept that the inability of bitmap index scans to use index\n> filters in this way is, to some degree, a natural and inevitable\n> downside of bitmap index scans, that isn't always true. For example,\n> it doesn't seem to be the case with simple inequalities. Bitmap index\n> scans argue for making cases involving quals that are \"index quals in\n> spirit\" into actual index quals. Even if you can reliably avoid extra\n> heap accesses for plain index scans using expression evaluation, I\n> can't see that working for bitmap index scans.\n> \n> More concretely, if we have an index on \"tenk1 (four, two)\", then we\n> miss out on the opportunity to eliminate heap accesses for a query\n> like this one:\n> \n> select\n> ctid, *\n> from\n> tenk1\n> where\n> four = 1 and two != 1;\n> \n> This will get a bitmap index scan plan (that uses our composite\n> index), which makes sense overall. But the details beyond that make no\n> sense -- since we're using table filter quals for \"two\". 
It turns out\n> that the bitmap heap scan will access every single heap page in the\n> tenk1 table as a result, even though we could have literally avoided\n> all heap accesses had we been able to push down the != as an index\n> qual. This is a difference in \"buffers hit\" that is close to 2 orders\n> of magnitude.\n> \n> I'd be okay with treating these cases as out of scope for this patch,\n> but we should probably agree on the parameters. The patch certainly\n> shouldn't make it any harder to fix cases such as this.\n> \n\nI agree this patch shouldn't make it harder to improve these cases, but\nTBH I don't quite see which part of the patch would do that? Which bit\nare you objecting to? If we decide to change how match_clause_to_index()\ndeals with these cases, to recognize them as index quals, the patch will\nbe working just fine.\n\nThe only thing the patch does is it looks at clauses we decided not to\ntreat as index quals, and do maybe still evaluate them on index. And I\ndon't think I want to move these goalposts much further.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 8 Aug 2023 20:14:50 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Tue, Aug 8, 2023 at 11:14 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> I agree this patch shouldn't make it harder to improve these cases, but\n> TBH I don't quite see which part of the patch would do that? Which bit\n> are you objecting to? If we decide to change how match_clause_to_index()\n> deals with these cases, to recognize them as index quals, the patch will\n> be working just fine.\n\nWell, I also recently said that I think that it's important that you\nfigure out a way to reliably avoid visibility checks, in cases where\nit can be avoided entirely -- since that can lead to huge savings in\nheap accesses. You haven't done that yet, but you really should look\ninto it IMV.\n\nAssuming that that happens, then it immediately gives index scans a\nhuge advantage over bitmap index scans. At that point it seems\nimportant to describe (in high level terms) where it is that the\nadvantage is innate, and where it's just because we haven't done the\nrequired work for bitmap index scans. I became confused on this point\nmyself yesterday. Admittedly I should have been able to figure it out\non my own -- but it is confusing.\n\n> The only thing the patch does is it looks at clauses we decided not to\n> treat as index quals, and do maybe still evaluate them on index. And I\n> don't think I want to move these goalposts much further.\n\nAvoiding the need for visibility checks completely (in at least a\nsubset of cases) was originally your idea. I'm not going to insist on\nit, or anything like that. It just seems like something that'll add a\ngreat deal of value over what you have already.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 8 Aug 2023 11:36:25 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Tue, Aug 8, 2023 at 11:36 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Assuming that that happens, then it immediately gives index scans a\n> huge advantage over bitmap index scans. At that point it seems\n> important to describe (in high level terms) where it is that the\n> advantage is innate, and where it's just because we haven't done the\n> required work for bitmap index scans. I became confused on this point\n> myself yesterday. Admittedly I should have been able to figure it out\n> on my own -- but it is confusing.\n\nI also have some doubts about the costing. That contributed to my confusion.\n\nTake my \" four = 1 and two != 1\" example query, from earlier today. As\nI said, that gets a bitmap index scan, which does a hugely excessive\namount of heap access. But once I force the planner to use an index\nscan, then (as predicted) there are useful index filters -- filters\nthat can eliminate 100% of all heap accesses. Yet the planner still\nthinks that the total cost of the bitmap scan plan is only 415.28,\nversus 714.89 for the index scan plan. Perhaps that's just because\nthis is a tricky case, for whatever reason...but it's not obvious what\nthat reason really is.\n\nYou keep pointing out that your patch only makes isolated, local\nchanges to certain specific plans. While that is true, it's also true\nthat there will be fairly far removed consequences. Why shouldn't I\ntreat those things as in scope?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 8 Aug 2023 12:15:53 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 8/8/23 20:36, Peter Geoghegan wrote:\n> On Tue, Aug 8, 2023 at 11:14 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> I agree this patch shouldn't make it harder to improve these cases, but\n>> TBH I don't quite see which part of the patch would do that? Which bit\n>> are you objecting to? If we decide to change how match_clause_to_index()\n>> deals with these cases, to recognize them as index quals, the patch will\n>> be working just fine.\n> \n> Well, I also recently said that I think that it's important that you\n> figure out a way to reliably avoid visibility checks, in cases where\n> it can be avoided entirely -- since that can lead to huge savings in\n> heap accesses. You haven't done that yet, but you really should look\n> into it IMV.\n> \n> Assuming that that happens, then it immediately gives index scans a\n> huge advantage over bitmap index scans. At that point it seems\n> important to describe (in high level terms) where it is that the\n> advantage is innate, and where it's just because we haven't done the\n> required work for bitmap index scans. I became confused on this point\n> myself yesterday. Admittedly I should have been able to figure it out\n> on my own -- but it is confusing.\n> \n\nYeah, I agree that might help a lot, particularly for tables that have a\nsignificant fraction of not-all-visible pages.\n\n>> The only thing the patch does is it looks at clauses we decided not to\n>> treat as index quals, and do maybe still evaluate them on index. And I\n>> don't think I want to move these goalposts much further.\n> \n> Avoiding the need for visibility checks completely (in at least a\n> subset of cases) was originally your idea. I'm not going to insist on\n> it, or anything like that. It just seems like something that'll add a\n> great deal of value over what you have already.\n> \n\nRight, and I'm not against improving that, but I see it more like an\nindependent task. 
I don't think it needs (or should) to be part of this\npatch - skipping visibility checks would apply to IOS, while this is\naimed only at plain index scans.\n\nFurthermore, I don't have a very good idea how to do that (except maybe\nfor relying on the leakproof flag).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 8 Aug 2023 22:24:21 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On 8/8/23 21:15, Peter Geoghegan wrote:\n> On Tue, Aug 8, 2023 at 11:36 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>> Assuming that that happens, then it immediately gives index scans a\n>> huge advantage over bitmap index scans. At that point it seems\n>> important to describe (in high level terms) where it is that the\n>> advantage is innate, and where it's just because we haven't done the\n>> required work for bitmap index scans. I became confused on this point\n>> myself yesterday. Admittedly I should have been able to figure it out\n>> on my own -- but it is confusing.\n> \n> I also have some doubts about the costing. That contributed to my confusion.\n> \n> Take my \" four = 1 and two != 1\" example query, from earlier today. As\n> I said, that gets a bitmap index scan, which does a hugely excessive\n> amount of heap access. But once I force the planner to use an index\n> scan, then (as predicted) there are useful index filters -- filters\n> that can eliminate 100% of all heap accesses. Yet the planner still\n> thinks that the total cost of the bitmap scan plan is only 415.28,\n> versus 714.89 for the index scan plan. Perhaps that's just because\n> this is a tricky case, for whatever reason...but it's not obvious what\n> that reason really is.\n> \n\nRight. I haven't checked how the costs are calculated in these cases,\nbut I'd bet it's a combination of having correlated conditions, and the\nbitmap costing being fairly rough (with plenty of constants etc).\n\nThe correlation seems like an obvious culprit, considering the explain says\n\n Bitmap Heap Scan on public.tenk1 (cost=31.35..413.85 rows=1250\nwidth=250) (actual time=2.698..2.703 rows=0 loops=1)\n\nSo we expect 1250 rows. If that was accurate, the index scan would have\nto do 1250 heap fetches. It's just luck the index scan doesn't need to\ndo that. 
I don't this there's a chance to improve this costing - if the\ninputs are this off, it can't do anything.\n\nAlso, I think this is related to the earlier discussion about maybe\ncosting it according to the worst case - i.e. as if we still needed\nfetch the same number of heap tuples as before. Which will inevitably\nlead to similar issues, with worse plans looking cheaper.\n\n> You keep pointing out that your patch only makes isolated, local\n> changes to certain specific plans. While that is true, it's also true\n> that there will be fairly far removed consequences. Why shouldn't I\n> treat those things as in scope?\n> \n\nThat is certainly true - I'm trying to keep the scope somewhat close to\nthe original goal. Obviously, there may be additional things the patch\nreally needs to consider, but I'm not sure this is one of those cases\n(perhaps I just don't understand what the issue is - the example seems\nlike a run-of-the-mill case of poor estimate / costing).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 8 Aug 2023 22:49:52 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Tue, Aug 8, 2023 at 1:24 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> > Assuming that that happens, then it immediately gives index scans a\n> > huge advantage over bitmap index scans. At that point it seems\n> > important to describe (in high level terms) where it is that the\n> > advantage is innate, and where it's just because we haven't done the\n> > required work for bitmap index scans. I became confused on this point\n> > myself yesterday. Admittedly I should have been able to figure it out\n> > on my own -- but it is confusing.\n> >\n>\n> Yeah, I agree that might help a lot, particularly for tables that have a\n> significant fraction of not-all-visible pages.\n\nIt also has the potential to make the costing a lot easier in certain\nimportant cases. Accurately deriving just how many heap accesses can\nbe avoided via the VM from the statistics that are available to the\nplanner is likely always going to be very difficult. Finding a way to\nmake that just not matter at all (in these important cases) can also\nmake it safe to bias the costing, such that the planner tends to favor\nindex scans (and index-only scans) over bitmap index scans that cannot\npossibly eliminate any heap page accesses via an index filter qual.\n\n> Right, and I'm not against improving that, but I see it more like an\n> independent task. I don't think it needs (or should) to be part of this\n> patch - skipping visibility checks would apply to IOS, while this is\n> aimed only at plain index scans.\n\nI'm certainly not going to insist on it. Worth considering if putting\nit in scope could make certain aspects of this patch (like the\ncosting) easier, though.\n\nI think that it wouldn't be terribly difficult to make simple\ninequalities into true index quals. I think I'd like to have a go at\nit myself. To some degree I'm trying to get a sense of how much that'd\nhelp you.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 8 Aug 2023 13:54:36 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Tue, Aug 8, 2023 at 1:49 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> So we expect 1250 rows. If that was accurate, the index scan would have\n> to do 1250 heap fetches. It's just luck the index scan doesn't need to\n> do that. I don't this there's a chance to improve this costing - if the\n> inputs are this off, it can't do anything.\n\nWell, that depends. If we can find a way to make the bitmap index scan\ncapable of doing something like the same trick through other means, in\nsome other patch, then this particular problem (involving a simple\ninequality) just goes away. There may be other cases that look a\nlittle similar, with a more complicated expression, where it just\nisn't reasonable to expect a bitmap index scan to compete. Ideally,\nbitmap index scans will only be at a huge disadvantage when it just\nmakes sense, due to the particulars of the expression.\n\nI'm not trying to make this your problem. I'm just trying to establish\nthe general nature of the problem.\n\n> Also, I think this is related to the earlier discussion about maybe\n> costing it according to the worst case - i.e. as if we still needed\n> fetch the same number of heap tuples as before. Which will inevitably\n> lead to similar issues, with worse plans looking cheaper.\n\nNot in those cases where it just doesn't come up, because we can\ntotally avoid visibility checks. As I said, securing that guarantee\nhas the potential to make the costing a lot more reliable/easier to\nimplement.\n\n> That is certainly true - I'm trying to keep the scope somewhat close to\n> the original goal. Obviously, there may be additional things the patch\n> really needs to consider, but I'm not sure this is one of those cases\n> (perhaps I just don't understand what the issue is - the example seems\n> like a run-of-the-mill case of poor estimate / costing).\n\nI'm not trying to impose any particular interpretation here. 
It's\nearly in the cycle, and my questions are mostly exploratory. I'm still\ntrying to develop my own understanding of the trade-offs in this area.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 8 Aug 2023 14:03:58 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "\n\nOn 8/8/23 23:03, Peter Geoghegan wrote:\n> On Tue, Aug 8, 2023 at 1:49 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> So we expect 1250 rows. If that was accurate, the index scan would have\n>> to do 1250 heap fetches. It's just luck the index scan doesn't need to\n>> do that. I don't this there's a chance to improve this costing - if the\n>> inputs are this off, it can't do anything.\n> \n> Well, that depends. If we can find a way to make the bitmap index scan\n> capable of doing something like the same trick through other means, in\n> some other patch, then this particular problem (involving a simple\n> inequality) just goes away. There may be other cases that look a\n> little similar, with a more complicated expression, where it just\n> isn't reasonable to expect a bitmap index scan to compete. Ideally,\n> bitmap index scans will only be at a huge disadvantage when it just\n> makes sense, due to the particulars of the expression.\n> \n> I'm not trying to make this your problem. I'm just trying to establish\n> the general nature of the problem.\n> \n>> Also, I think this is related to the earlier discussion about maybe\n>> costing it according to the worst case - i.e. as if we still needed\n>> fetch the same number of heap tuples as before. Which will inevitably\n>> lead to similar issues, with worse plans looking cheaper.\n> \n> Not in those cases where it just doesn't come up, because we can\n> totally avoid visibility checks. As I said, securing that guarantee\n> has the potential to make the costing a lot more reliable/easier to\n> implement.\n> \n\nBut in the example you shared yesterday, the problem is not really about\nvisibility checks. In fact, the index scan costing completely ignores\nthe VM checks - it didn't matter before, and the patch did not change\nthis. 
It's about the number of rows the index scan is expected to\nproduce - and those will always do a random I/O; we can't skip those.\n\n>> That is certainly true - I'm trying to keep the scope somewhat close to\n>> the original goal. Obviously, there may be additional things the patch\n>> really needs to consider, but I'm not sure this is one of those cases\n>> (perhaps I just don't understand what the issue is - the example seems\n>> like a run-of-the-mill case of poor estimate / costing).\n> \n> I'm not trying to impose any particular interpretation here. It's\n> early in the cycle, and my questions are mostly exploratory. I'm still\n> trying to develop my own understanding of the trade-offs in this area.\n> \n\nUnderstood. I think this whole discussion is about figuring out these\ntrade-offs and also how to divide the various improvements into \"minimum\nviable\" changes.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 9 Aug 2023 18:05:42 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Wed, Aug 9, 2023 at 9:05 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> But in the example you shared yesterday, the problem is not really about\n> visibility checks. In fact, the index scan costing completely ignores\n> the VM checks - it didn't matter before, and the patch did not change\n> this. It's about the number of rows the index scan is expected to\n> produce - and those will always do a random I/O, we can't skip those.\n\nI wasn't really talking about that example here.\n\nAs I see it, the problem from my example is that plain index scans had\nan \"unnatural\" advantage over bitmap index scans. There was no actual\nreason why the system couldn't just deal with the inequality on the\nsecond column uniformly, so that index scans and bitmap index scans\nboth filtered out all non-matches inexpensively, without heap access.\nThen the costing could have been quite off, and it really wouldn't\nhave mattered at runtime, because the index scan and bitmap index scan\nwould do approximately the same thing in any case.\n\nAs I've said, it's obviously also true that there are many other cases\nwhere there really will be a \"natural\" advantage for index scans, that\nbitmap index scans just cannot hope to offer. These are the cases\nwhere the mechanism from your patch is best placed to be the thing\nthat avoids heap accesses, or maybe even avoid all visibility checks\n(despite not using true index quals).\n\n> Understood. I think this whole discussion is about figuring out these\n> trade offs and also how to divide the various improvements into \"minimum\n> viable\" changes.\n\nThat's exactly how I see it myself.\n\nObviously, there is still plenty of gray area here -- cases where it's\nnot at all clear whether or not we should rely on the mechanism from\nyour patch, or whether we should provide some alternative, more\nspecialized mechanism. 
For example, I've made a lot out of simple !=\ninequalities recently, but it's natural to wonder what that might mean\nfor NOT IN ( ... ) SAOP inequalities. Am I also going to add\nspecialized code that passes those down to the index AM? Where do you\ndraw the line?\n\nI totally accept that there is a significant amount of gray area, and\nthat that's likely to remain true for the foreseeable future. But I\nalso believe that there is a small but important number of things that\nare either exactly black or exactly white. If we can actually firmly\nagree on what these other things are in days or weeks (which seems\nquite doable), then we'll have the right framework for figuring\neverything else out over time (possibly over multiple releases). We'll\nat least have the right shared vocabulary for discussing the problems,\nwhich is a very good start. I want to have a general structure that\nhas the right general concepts in place from the start -- that's all.\n\nI also suspect that we'll discover that the large amount of gray area\nclauses/items are those that tend to be far less important than\n\"exactly black\" and \"exactly white\" items. So even if we can only\nagree that a small handful of things are in either category, that\nsmall handful will likely be very overrepresented in real world\nqueries. For example, simple inequalities are very common -- it's\nsurprising that nbtree can't already handle them directly. I should\nhave thought of this myself, long ago, but it took your patch to force\nme to think about it.\n\nThe problem with simple inequalities was \"hiding in plain sight\" for a\nvery long time. Could there be anything else like that?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 9 Aug 2023 09:54:12 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On 8/8/23 22:54, Peter Geoghegan wrote:\n> On Tue, Aug 8, 2023 at 1:24 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>> Assuming that that happens, then it immediately gives index scans a\n>>> huge advantage over bitmap index scans. At that point it seems\n>>> important to describe (in high level terms) where it is that the\n>>> advantage is innate, and where it's just because we haven't done the\n>>> required work for bitmap index scans. I became confused on this point\n>>> myself yesterday. Admittedly I should have been able to figure it out\n>>> on my own -- but it is confusing.\n>>>\n>>\n>> Yeah, I agree that might help a lot, particularly for tables that have a\n>> significant fraction of not-all-visible pages.\n> \n> It also has the potential to make the costing a lot easier in certain\n> important cases. Accurately deriving just how many heap accesses can\n> be avoided via the VM from the statistics that are available to the\n> planner is likely always going to be very difficult. Finding a way to\n> make that just not matter at all (in these important cases) can also\n> make it safe to bias the costing, such that the planner tends to favor\n> index scans (and index-only scans) over bitmap index scans that cannot\n> possibly eliminate any heap page accesses via an index filter qual.\n> \n\nYes, if there's a way to safely skip the visibility check for some\nconditions, that would probably make the costing simpler.\n\nAnyway, I find this discussion rather abstract and I'll probably forget\nhalf the important cases by next week. Maybe it'd be good to build a set\nof examples demonstrating the interesting cases? We've already used a\ncouple tenk1 queries for that purpose ...\n\n>> Right, and I'm not against improving that, but I see it more like an\n>> independent task. 
I don't think it needs (or should) to be part of this\n>> patch - skipping visibility checks would apply to IOS, while this is\n>> aimed only at plain index scans.\n> \n> I'm certainly not going to insist on it. Worth considering if putting\n> it in scope could make certain aspects of this patch (like the\n> costing) easier, though.\n> \n> I think that it wouldn't be terribly difficult to make simple\n> inequalities into true index quals. I think I'd like to have a go at\n> it myself. To some degree I'm trying to get a sense of how much that'd\n> help you.\n> \n\nI'm trying to make the patch not dependent on such a change. In a way,\nonce a clause gets recognized as an index qual, it becomes irrelevant for\nmy patch. But the patch also doesn't get any simpler, because it still\nneeds to do the same thing for the remaining quals.\n\nOTOH if there was some facility to decide if a qual is \"safe\" to be\nexecuted on the index tuple, that'd be nice. But as I already said, I\nsee it more as an additional optimization, as it only applies to a\nsubset of cases.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 9 Aug 2023 19:00:34 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Wed, Aug 9, 2023 at 10:00 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Anyway, I find this discussion rather abstract and I'll probably forget\n> half the important cases by next week. Maybe it'd be good to build a set\n> of examples demonstrating the interesting cases? We've already used a\n> couple tenk1 queries for that purpose ...\n\nThat seems wise. I'll try to come up with a list of general principles\nwith specific and easy to run examples later on today.\n\n> I'm trying to make the patch to not dependent on such change. In a way,\n> once a clause gets recognized as index qual, it becomes irrelevant for\n> my patch. But the patch also doesn't get any simpler, because it still\n> needs to do the same thing for the remaining quals.\n\nPractically speaking, I don't see any reason why you won't be able to\nsign off on the set of principles that I'll lay out for work in this\narea, while at the same time continuing with this patch more or less\nas originally planned.\n\nAt one point I thought that your patch might be obligated to\ncompensate for its tendency to push down OR-heavy clauses as\nexpressions, leading to \"risky\" plans. While I still have a concern\nabout that now, I'm not going to try to make it your problem. I'm now\ninclined to think of this as an existing problem, that your patch will\nincrease the prevalence of -- but not to the extent that it makes\nsense to hold it up. To some extent it is up to me to put my money\nwhere my mouth is.\n\nI'm optimistic about the prospect of significantly ameliorating (if\nnot fixing) the \"risky OR expression plan\" problem in the scope of my\nwork on 17. But even if that doesn't quite happen (say we don't end up\nnormalizing to CNF in the way that I've suggested for 17), we should\nat least reach agreement on the best way forward. 
If we could just\nagree that evaluating complicated OR expressions in index filters is\nmuch worse than finding a way to pass them down as an equivalent index\nqual (an SAOP), then I could live with it for another release or two.\n\nAs I said, I mostly just care about having the right general\nprinciples in place at this point.\n\n> OTOH if there was some facility to decide if a qual is \"safe\" to be\n> executed on the index tuple, that'd be nice. But as I already said, I\n> see it more as an additional optimization, as it only applies to a\n> subset of cases.\n\nI'm happy to go with that.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 9 Aug 2023 10:53:35 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On 8/9/23 19:53, Peter Geoghegan wrote:\n> On Wed, Aug 9, 2023 at 10:00 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> Anyway, I find this discussion rather abstract and I'll probably forget\n>> half the important cases by next week. Maybe it'd be good to build a set\n>> of examples demonstrating the interesting cases? We've already used a\n>> couple tenk1 queries for that purpose ...\n> \n> That seems wise. I'll try to come up with a list of general principles\n> with specific and easy to run examples later on today.\n> \n\nCool. I'll try to build my own set of examples that I find interesting\neither because it's what the patch aims to help with, or because I\nexpect it to be problematic for some reason. And then we can compare.\n\n>> I'm trying to make the patch to not dependent on such change. In a way,\n>> once a clause gets recognized as index qual, it becomes irrelevant for\n>> my patch. But the patch also doesn't get any simpler, because it still\n>> needs to do the same thing for the remaining quals.\n> \n> Practically speaking, I don't see any reason why you won't be able to\n> sign off on the set of principles that I'll lay out for work in this\n> area, while at the same time continuing with this patch more or less\n> as originally planned.\n> \n> At one point I thought that your patch might be obligated to\n> compensate for its tendency to push down OR-heavy clauses as\n> expressions, leading to \"risky\" plans. While I still have a concern\n> about that now, I'm not going to try to make it your problem. I'm now\n> inclined to think of this as an existing problem, that your patch will\n> increase the prevalence of -- but not to the extent that it makes\n> sense to hold it up. To some extent it is up to me to put my money\n> where my mouth is.\n> \n> I'm optimistic about the prospect of significantly ameliorating (if\n> not fixing) the \"risky OR expression plan\" problem in the scope of my\n> work on 17. 
But even if that doesn't quite happen (say we don't end up\n> normalizing to CNF in the way that I've suggested for 17), we should\n> at least reach agreement on the best way forward. If we could just\n> agree that evaluating complicated OR expressions in index filters is\n> much worse than finding a way to pass them down as an equivalent index\n> qual (an SAOP), then I could live with it for another release or two.\n> \n\nYup, I agree with that principle. The AM can evaluate the expression in\na smarter way, without the visibility checks.\n\n> As I said, I mostly just care about having the right general\n> principles in place at this point.\n> \n>> OTOH if there was some facility to decide if a qual is \"safe\" to be\n>> executed on the index tuple, that'd be nice. But as I already said, I\n>> see it more as an additional optimization, as it only applies to a\n>> subset of cases.\n> \n> I'm happy to go with that.\n> \n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 9 Aug 2023 20:15:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Wed, Aug 9, 2023 at 11:15 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Cool. I'll try to build my own set of examples that I find interesting\n> either because it's what the patch aims to help with, or because I\n> expect it to be problematic for some reason. And then we can compare.\n\nThat would be great. I definitely want to make this a collaborative thing.\n\n> Yup, I agree with that principle. The AM can evaluate the expression in\n> a smarter way, without the visibility checks.\n\nAttached text document (which I guess might be my first draft) is an\nattempt to put the discussion up until this point on a more formal\nfooting.\n\nThe format here tries to reduce the principles to a handful of bullet\npoints. For example, one line reads:\n\n+ Index quals are better than equivalent index filters because bitmap\nindex scans can only use index quals\n\nI'm pretty sure that these are all points that you and I both agree on\nalready. But you should confirm that. And make your own revisions, as\nyou see fit.\n\nIt's definitely possible that I overlooked an interesting and durable\nadvantage that index filters have. If there is some other way in which\nindex filters are likely to remain the best and only viable approach,\nthen we should note that. I just went with the really obvious case of\nan expression that definitely needs visibility checks to avoid ever\nthrowing a division-by-zero error, related to some invisible tuple.\nIt's an extreme case in that it focuses on requirements that seem just\nabout unavoidable in any future world. (Come to think of it, the\nbiggest and most durable advantage for index filters is probably just\nhow general they are, which I do mention.)\n\n-- \nPeter Geoghegan",
"msg_date": "Wed, 9 Aug 2023 17:14:00 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Wed, 2023-08-09 at 17:14 -0700, Peter Geoghegan wrote:\n> + Index quals are better than equivalent index filters because bitmap\n> index scans can only use index quals\n\nIt seems there's consensus that:\n\n * Index Filters (Tomas's patch and the topic of this thread) are more\ngeneral, because they can work on any expression, e.g. 1/x, which can\nthrow an error if x=0.\n * Index quals are more optimizable, because such operators are not\nsupposed to throw errors or have side effects, so we can evaluate them\nbefore determining visibility.\n\nI wouldn't describe one as \"better\" than the other, but I assume you\nmeant \"more optimizable\".\n\nIt's interesting that there's overlap in utility between Tomas's\ncurrent patch and Peter's work on optimizing SAOPs. But I don't see a\nlot of tension there -- it seems like Tomas's patch will always be\nuseful for filters that might throw an error (or have other side\neffects).\n\nIs there any part of the design here that's preventing this patch from\nmoving forward? If not, what are the TODOs for the current patch, or is\nit just waiting on more review?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 17 Aug 2023 16:29:56 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 4:29 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Wed, 2023-08-09 at 17:14 -0700, Peter Geoghegan wrote:\n> > + Index quals are better than equivalent index filters because bitmap\n> > index scans can only use index quals\n>\n> It seems there's consensus that:\n>\n> * Index Filters (Tomas's patch and the topic of this thread) are more\n> general, because they can work on any expression, e.g. 1/x, which can\n> throw an error if x=0.\n\nAgreed, but with one small caveat: they're not more general when it\ncomes to bitmap index scans, which seem like they'll only ever be able\nto support index quals. But AFAICT that's the one and only exception.\n\n> * Index quals are more optimizable, because such operators are not\n> supposed to throw errors or have side effects, so we can evaluate them\n> before determining visibility.\n\nRight. Though there is a second reason: the index AM can use them to\nnavigate the index more efficiently with true index quals. At least in\nthe case of SAOPs, and perhaps in other cases in the future.\n\n> I wouldn't describe one as \"better\" than the other, but I assume you\n> meant \"more optimizable\".\n\nThe use of the term \"better\" is actually descriptive of Tomas' patch.\nIt is structured in a way that conditions the use of index filters on\nthe absence of equivalent index quals for eligible/indexed clauses. So\nit really does make sense to think of it as a \"qual hierarchy\" (to use\nTomas' term).\n\nIt's also true that it will always be fundamentally impossible to use\nindex quals for a significant proportion of all clause types, so\n\"better when feasible at all\" is slightly more accurate.\n\n> Is there any part of the design here that's preventing this patch from\n> moving forward? 
If not, what are the TODOs for the current patch, or is\n> it just waiting on more review?\n\nPractically speaking, I have no objections to moving ahead with this.\nI believe that the general structure of the patch makes sense, and I\ndon't expect Tomas to do anything that wasn't already expected before\nI chimed in.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 18 Aug 2023 15:19:06 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Tue, Aug 8, 2023 at 11:36 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > The only thing the patch does is it looks at clauses we decided not to\n> > treat as index quals, and do maybe still evaluate them on index. And I\n> > don't think I want to move these goalposts much further.\n>\n> Avoiding the need for visibility checks completely (in at least a\n> subset of cases) was originally your idea. I'm not going to insist on\n> it, or anything like that. It just seems like something that'll add a\n> great deal of value over what you have already.\n\nAnother idea in this area occurred to me today: it would be cool if\nnon-key columns from INCLUDE indexes could completely avoid visibility\nchecks (not just avoid heap accesses using the visibility map) in\nroughly the same way that we'd expect with an equivalent key column\nalready, today (if it was an index filter index qual). Offhand I think\nthat it would make sense to do that outside of index AMs, by extending\nthe mechanism from Tomas' patch to this special class of expression.\nWe'd invent some other class of index filter that \"outranks\"\nconventional index filters when the optimizer can safely determine\nthat they're \"index filters with the visibility characteristics of\ntrue index quals\". I am mostly thinking of simple, common cases here\n(e.g., an equality operator + constant).\n\nThis might require the involvement of the relevant btree opclass,\nsince that's where the no-visibility-check-safety property actually\ncomes from. However, it wouldn't need to be limited to INCLUDE B-Tree\nindexes. You could for example do this with a GiST INCLUDE index that\nhad no opclass information about the datatype/operator. That'd be a\nnatural advantage of an approach based on index filters.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 18 Aug 2023 17:49:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 12:34 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 8/7/23 02:38, Peter Geoghegan wrote:\n> > This plan switchover isn't surprising in itself -- it's one of the most\n> > important issues addressed by my SAOP patch. However, it *is* a little\n> > surprising that your patch doesn't even manage to use \"Index Filter\"\n> > quals. It appears that it is only capable of using table filter quals.\n> > Obviously, the index has all the information that expression\n> > evaluation needs, and yet I see \"Filter: (tenk1.tenthous = ANY\n> > ('{1,3,42,43,44,45,46,47,48,49,50}'::integer[]))\". So no improvement\n> > over master here.\n\n> Right. This happens because the matching of SAOP to indexes happens in\n> multiple places. Firstly, create_index_paths() matches the clauses to\n> the index by calling\n>\n> match_restriction_clauses_to_index\n> -> match_clauses_to_index\n> -> match_clause_to_index\n>\n> Which is where we also decide which *unmatched* clauses can be filters.\n> And this *does* match the SAOP to the index key, hence no index filter.\n>\n> But then we call get_index_paths/build_index_path a little bit later,\n> and that decides to skip \"lower SAOP\" (which seems a bit strange,\n> because the column is \"after\" the equality, but meh). Anyway, at this\n> point we already decided what's a filter, ignoring the index clauses,\n> and not expecting any backsies.\n>\n> The simples fix seems to be to add these skipped SAOP clauses as\n> filters. We know it can be evaluated on the index ...\n\nUpdate on this: I recently posted v2 of my patch, which completely\nremoves build_index_paths's \"skip_lower_saop\" mechanism. This became\npossible in v2 because it fully eliminated all of the advantages that\nSAOP style filter quals might have had over true index quals, through\nfurther enhancements on the nbtree side. 
There is simply no reason to\ngenerate alternative index paths with filter quals in the first place.\n(As I seem to like to say, \"choice is confusion\".)\n\nIn short, v2 of my patch fully adheres to the principles set out in\nthe \"qual hierarchy\" doc. The planner no longer needs to know anything\nabout how nbtree executes SAOP index quals, except when costing them.\nTo the planner, there is pretty much no difference between \"=\" and \"=\nANY()\" (for index AMs that natively support SAOP execution).\n\nI imagine that this general planner structure will be ideal for your\npatch. If I'm not mistaken, it will allow you to completely avoid\ntreating SAOPs as a special case. Although the build_index_paths\n\"skip_lower_saop\" thing might have created issues for the approach\nyour patch takes in the planner, that seems to me to work best as an\nargument against the \"skip_lower_saop\" mechanism -- it was always a\nkludge IMV.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 18 Sep 2023 13:35:11 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "Hi,\n\nIt took me a while but I finally got back to working on this patch. I\nreread the summary of principles Peter shared in August, and I do agree\nwith all the main points - the overall qual hierarchy and the various\nexamples and counter-examples.\n\nI'll respond to a couple things in this sub-thread, and then I plan to\npost an updated version of the patch with a discussion of a couple\nproblems I still haven't managed to solve (not necessarily new ones).\n\nOn 8/19/23 00:19, Peter Geoghegan wrote:\n> On Thu, Aug 17, 2023 at 4:29 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>> On Wed, 2023-08-09 at 17:14 -0700, Peter Geoghegan wrote:\n>>> + Index quals are better than equivalent index filters because bitmap\n>>> index scans can only use index quals\n>>\n>> It seems there's consensus that:\n>>\n>> * Index Filters (Tomas's patch and the topic of this thread) are more\n>> general, because they can work on any expression, e.g. 1/x, which can\n>> throw an error if x=0.\n> \n> Agreed, but with one small caveat: they're not more general when it\n> comes to bitmap index scans, which seem like they'll only ever be able\n> to support index quals. But AFAICT that's the one and only exception.\n> \n\nYeah. Some conditions are never gonna be translated into proper index\nquals, either because it's not really possible or just because no one\nimplemented that.\n\nFWIW it's not immediately obvious to me if bitmap index scans are unable\nto support index filters because of some inherent limitation, or whether\nit's something we could implement. Some index AMs simply can't support\nindex filters, because they don't know the \"full\" index tuple\n(e.g. GIN), but maybe other AMs could support that ...\n\n>> * Index quals are more optimizable, because such operators are not\n>> supposed to throw errors or have side effects, so we can evaluate them\n>> before determining visibility.\n> \n> Right. 
Though there is a second reason: the index AM can use them to\n> navigate the index more efficiently with true index quals. At least in\n> the case of SAOPs, and perhaps in other cases in the future.\n> \n>> I wouldn't describe one as \"better\" than the other, but I assume you\n>> meant \"more optimizable\".\n> \n> The use of the term \"better\" is actually descriptive of Tomas' patch.\n> It is structured in a way that conditions the use of index filters on\n> the absence of equivalent index quals for eligible/indexed clauses. So\n> it really does make sense to think of it as a \"qual hierarchy\" (to use\n> Tomas' term).\n> \n> It's also true that it will always be fundamentally impossible to use\n> index quals for a significant proportion of all clause types, so\n> \"better when feasible at all\" is slightly more accurate.\n> \n\nYeah, I agree with all of this too. I think that in all practical cases\nan index qual is strictly better than an index filter, for exactly these\nreasons.\n\n>> Is there any part of the design here that's preventing this patch from\n>> moving forward? If not, what are the TODOs for the current patch, or is\n>> it just waiting on more review?\n> \n> Practically speaking, I have no objections to moving ahead with this.\n> I believe that the general structure of the patch makes sense, and I\n> don't expect Tomas to do anything that wasn't already expected before\n> I chimed in.\n> \n\nI agree the general idea is sound. The main \"problem\" in the original\npatch was about costing changes and the implied risk of picking the\nwrong plan (instead of e.g. a bitmap index scan). 
That's pretty much\nwhat the \"risk\" in example (4) in the principles.txt is about, AFAICS.\n\nI think the consensus is that if we don't change the cost, this issue\nmostly goes away - we won't pick an index scan in cases where we'd pick a\ndifferent plan without the patch, and for index scans it's (almost)\nalways correct to replace \"table filter\" with \"index filter\".\n\nIf that issue goes away, I think there are two smaller ones - picking\nwhich conditions to promote to index filters, and actually tweaking the\nindex scan code to do that.\n\nThe first issue seems similar to the costing issue - in fact, it really\nis a costing problem. The goal is to minimize the amount of work needed\nto evaluate the conditions on all rows, and if we knew selectivity+cost\nfor each condition, it'd be a fairly simple matter. But in practice we\ndon't know this - the selectivity estimates are rather shaky (and doubly\nso for correlated conditions), and the procedure cost estimates are\nquite crude too ...\n\nThe risk is we might move \"forward\" a very expensive condition. Imagine a\ncondition with procost=1000000, which would normally be evaluated last\nafter all other clauses. But if we promote it to an index filter, that'd\nno longer be true.\n\nWhat Jeff proposed last week was roughly this:\n\n- find min(procost) for clauses that can't be index filters\n- consider all clauses with procost <= min(procost) as index filters\n\nBut after looking at this I realized order_qual_clauses() does a very\nsimilar thing for regular clauses, and the proposed logic contradicts\nthe order_qual_clauses() heuristics a bit.\n\nIn particular, order_qual_clauses() orders the clauses by procost, but\nthen it's careful not to reorder the clauses with the same procost. With\nthe proposed heuristics, that'd change - the clauses might get reordered\nin various ways, possibly with procost values [1, 10, 100, 1, 10, 100]\nor something like that.\n\nNot sure if we want to relax the heuristics like this. 
There's also the\npractical issue that order_qual_clauses() gets called quite late, long\nafter the index filters were built. And it also considers security\nlevels, which I ignored until now.\n\nBut now that I think about it, with the original patch it had to be\ndecided when building the path, because that's when the costing happens.\nWith the costing changes removed, we can probably do that much later\nwhile creating the plan, after order_qual_clauses() does all the work.\n\nWe could walk the qpqual list, and stop on the first element that can't\nbe treated as an index filter. That'd also handle the security levels, and\nit'd *almost* do what Jeff proposed, I think.\n\nThe main difference is that it'd not reorder clauses with the same\nprocost. With multiple clauses with the same procost, it'd depend on\nwhich one happens to be first in the list, which AFAIK depends on the\norder of conditions in the query. That may be a bit surprising, I guess,\nbecause it seems natural that in a declarative language this should not\nreally matter. Yes, we already do that, but it probably does not have a\nhuge impact. If it affects whether a condition will be treated as an\nindex filter, the differences are likely to be much more significant.\n\n(FWIW this reminds me of the issues we had with GROUP BY optimization,\nwhich ultimately got reverted because the reordering was considered too\nrisky. I'm afraid we might end up in the same situation ...)\n\n\nAs for the other issue, that's more about how to deal with the various\nquals in the generic scan code. Imagine we have multiple conditions, and\nsome of them can be treated as index filters. So for example we may\nend up with this:\n\n quals = [A, B, C, D]\n index-filters = [A, B]\n\nAnd now a tuple can fall into three categories:\n\n a) all-visible=false, i.e. 
can't use index filter => quals\n\n b) all-visible=true, and index-filters evaluates to false\n\n c) all-visible=true, and index-filters evaluates to true\n\nThe first two cases are fine, but for (c) we need to also evaluate the\nremaining non-filter quals [C, D]. We may build such a list, and the 0002\npatch does that, and shows the list in explain. But where/how would we\nevaluate it? The code in execScan.c uses the ScanState->ps.qual, which\nis set to [A,B,C,D]. Which is correct, but it does more work than\nstrictly necessary - the index filters are evaluated again. We can't\neasily change that to [C,D] -> that would break the (a) case. Which\nquals are evaluated on the heap tuple depends on the visibility map and on\nthe index-filter result.\n\nI think there are two options. First, we may accept that some of the index\nfilters may get evaluated twice. That's not great, because it increases\nthe risk of regressions. Second, we could disable ScanState->ps.quals if\nthere are index filters, and just do all the work in nodeIndexscan.\n\nBut that seems quite ugly - in a way, the code already does that, which\nis where the two loops\n\n while (true)\n {\n for (;;)\n {\n ...\n }\n }\n\ncome from. I was hoping to simplify / get rid of this, not make it do\neven more stuff ... :-(\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 20 Dec 2023 01:39:27 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
},
{
"msg_contents": "On 8/19/23 02:49, Peter Geoghegan wrote:\n> On Tue, Aug 8, 2023 at 11:36 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>>> The only thing the patch does is it looks at clauses we decided not to\n>>> treat as index quals, and do maybe still evaluate them on index. And I\n>>> don't think I want to move these goalposts much further.\n>>\n>> Avoiding the need for visibility checks completely (in at least a\n>> subset of cases) was originally your idea. I'm not going to insist on\n>> it, or anything like that. It just seems like something that'll add a\n>> great deal of value over what you have already.\n> \n> Another idea in this area occurred to me today: it would be cool if\n> non-key columns from INCLUDE indexes could completely avoid visibility\n> checks (not just avoid heap accesses using the visibility map) in\n> roughly the same way that we'd expect with an equivalent key column\n> already, today (if it was an index filter index qual). Offhand I think\n> that it would make sense to do that outside of index AMs, by extending\n> the mechanism from Tomas' patch to this special class of expression.\n> We'd invent some other class of index filter that \"outranks\"\n> conventional index filters when the optimizer can safely determine\n> that they're \"index filters with the visibility characteristics of\n> true index quals\". I am mostly thinking of simple, common cases here\n> (e.g., an equality operator + constant).\n> \n> This might require the involvement of the relevant btree opclass,\n> since that's where the no-visibility-check-safety property actually\n> comes from. However, it wouldn't need to be limited to INCLUDE B-Tree\n> indexes. You could for example do this with a GiST INCLUDE index that\n> had no opclass information about the datatype/operator. That'd be a\n> natural advantage of an approach based on index filters.\n> \n\nI haven't really thought about INCLUDE columns at all. 
I agree it seems\ndoable and possibly quite useful, and doing it outside the index AM\nseems about right. The index does not know about the opclass for these\ncolumns, it just stores them; why/how should it be responsible to do\nsomething with it? And as you said, it seems like a (fairly natural?)\nextension of the patch discussed in this thread.\n\nThat being said, I don't intend to make this work in v1. Once the other\nissues get solved in some way, I may try hacking a WIP, but mostly to\ncheck whether there's some design issue that'd make it hard in the future.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 20 Dec 2023 01:45:29 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use of additional index columns in rows filtering"
}
] |
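The promotion heuristic discussed in the thread above (walk the quals in order_qual_clauses() order and stop at the first clause that cannot become an index filter) can be sketched in isolation. This is a toy model, not planner code: `Clause` and `count_index_filters` are invented names, and the real decision would operate on RestrictInfo nodes inside the planner.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified view of a restriction clause: just the
 * properties the heuristic needs.  In PostgreSQL these would come from
 * RestrictInfo and the planner's cost model. */
typedef struct Clause
{
    double      procost;          /* estimated per-row evaluation cost */
    bool        index_filter_ok;  /* can be evaluated on the index tuple? */
} Clause;

/*
 * Given clauses already sorted by procost (as order_qual_clauses() does),
 * return how many leading clauses to promote to index filters.  Walking
 * the ordered list and stopping at the first non-promotable clause keeps
 * the relative order of equal-cost clauses intact, unlike re-sorting.
 */
size_t
count_index_filters(const Clause *clauses, size_t nclauses)
{
    size_t      n = 0;

    while (n < nclauses && clauses[n].index_filter_ok)
        n++;
    return n;
}
```

Because the walk stops at the first non-promotable clause, an expensive clause that order_qual_clauses() placed last can never be pulled forward past it, which addresses the procost=1000000 concern raised above.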
[
{
"msg_contents": "Hello hackers,\n\nI've been testing various edge-cases of timestamptz and related types\nand noticed that despite being a 16-byte wide type, interval overflows\nfor some timestamptz (8-byte) subtractions (timestamp_mi).\nA simple example of this would be:\n\nselect timestamptz'294276-12-31 23:59:59 UTC' - timestamptz'1582-10-15\n00:00:00 UTC';\n\nYielding:\ninterval'-106599615 days -08:01:50.551616'\n\nThis makes sense from the implementation point of view, since both\ntimestamptz and Interval->TimeOffset are int64.\n\nThe patch attached simply throws an error when an overflow is\ndetected. However I'm not sure this is a reasonable approach for a\ncode path that could be very hot in some workloads. Another\nconsideration is that regardless of the values of the timestamps, the\nabsolute value of the difference can be stored in a uint64. However\nthat observation has little practical value.\n\nThat being said I'm willing to work on a fix that makes sense and\nmaking it commit ready (or step aside if someone else wants to take\nover) but I'd also understand if this is marked as \"not worth fixing\".\n\nRegards,\nNick",
"msg_date": "Wed, 15 Feb 2023 16:07:45 +0100",
"msg_from": "Nikolai <pgnickb@gmail.com>",
"msg_from_op": true,
"msg_subject": "Silent overflow of interval type"
},
{
"msg_contents": "On Wed, Feb 15, 2023 at 7:08 AM Nikolai <pgnickb@gmail.com> wrote:\n>\n> The patch attached simply throws an error when an overflow is\n> detected. However I'm not sure this is a reasonable approach for a\n> code path that could be very hot in some workloads.\n\nGiven the extraordinary amount of overflow checks in the nearby code\nof timestamp.c, I'd say that this case should not be an exception.\nBy chance did you look at all other nearby cases, is it the only place\nwith overflow? (I took a look too, but haven't found anything\nsuspicious)\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Wed, 15 Feb 2023 12:24:24 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Silent overflow of interval type"
},
{
"msg_contents": "Andrey Borodin <amborodin86@gmail.com> writes:\n> On Wed, Feb 15, 2023 at 7:08 AM Nikolai <pgnickb@gmail.com> wrote:\n>> The patch attached simply throws an error when an overflow is\n>> detected. However I'm not sure this is a reasonable approach for a\n>> code path that could be very hot in some workloads.\n\n> Given the extraordinary amount of overflow checks in the nearby code\n> of timestamp.c, I'd say that this case should not be an exception.\n\nYeah, I don't think this would create a performance problem, at least not\nif you're using a compiler that implements pg_sub_s64_overflow reasonably.\n(And if you're not, and this bugs you, the answer is to get a better\ncompiler.)\n\n> By chance did you look at all other nearby cases, is it the only place\n> with overflow?\n\nThat was my immediate reaction as well.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Feb 2023 19:12:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Silent overflow of interval type"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 1:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, I don't think this would create a performance problem, at least not\n> if you're using a compiler that implements pg_sub_s64_overflow reasonably.\n> (And if you're not, and this bugs you, the answer is to get a better\n\nPlease find attached the v2 of the said patch with the tests added. I\ntested and it applies with all tests passing on REL_14_STABLE,\nREL_15_STABLE and master. I don't know how the decision on\nbackpatching is made and whether it makes sense here or not. If any\nadditional work is required, please let me know.\n\n> By chance did you look at all other nearby cases, is it the only place\n> with overflow?\n\nNot really, no. The other place where it could overflow was in the\ninterval justification function and it was fixed about a year ago.\nThat wasn't backpatched afaict. See\nhttps://postgr.es/m/CAAvxfHeNqsJ2xYFbPUf_8nNQUiJqkag04NW6aBQQ0dbZsxfWHA@mail.gmail.com\n\nRegards,\nNick",
"msg_date": "Thu, 16 Feb 2023 14:00:23 +0100",
"msg_from": "Nick Babadzhanian <pgnickb@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Silent overflow of interval type"
},
{
"msg_contents": "Nick Babadzhanian <pgnickb@gmail.com> writes:\n> Please find attached the v2 of the said patch with the tests added.\n\nPushed with light editing (for instance, I don't think interval.sql\nis the place to test timestamp operators, even if the result is an\ninterval).\n\n> I don't know how the decision on\n> backpatching is made and whether it makes sense here or not.\n\nWe haven't got a really hard policy on that, but in this case\nI elected not to, because it didn't seem worth the effort.\nIt seems fairly unlikely that people would hit this in production.\nAlso there's the precedent that related changes weren't backpatched.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Feb 2023 17:30:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Silent overflow of interval type"
}
] |
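The overflow check added by the fix in this thread boils down to a checked 64-bit subtraction. The sketch below is standalone and assumes a GCC/Clang-style `__builtin_sub_overflow`, which is what PostgreSQL's `pg_sub_s64_overflow()` (src/include/common/int.h) uses when the builtin is available; `timestamp_mi_usec` is an invented name for illustration, not the real `timestamp_mi`.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Checked 64-bit subtraction, in the style of PostgreSQL's
 * pg_sub_s64_overflow().  Returns true on overflow, in which case
 * *result must not be relied upon.
 */
static bool
sub_s64_overflow(int64_t a, int64_t b, int64_t *result)
{
    return __builtin_sub_overflow(a, b, result);
}

/*
 * Simplified timestamp subtraction: timestamps and the interval's time
 * field are both int64 microsecond counts, so the difference of two
 * representable timestamps can itself exceed the int64 range.
 */
static bool
timestamp_mi_usec(int64_t ts1, int64_t ts2, int64_t *diff)
{
    /* true means "would overflow"; the actual patch raises an ERROR here */
    return sub_s64_overflow(ts1, ts2, diff);
}
```

In the committed fix the true branch reports an error rather than returning a flag; the point of the sketch is only that the check is a single cheap builtin, which is why it creates no performance problem on a reasonable compiler.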
[
{
"msg_contents": "Hi,\n\nI came across another harmless thinko in numeric.c.\n\nIt is harmless since 20/DEC_DIGITS and 40/DEC_DIGITS happens to be exactly 5 and 10 since DEC_DIGITS == 4,\nbut should be fixed anyway for correctness IMO.\n\n- alloc_var(var, 20 / DEC_DIGITS);\n+ alloc_var(var, (20 + DEC_DIGITS - 1) / DEC_DIGITS);\n\n- alloc_var(var, 40 / DEC_DIGITS);\n+ alloc_var(var, (40 + DEC_DIGITS - 1) / DEC_DIGITS);\n\n/Joel",
"msg_date": "Wed, 15 Feb 2023 22:08:59 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "[PATCH] FIx alloc_var() ndigits thinko"
}
] |
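The thinko is easiest to see with the digits-per-word count as a parameter: truncating division happens to give the right answer only because 20 and 40 are multiples of DEC_DIGITS (4). A minimal sketch, with invented helper names for illustration:

```c
/*
 * Number of NumericDigit "words" needed to hold ndigits decimal digits.
 * Truncating division under-allocates whenever ndigits is not a multiple
 * of dec_digits; adding (dec_digits - 1) turns it into ceiling division.
 */
static int
words_truncating(int ndigits, int dec_digits)
{
    return ndigits / dec_digits;
}

static int
words_ceiling(int ndigits, int dec_digits)
{
    return (ndigits + dec_digits - 1) / dec_digits;
}
```

With dec_digits = 4 the two forms agree (20/4 = 5, 40/4 = 10), so the bug is currently harmless; but if DEC_DIGITS were ever 3, the truncating form would allocate 6 words for 20 digits where 7 are needed.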
[
{
"msg_contents": "While doing some benchmarking of some fast-to-execute queries, I see\nset_ps_display() popping up in the profiles. Looking a little\ndeeper, there are some inefficiencies in there that we could fix.\n\nFor example, the following is pretty poor:\n\nstrlcpy(ps_buffer + ps_buffer_fixed_size, activity,\n ps_buffer_size - ps_buffer_fixed_size);\nps_buffer_cur_len = strlen(ps_buffer);\n\nWe already know the strlen of the fixed-sized part, so why bother\ndoing strlen on the entire thing? Also, if we did just do\nstrlen(activity), we could just memcpy, which would be much faster\nthan strlcpy's byte-at-a-time method of copying.\n\nAdjusting that led me to notice that we often just pass string\nconstants to set_ps_display(), so we already know the strlen for this\nat compile time. So maybe we can just have set_ps_display_with_len()\nand then make a static inline wrapper that does strlen() so that when\nthe compiler can figure out the length, it just hard codes it.\n\nAfter doing that, I went over all usages of set_ps_display() to see if\nany of those call sites knew the length already in a way that the\ncompiler wouldn't be able to deduce. There were a few cases to adjust\nwhen setting the process title to contain the command tag.\n\nAfter fixing up the set_ps_display()s to use set_ps_display_with_len()\nwhere possible, I discovered some not so nice code which appends \"\nwaiting\" onto the process title. Basically, there's a bunch of code\nthat looks like this:\n\nconst char *old_status;\nint len;\n\nold_status = get_ps_display(&len);\nnew_status = (char *) palloc(len + 8 + 1);\nmemcpy(new_status, old_status, len);\nstrcpy(new_status + len, \" waiting\");\nset_ps_display(new_status);\nnew_status[len] = '\\0'; /* truncate off \" waiting\" */\n\nSeeing that made me wonder if we shouldn't just have something more\ngeneric for setting a suffix on the process title. I came up with\nset_ps_display_suffix() and set_ps_display_remove_suffix(). 
The above\ncode can just become:\n\nset_ps_display_suffix(\"waiting\");\n\nthen to remove the \"waiting\" suffix, just:\n\nset_ps_display_remove_suffix();\n\nI considered adding a format version to append the suffix as there's\none case that could make use of it, but in the end, decided it might\nbe overkill, so I left that code like:\n\nchar buffer[32];\n\nsprintf(buffer, \"waiting for %X/%X\", LSN_FORMAT_ARGS(lsn));\nset_ps_display_suffix(buffer);\n\nI don't think that's terrible enough to warrant making a va_args\nversion of set_ps_display_suffix(), especially for just 1 instance of\nit.\n\nI also resisted making set_ps_display_suffix_with_len(). The new code\nshould be quite a bit\nfaster already without troubling over that additional function.\n\nI've attached the patch.\n\nDavid",
"msg_date": "Thu, 16 Feb 2023 14:19:24 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Make set_ps_display faster and easier to use"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-16 14:19:24 +1300, David Rowley wrote:\n> After fixing up the set_ps_display()s to use set_ps_display_with_len()\n> where possible, I discovered some not so nice code which appends \"\n> waiting\" onto the process title. Basically, there's a bunch of code\n> that looks like this:\n> \n> const char *old_status;\n> int len;\n> \n> old_status = get_ps_display(&len);\n> new_status = (char *) palloc(len + 8 + 1);\n> memcpy(new_status, old_status, len);\n> strcpy(new_status + len, \" waiting\");\n> set_ps_display(new_status);\n> new_status[len] = '\\0'; /* truncate off \" waiting\" */\n\nYea, that code is atrocious... It took me a while to figure out that no,\nLockBufferForCleanup() isn't leaking memory, because it'll always reach the\ncleanup path *further up* in the function.\n\n\nAvoiding the allocation across loop iterations seems like a completely\npointless optimization in these paths - we add the \" waiting\", precisely\nbecause it's a slow path. But of course not allocating memory would be even\nbetter...\n\n\n> Seeing that made me wonder if we shouldn't just have something more\n> generic for setting a suffix on the process title. I came up with\n> set_ps_display_suffix() and set_ps_display_remove_suffix(). The above\n> code can just become:\n> \n> set_ps_display_suffix(\"waiting\");\n> \n> then to remove the \"waiting\" suffix, just:\n> \n> set_ps_display_remove_suffix();\n\nThat'd definitely be better.\n\n\nIt's not really a topic for this patch, but somehow the fact that we have\nthese set_ps_display() calls all over feels wrong, particularly because most\nof them are paired with a pgstat_report_activity() call. 
It's not entirely\nobvious how it should be instead, but it doesn't feel right.\n\n\n\n> +/*\n> + * set_ps_display_suffix\n> + *\t\tAdjust the process title to append 'suffix' onto the end with a space\n> + *\t\tbetween it and the current process title.\n> + */\n> +void\n> +set_ps_display_suffix(const char *suffix)\n> +{\n> +\tsize_t\tlen;\n\nThink this will give you an unused-variable warning in the PS_USE_NONE case.\n\n> +#ifndef PS_USE_NONE\n> +\t/* update_process_title=off disables updates */\n> +\tif (!update_process_title)\n> +\t\treturn;\n> +\n> +\t/* no ps display for stand-alone backend */\n> +\tif (!IsUnderPostmaster)\n> +\t\treturn;\n> +\n> +#ifdef PS_USE_CLOBBER_ARGV\n> +\t/* If ps_buffer is a pointer, it might still be null */\n> +\tif (!ps_buffer)\n> +\t\treturn;\n> +#endif\n\nThis bit is now repeated three times. How about putting it into a helper?\n\n\n\n\n> +#ifndef PS_USE_NONE\n> +static void\n> +set_ps_display_internal(void)\n\nVery very minor nit: Perhaps this should be update_ps_display() or\nflush_ps_display() instead?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Feb 2023 17:01:55 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Make set_ps_display faster and easier to use"
},
{
"msg_contents": "Thank you for having a look at this.\n\nOn Fri, 17 Feb 2023 at 14:01, Andres Freund <andres@anarazel.de> wrote:\n> > +set_ps_display_suffix(const char *suffix)\n> > +{\n> > + size_t len;\n>\n> Think this will give you an unused-variable warning in the PS_USE_NONE case.\n\nFixed\n\n> > +#ifndef PS_USE_NONE\n> > + /* update_process_title=off disables updates */\n> > + if (!update_process_title)\n> > + return;\n> > +\n> > + /* no ps display for stand-alone backend */\n> > + if (!IsUnderPostmaster)\n> > + return;\n> > +\n> > +#ifdef PS_USE_CLOBBER_ARGV\n> > + /* If ps_buffer is a pointer, it might still be null */\n> > + if (!ps_buffer)\n> > + return;\n> > +#endif\n>\n> This bit is now repeated three times. How about putting it into a helper?\n\nGood idea. Done.\n\n> > +set_ps_display_internal(void)\n>\n> Very very minor nit: Perhaps this should be update_ps_display() or\n> flush_ps_display() instead?\n\nI called the precheck helper update_ps_display_precheck(), so went\nwith flush_ps_display() for updating the display so they both didn't\nstart with \"update\".\n\nUpdated patch attached.\n\nDavid",
"msg_date": "Fri, 17 Feb 2023 21:44:06 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Make set_ps_display faster and easier to use"
},
{
"msg_contents": "On Fri, 17 Feb 2023 at 21:44, David Rowley <dgrowleyml@gmail.com> wrote:\n> Updated patch attached.\n\nAfter making another couple of small adjustments, I've pushed this.\n\nThanks for the review.\n\nDavid\n\n\n",
"msg_date": "Mon, 20 Feb 2023 16:19:39 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Make set_ps_display faster and easier to use"
}
] |
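The two ideas from the patch in this thread (a length-taking worker plus a static inline wrapper, so that strlen() on a string literal folds to a compile-time constant) can be sketched roughly as below. The function names mirror the patch, but the bodies and buffer sizes are simplified stand-ins, not the committed code.

```c
#include <string.h>

/* Sketch of the fixed-prefix process-title buffer described above.
 * Sizes and the fixed prefix length are illustrative. */
static char   ps_buffer[256];
static size_t ps_buffer_fixed_size = 10;   /* e.g. strlen("postgres: ") */
static size_t ps_buffer_cur_len;

/*
 * Length-taking worker: with the length known, a single memcpy replaces
 * strlcpy's byte-at-a-time copy, and the total length is computed by
 * addition instead of running strlen() over the whole buffer.
 */
static void
set_ps_display_with_len(const char *activity, size_t len)
{
    size_t      avail = sizeof(ps_buffer) - ps_buffer_fixed_size - 1;

    if (len > avail)
        len = avail;
    memcpy(ps_buffer + ps_buffer_fixed_size, activity, len);
    ps_buffer[ps_buffer_fixed_size + len] = '\0';
    ps_buffer_cur_len = ps_buffer_fixed_size + len;
}

/*
 * Inline wrapper: when the argument is a string literal, the compiler can
 * evaluate strlen() at compile time and call the worker with a constant.
 */
static inline void
set_ps_display(const char *activity)
{
    set_ps_display_with_len(activity, strlen(activity));
}
```

With a call like set_ps_display("idle"), strlen("idle") folds to 4 at compile time, so only the memcpy-based worker runs; call sites that already track a length can call the worker directly.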
[
{
"msg_contents": "Hi,\n\nWhile working on the BRIN SK_SEARCHARRAY patch I noticed a silly bug in\nhandling clauses on multi-column BRIN indexes, introduced in PG13.\n\nConsider a simple table with two columns (a,b) and a multi-column BRIN\nindex on them:\n\n create table t (a int, b int);\n\n insert into t\n select\n mod(i,10000) + 100 * random(),\n mod(i,10000) + 100 * random()\n from generate_series(1,1000000) s(i);\n\n create index on t using brin(a int4_minmax_ops, b int4_minmax_ops)\n with (pages_per_range=1);\n\nLet's run a query with a condition on \"a\":\n\n select * from t where a = 500;\n QUERY PLAN\n -----------------------------------------------------------------\n Bitmap Heap Scan on t (actual rows=97 loops=1)\n Recheck Cond: (a = 500)\n Rows Removed by Index Recheck: 53189\n Heap Blocks: lossy=236\n -> Bitmap Index Scan on t_a_b_idx (actual rows=2360 loops=1)\n Index Cond: (a = 500)\n Planning Time: 0.075 ms\n Execution Time: 8.263 ms\n (8 rows)\n\nNow let's add another condition on b:\n\n select * from t where a = 500 and b < 800;\n\n QUERY PLAN\n -----------------------------------------------------------------\n Bitmap Heap Scan on t (actual rows=97 loops=1)\n Recheck Cond: ((a = 500) AND (b < 800))\n Rows Removed by Index Recheck: 101101\n Heap Blocks: lossy=448\n -> Bitmap Index Scan on t_a_b_idx (actual rows=4480 loops=1)\n Index Cond: ((a = 500) AND (b < 800))\n Planning Time: 0.085 ms\n Execution Time: 14.989 ms\n (8 rows)\n\nWell, that's wrong. With one condition we accessed 236 pages, and with an\nadditional condition - which should reduce the number of heap pages - we\naccessed 448 pages.\n\nThe problem is in bringetbitmap(), which failed to combine the results\nfrom the consistent function correctly (and also does not abort early).\n\nHere's a patch for that, I'll push it shortly after a bit more testing.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 16 Feb 2023 03:04:16 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Bug in processing conditions on multi-column BRIN indexes"
}
] |
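The combination rule the fix restores: a BRIN range has to be scanned only if the consistent function says "may match" for every scan key, i.e. the per-key results are ANDed, with an early abort on the first false. A toy sketch of that rule (`range_matches` and the verdict callbacks are invented for illustration; the real logic lives in bringetbitmap()):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-key "consistent" verdict for one BRIN range: true if
 * the range might contain matching rows for that key's conditions. */
typedef bool (*consistent_fn) (int keyno);

/*
 * A range must be scanned only if *every* key's consistent function says
 * it may match: results are ANDed, and we can stop at the first false.
 * Combining them any other way (e.g. letting a later key overwrite an
 * earlier false) is exactly the bug described above, where an extra
 * condition *added* heap pages instead of removing them.
 */
static bool
range_matches(consistent_fn consistent, size_t nkeys)
{
    for (size_t keyno = 0; keyno < nkeys; keyno++)
    {
        if (!consistent((int) keyno))
            return false;       /* early abort: range can't match */
    }
    return true;
}

/* Toy verdicts for experimentation: every key matches / key 1 does not. */
static bool verdict_all_true(int keyno)     { (void) keyno; return true; }
static bool verdict_second_false(int keyno) { return keyno != 1; }
```

This is why adding the b < 800 condition in the example must only shrink the set of lossy heap blocks (236 or fewer), never grow it to 448.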
[
{
"msg_contents": "Hi,\n\nHere's some archeology I did a while back, but was reminded to post\nwhen I saw David's nearby performance improvements for ps_status.c.\n\n * there are no systems with HAVE_PS_STRINGS (ancient BSD)\n * setproctitle_fast() is in all live FreeBSD releases\n * setproctitle() is in all other BSDs\n * PostgreSQL can't run on GNU/Hurd apparently, for lack of shared\nsemaphores, so who would even know if that works?\n * IRIX is rusting in peace\n * there are no other NeXT-derived systems (NeXTSTEP and OPENSTEP are departed)\n\nTherefore I think it is safe to drop the PS_USE_PS_STRING and\nPS_USE_CHANGE_ARGV code branches, remove a bunch of outdated comments\nand macro tests, and prune the defunct configure/meson probe.\n\nI guess (defined(sun) && !defined(BSD)) || defined(__svr5__) could be\nchanged to just defined(sun) (surely there are no other living\nSysV-derived systems, and I think non-BSD Sun probably meant \"Solaris\nbut not SunOS\"), but I don't know so I didn't touch that.\n\nI think the history here is that the ancient BSD sendmail code\n(conf.c) had all this stuff for BSD and SVR5 systems, but then its\nsetproctitle() function actually moved into the OS so that the\nunderlying PS_STRINGS stuff wouldn't have to be stable, and indeed it\nwas not.",
"msg_date": "Thu, 16 Feb 2023 16:52:33 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Dead code in ps_status.c"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Therefore I think it is safe to drop the PS_USE_PS_STRING and\n> PS_USE_CHANGE_ARGV code branches, remove a bunch of outdated comments\n> and macro tests, and prune the defunct configure/meson probe.\n\nSeems reasonable. Patch passes an eyeball check.\n\n> I guess (defined(sun) && !defined(BSD)) || defined(__svr5__) could be\n> changed to just defined(sun) (surely there are no other living\n> SysV-derived systems, and I think non-BSD Sun probably meant \"Solaris\n> but not SunOS\"), but I don't know so I didn't touch that.\n\nHm, is \"defined(sun)\" true on any live systems at all?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Feb 2023 00:34:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dead code in ps_status.c"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 6:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Therefore I think it is safe to drop the PS_USE_PS_STRING and\n> > PS_USE_CHANGE_ARGV code branches, remove a bunch of outdated comments\n> > and macro tests, and prune the defunct configure/meson probe.\n>\n> Seems reasonable. Patch passes an eyeball check.\n\nThanks for looking.\n\n> > I guess (defined(sun) && !defined(BSD)) || defined(__svr5__) could be\n> > changed to just defined(sun) (surely there are no other living\n> > SysV-derived systems, and I think non-BSD Sun probably meant \"Solaris\n> > but not SunOS\"), but I don't know so I didn't touch that.\n>\n> Hm, is \"defined(sun)\" true on any live systems at all?\n\nMy GCC compile farm account seems to have expired, or something, so I\ncouldn't check on wrasse's host (though whether wrasse is \"live\" is\ndebatable: Solaris 11.3 has reached EOL, it's just that the CPU is too\nold to be upgraded, so it's not testing a real OS that anyone would\nactually run PostgreSQL on). But from some googling[1], I think\n__sun, __sun__ and sun should all be defined.\n\nOhh, but __svr5__ should not be. Solaris boxes define __svr4__, I was\nconfused by the two fives. __svr5__ was SCO/Unixware, another dead\nOS[1], so I think we can just remove that one too. So, yeah, I think\nwe should replace (defined(sun) && !defined(BSD)) || defined(__svr5__)\nwith defined(__sun). (Hmph. We have all of __sun__, __sun and sun in\nthe tree.)\n\n[1] https://stackoverflow.com/questions/16618604/solaris-and-preprocessor-macros\n[2] https://en.wikipedia.org/wiki/UNIX_System_V#SVR5_/_UnixWare_7\n\n\n",
"msg_date": "Thu, 16 Feb 2023 19:16:14 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dead code in ps_status.c"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Feb 16, 2023 at 6:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hm, is \"defined(sun)\" true on any live systems at all?\n\n> My GCC compile farm account seems to have expired, or something, so I\n> couldn't check on wrasse's host (though whether wrasse is \"live\" is\n> debatable: Solaris 11.3 has reached EOL, it's just that the CPU is too\n> old to be upgraded, so it's not testing a real OS that anyone would\n> actually run PostgreSQL on). But from some googling[1], I think\n> __sun, __sun__ and sun should all be defined.\n\nMy account still works, and what I see on wrasse's host is\n\ntgl@gcc-solaris11:~$ gcc -x c /dev/null -dM -E | grep -i svr\n#define __SVR4 1\n#define __svr4__ 1\ntgl@gcc-solaris11:~$ gcc -x c /dev/null -dM -E | grep -i sun\n#define __sun 1\n#define sun 1\n#define __sun__ 1\n\nI don't know a way to get the list of predefined macros out of the\ncompiler wrasse is actually using (/opt/developerstudio12.6/bin/cc),\nbut doing some experiments with #ifdef confirmed that it defines\n__sun, __sun__, and __svr4__, but not __svr5__.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Feb 2023 09:38:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dead code in ps_status.c"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 3:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> My account still works, and what I see on wrasse's host is\n>\n> tgl@gcc-solaris11:~$ gcc -x c /dev/null -dM -E | grep -i svr\n> #define __SVR4 1\n> #define __svr4__ 1\n> tgl@gcc-solaris11:~$ gcc -x c /dev/null -dM -E | grep -i sun\n> #define __sun 1\n> #define sun 1\n> #define __sun__ 1\n>\n> I don't know a way to get the list of predefined macros out of the\n> compiler wrasse is actually using (/opt/developerstudio12.6/bin/cc),\n> but doing some experiments with #ifdef confirmed that it defines\n> __sun, __sun__, and __svr4__, but not __svr5__.\n\nThanks. I went with __sun, because a random man page google found me\nfor Sun \"cc\" mentioned that but not __sun__. Pushed.\n\nhttp://www.polarhome.com/service/man/?qf=cc&tf=2&of=Solaris&sf=1\n\n\n",
"msg_date": "Fri, 17 Feb 2023 15:22:39 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dead code in ps_status.c"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 3:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Thu, Feb 16, 2023 at 6:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > My GCC compile farm account seems to have expired, or something, so I\n> > couldn't check on wrasse's host (though whether wrasse is \"live\" is\n> > debatable: Solaris 11.3 has reached EOL, it's just that the CPU is too\n> > old to be upgraded, so it's not testing a real OS that anyone would\n> > actually run PostgreSQL on). ...\n\n> My account still works, and what I see on wrasse's host is\n\nJust in case it helps someone else who finds themselves locked out of\nthat, I noticed that I can still connect from my machine with OpenSSH\n8.8p1, but not from another dev box which was upgraded to OpenSSH\n9.2p1. For reasons I didn't look into, the latter doesn't like\nexchanging 1s and 0s with \"Sun_SSH_2.4\" (something Oracle has\napparently now abandoned in favour of stock OpenSSH, but that machine\nis stuck in time).\n\n\n",
"msg_date": "Sat, 11 Mar 2023 16:59:46 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dead code in ps_status.c"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-11 16:59:46 +1300, Thomas Munro wrote:\n> On Fri, Feb 17, 2023 at 3:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > On Thu, Feb 16, 2023 at 6:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > My GCC compile farm account seems to have expired, or something, so I\n> > > couldn't check on wrasse's host (though whether wrasse is \"live\" is\n> > > debatable: Solaris 11.3 has reached EOL, it's just that the CPU is too\n> > > old to be upgraded, so it's not testing a real OS that anyone would\n> > > actually run PostgreSQL on). ...\n>\n> > My account still works, and what I see on wrasse's host is\n>\n> Just in case it helps someone else who finds themselves locked out of\n> that, I noticed that I can still connect from my machine with OpenSSH\n> 8.8p1, but not from another dev box which was upgraded to OpenSSH\n> 9.2p1. For reasons I didn't look into, the latter doesn't like\n> exchanging 1s and 0s with \"Sun_SSH_2.4\" (something Oracle has\n> apparently now abandoned in favour of stock OpenSSH, but that machine\n> is stuck in time).\n\nIt's the key types supported by the old ssh. I have the following in my\n~/.ssh/config to work around that:\n\nHost gcc210.fsffrance.org\n PubkeyAcceptedKeyTypes +ssh-rsa\n KexAlgorithms +diffie-hellman-group1-sha1\nHost gcc211.fsffrance.org\n PubkeyAcceptedKeyTypes +ssh-rsa\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Mar 2023 21:13:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Dead code in ps_status.c"
}
] |
[
{
"msg_contents": "Hi,\n\nThe attached patch removes a comment line that duplicates the\nprevious paragraph of the ExecUpdateAct() prolog comment.\n\n--\nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 16 Feb 2023 11:05:13 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": true,
"msg_subject": "remove duplicate comment."
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 11:05:13AM +0530, Amul Sul wrote:\n> The attached patch removes the comment line noting the same as the\n> previous paragraph of the ExecUpdateAct() prolog comment.\n\n- * Caller is in charge of doing EvalPlanQual as necessary, and of keeping\n- * indexes current for the update.\n\nIndeed, good catch. Both are mentioned in the previous paragraph.\n--\nMichael",
"msg_date": "Thu, 16 Feb 2023 15:50:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove duplicate comment."
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed the NumericVar's pos_var and neg_var are not free_var()'d at the end of accum_sum_final().\n\nThe potential memory leak seems small, since the function is called only once per sum() per worker (and from a few more places), but maybe they should be free'd anyways for correctness?\n\n/Joel",
"msg_date": "Thu, 16 Feb 2023 06:59:13 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 06:59:13AM +0100, Joel Jacobson wrote:\n> I noticed the NumericVar's pos_var and neg_var are not free_var()'d\n> at the end of accum_sum_final().\n> \n> The potential memory leak seems small, since the function is called\n> only once per sum() per worker (and from a few more places), but\n> maybe they should be free'd anyways for correctness? \n\nIndeed, it is true that any code path of numeric.c that relies on a\nNumericVar with an allocation done in its buffer is careful enough to\nfree it, except for generate_series's SRF where one step of the\ncomputation is done. I don't see directly why you could not do the\nfollowing:\n@@ -11973,6 +11973,9 @@ accum_sum_final(NumericSumAccum *accum, NumericVar *result)\n /* And add them together */\n add_var(&pos_var, &neg_var, result);\n \n+ free_var(&pos_var);\n+ free_var(&neg_var);\n+\n /* Remove leading/trailing zeroes */\n strip_var(result);\n--\nMichael",
"msg_date": "Thu, 16 Feb 2023 15:26:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "On Thu, Feb 16, 2023, at 07:26, Michael Paquier wrote:\n> Indeed, it is true that any code path of numeric.c that relies on a\n> NumericVar with an allocation done in its buffer is careful enough to\n> free it, except for generate_series's SRF where one step of the\n> computation is done. I don't see directly why you could not do the\n> following:\n> @@ -11973,6 +11973,9 @@ accum_sum_final(NumericSumAccum *accum, \n> NumericVar *result)\n> /* And add them together */\n> add_var(&pos_var, &neg_var, result);\n> \n> + free_var(&pos_var);\n> + free_var(&neg_var);\n> +\n\nThanks for looking and explaining.\n\nI added the free_var() calls after strip_var() to match the similar existing code at the end of sqrt_var().\nNot that it matters, but thought it looked nicer to keep them in the same order.\n\n/Joel",
"msg_date": "Thu, 16 Feb 2023 08:08:37 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 08:08:37AM +0100, Joel Jacobson wrote:\n> I added the free_var() calls after strip_var() to match the similar\n> existing code at the end of sqrt_var().\n> Not that it matters, but thought it looked nicer to keep them in the\n> same order. \n\nWFM. Thanks!\n--\nMichael",
"msg_date": "Thu, 16 Feb 2023 16:18:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-16 15:26:26 +0900, Michael Paquier wrote:\n> On Thu, Feb 16, 2023 at 06:59:13AM +0100, Joel Jacobson wrote:\n> > I noticed the NumericVar's pos_var and neg_var are not free_var()'d\n> > at the end of accum_sum_final().\n> > \n> > The potential memory leak seems small, since the function is called\n> > only once per sum() per worker (and from a few more places), but\n> > maybe they should be free'd anyways for correctness? \n> \n> Indeed, it is true that any code path of numeric.c that relies on a\n> NumericVar with an allocation done in its buffer is careful enough to\n> free it, except for generate_series's SRF where one step of the\n> computation is done. I don't see directly why you could not do the\n> following:\n> @@ -11973,6 +11973,9 @@ accum_sum_final(NumericSumAccum *accum, NumericVar *result)\n> /* And add them together */\n> add_var(&pos_var, &neg_var, result);\n> \n> + free_var(&pos_var);\n> + free_var(&neg_var);\n> +\n> /* Remove leading/trailing zeroes */\n> strip_var(result);\n\nBut why do we need it? Most SQL callable functions don't need to be careful\nabout not leaking O(1) memory, the exception being functions backing btree\nopclasses.\n\nIn fact, the detailed memory management often is *more* expensive than just\nrelying on the calling memory context being reset.\n\nOf course, numeric.c doesn't really seem to have gotten that message, so\nthere's a consistency argument here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Feb 2023 13:35:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 01:35:54PM -0800, Andres Freund wrote:\n> But why do we need it? Most SQL callable functions don't need to be careful\n> about not leaking O(1) memory, the exception being functions backing btree\n> opclasses.\n> \n> In fact, the detailed memory management often is *more* expensive than just\n> relying on the calling memory context being reset.\n> \n> Of course, numeric.c doesn't really seem to have gotten that message, so\n> there's a consistency argument here.\n\nI don't know which final result is better. The arguments go two ways:\n1) Should numeric.c be simplified so as its memory structure with extra\npfree()s, making it more consistent with more global assumptions than\njust this file? This has the disadvantage of creating more noise in\nbackpatching, while increasing the risk of leaks if some of the\nremoved parts are allocated in a tight loop within the same context.\nThis makes memory management less complicated. That's how I am\nunderstanding your point.\n2) Should the style within numeric.c be more consistent? This is how\nI am understanding this proposal. As you quote, this makes memory\nmanagement more complicated (not convinced about that for the internal\nof numerics?), while making the file more consistent.\n\nAt the end, perhaps that's not worth bothering, but 2) prevails when\nit comes to the rule of making some code consistent with its\nsurroundings. 1) has more risks seeing how old this code is.\n--\nMichael",
"msg_date": "Fri, 17 Feb 2023 11:48:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-17 11:48:14 +0900, Michael Paquier wrote:\n> On Thu, Feb 16, 2023 at 01:35:54PM -0800, Andres Freund wrote:\n> > But why do we need it? Most SQL callable functions don't need to be careful\n> > about not leaking O(1) memory, the exception being functions backing btree\n> > opclasses.\n> > \n> > In fact, the detailed memory management often is *more* expensive than just\n> > relying on the calling memory context being reset.\n> > \n> > Of course, numeric.c doesn't really seem to have gotten that message, so\n> > there's a consistency argument here.\n> \n> I don't know which final result is better. The arguments go two ways:\n> 1) Should numeric.c be simplified so as its memory structure with extra\n> pfree()s, making it more consistent with more global assumptions than\n> just this file? This has the disadvantage of creating more noise in\n> backpatching, while increasing the risk of leaks if some of the\n> removed parts are allocated in a tight loop within the same context.\n> This makes memory management less complicated. That's how I am\n> understanding your point.\n\nIt's not just simplification, it's just faster to free via context reset. I\nwhipped up a random query exercising numeric math a bunch:\n\nSELECT max(a + b + '17'::numeric + c) FROM (SELECT generate_series(1::numeric, 1000::numeric)) aa(a), (SELECT generate_series(1::numeric, 100::numeric)) bb(b), (SELECT generate_series(1::numeric, 10::numeric)) cc(c);\n\nRemoving the free_var()s from numeric_add_opt_error() speeds it up from ~361ms\nto ~338ms.\n\n\nThis code really needs some memory management overhead reduction love. Many\nallocations could be avoided by having a small on-stack \"buffer\" that's used\nunless the numeric is large.\n\n\n> 2) Should the style within numeric.c be more consistent? This is how\n> I am understanding this proposal. 
As you quote, this makes memory\n> management more complicated (not convinced about that for the internal\n> of numerics?), while making the file more consistent.\n\n> At the end, perhaps that's not worth bothering, but 2) prevails when\n> it comes to the rule of making some code consistent with its\n> surroundings. 1) has more risks seeing how old this code is.\n\nI'm a bit wary that this will trigger a stream of patches to pointlessly free\nthings, causing churn and slowdowns. I suspect there's other places in\nnumeric.c where we don't free, and there certainly are a crapton in other\nfunctions.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Feb 2023 12:26:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "On Fri, Feb 17, 2023, at 21:26, Andres Freund wrote:\n> Removing the free_var()s from numeric_add_opt_error() speeds it up from ~361ms\n> to ~338ms.\n\nI notice numeric_add_opt_error() is extern and declared in numeric.h,\ncalled from e.g. timestamp.c and jsonpath_exec.c. Is that a problem,\ni.e. is there a risk it could be used in a for loop by some code outside Numeric?\n\n> This code really needs some memory management overhead reduction love. Many\n> allocation could be avoided by having a small on-stack \"buffer\" that's used\n> unless the numeric is large.\n\nNice idea!\nCould something like the attached patch work?\n\n/Joel",
"msg_date": "Sun, 19 Feb 2023 20:54:11 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "Hi again,\n\nIgnore previous patch, new correct version attached, that also keeps track of if the buf-field is in use or not.\n\nSeems like your idea gives a significant speed-up!\nOn my machine, numeric_in() is 22% faster!\n\nI ran a benchmark with 100 tests measuring execution-time of numeric_in() with two significant digits time precision.\n\nBenchmark:\n\nCREATE EXTENSION pit;\nSELECT count(pit.async('numeric_in','{1234,0,-1}',2)) FROM generate_series(1,100);\nCALL pit.work(true);\n SELECT\n format('%s(%s)',pit.test_params.function_name, array_to_string(pit.test_params.input_values,',')),\n pit.pretty_time(pit.tests.final_result),\n count(*)\nFROM pit.tests\nJOIN pit.test_params USING (id)\nGROUP BY 1,2\nORDER BY 1,2;\n\nHEAD:\n\n format | pretty_time | count\n-----------------------+-------------+-------\n numeric_in(1234,0,-1) | 31 ns | 34\n numeric_in(1234,0,-1) | 32 ns | 66\n(2 rows)\n\n0002-fixed-buf.patch:\n\n format | pretty_time | count\n-----------------------+-------------+-------\n numeric_in(1234,0,-1) | 24 ns | 4\n numeric_in(1234,0,-1) | 25 ns | 71\n numeric_in(1234,0,-1) | 26 ns | 25\n(3 rows)\n\n\nThe benchmark results were produced with: https://github.com/joelonsql/pg-timeit\n\nNow, the question is how big of a fixed_buf we want. I will run some more benchmarks.\n\n/Joel\n\nOn Sun, Feb 19, 2023, at 20:54, Joel Jacobson wrote:\n> On Fri, Feb 17, 2023, at 21:26, Andres Freund wrote:\n>> Removing the free_var()s from numeric_add_opt_error() speeds it up from ~361ms\n>> to ~338ms.\n>\n> I notice numeric_add_opt_error() is extern and declared in numeric.h,\n> called from e.g. timestamp.c and jsonpath_exec.c. Is that a problem,\n> i.e. is there a risk it could be used in a for loop by some code \n> outside Numeric?\n>\n>> This code really needs some memory management overhead reduction love. 
Many\n>> allocations could be avoided by having a small on-stack \"buffer\" that's used\nunless the numeric is large.\n>\n> Nice idea!\n> Could something like the attached patch work?\n>\n> /Joel\n> Attachments:\n> * 0001-fixed-buf.patch\n\n-- \nKind regards,\n\nJoel",
"msg_date": "Sun, 19 Feb 2023 23:16:43 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "On Sun, Feb 19, 2023, at 23:16, Joel Jacobson wrote:\n> Hi again,\n>\n> Ignore previous patch, new correct version attached, that also keeps \n> track of if the buf-field is in use or not.\n\nOops! One more thinko, detected when trying fixed_buf[8], which caused a seg fault.\n\nTo fix, a init_var() in alloc_var() is needed when we will use the fixed_buf instead of allocating,\nsince alloc_var() could be called in a loop for existing values, like in sqrt_var() for instance.\n\nAttached new version produces similar benchmark results, even with the extra init_var():\n\nHEAD:\n\n format | pretty_time | count\n-----------------------+-------------+-------\n numeric_in(1234,0,-1) | 31 ns | 74\n numeric_in(1234,0,-1) | 32 ns | 26\n(2 rows)\n\n0003-fixed-buf.patch:\n\n format | pretty_time | count\n-----------------------+-------------+-------\n numeric_in(1234,0,-1) | 24 ns | 1\n numeric_in(1234,0,-1) | 25 ns | 84\n numeric_in(1234,0,-1) | 26 ns | 15\n(3 rows)\n\n/Joel",
"msg_date": "Sun, 19 Feb 2023 23:55:38 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "On Sun, Feb 19, 2023 at 11:55:38PM +0100, Joel Jacobson wrote:\n> To fix, a init_var() in alloc_var() is needed when we will use the\n> fixed_buf instead of allocating,\n> since alloc_var() could be called in a loop for existing values,\n> like in sqrt_var() for instance. \n> \n> Attached new version produces similar benchmark results, even with the extra init_var():\n\nPerhaps you should register this patch to the commitfest of March? Here\nit is:\nhttps://commitfest.postgresql.org/42/\n--\nMichael",
"msg_date": "Mon, 20 Feb 2023 16:38:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "On Mon, Feb 20, 2023, at 08:38, Michael Paquier wrote:\n> Perhaps you should register this patch to the commitfest of March? Here\n> it is:\n> https://commitfest.postgresql.org/42/\n\nThanks, done.\n\n/Joel\n\n\n",
"msg_date": "Mon, 20 Feb 2023 09:03:01 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "On Mon, 20 Feb 2023 at 08:03, Joel Jacobson <joel@compiler.org> wrote:\n>\n> On Mon, Feb 20, 2023, at 08:38, Michael Paquier wrote:\n> > Perhaps you should register this patch to the commit of March? Here\n> > it is:\n> > https://commitfest.postgresql.org/42/\n>\n> Thanks, done.\n>\n\nI have been testing this a bit, and I get less impressive results than\nthe ones reported so far.\n\nTesting Andres' example:\n\nSELECT max(a + b + '17'::numeric + c)\n FROM (SELECT generate_series(1::numeric, 1000::numeric)) aa(a),\n (SELECT generate_series(1::numeric, 100::numeric)) bb(b),\n (SELECT generate_series(1::numeric, 10::numeric)) cc(c);\n\nwith HEAD, I get:\n\nTime: 216.978 ms\nTime: 215.376 ms\nTime: 214.973 ms\nTime: 216.288 ms\nTime: 216.494 ms\n\nand removing the free_var() from numeric_add_opt_error() I get:\n\nTime: 212.706 ms\nTime: 212.684 ms\nTime: 211.378 ms\nTime: 213.383 ms\nTime: 213.050 ms\n\nThat's 1-2% faster, not the 6-7% that Andres saw.\n\nTesting the same example with the latest 0003-fixed-buf.patch, I get:\n\nTime: 224.115 ms\nTime: 225.382 ms\nTime: 225.691 ms\nTime: 224.135 ms\nTime: 225.412 ms\n\nwhich is now 4% slower.\n\nI think the problem is that if you increase the size of NumericVar,\nyou increase the stack space required, as well as adding some overhead\nto alloc_var(). Also, primitive operations like add_var() directly\ncall digitbuf_alloc(), so as it stands, they don't benefit from the\nstatic buffer. 
Also, I'm not convinced that a 4-digit static buffer\nwould really be of much benefit to many numeric computations anyway.\n\nTo try to test the real-world benefit to numeric_in(), I re-ran one of\nthe tests I used while testing the non-decimal integer patches, using\nCOPY to read a large number of random numerics from a file:\n\nCREATE TEMP TABLE foo(c1 numeric, c2 numeric, c3 numeric, c4 numeric,\n c5 numeric, c6 numeric, c7 numeric, c8 numeric,\n c9 numeric, c10 numeric);\n\nINSERT INTO foo\n SELECT trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4))\n FROM generate_series(1,10000000);\nCOPY foo TO '/tmp/random-numerics.csv';\n\n\\timing on\nTRUNCATE foo; COPY foo FROM '/tmp/random-numerics.csv';\nTRUNCATE foo; COPY foo FROM '/tmp/random-numerics.csv';\nTRUNCATE foo; COPY foo FROM '/tmp/random-numerics.csv';\nTRUNCATE foo; COPY foo FROM '/tmp/random-numerics.csv';\nTRUNCATE foo; COPY foo FROM '/tmp/random-numerics.csv';\n\nWith HEAD, this gave:\n\nTime: 10750.298 ms (00:10.750)\nTime: 10746.248 ms (00:10.746)\nTime: 10772.277 ms (00:10.772)\nTime: 10758.282 ms (00:10.758)\nTime: 10760.425 ms (00:10.760)\n\nand with 0003-fixed-buf.patch, it gave:\n\nTime: 10623.254 ms (00:10.623)\nTime: 10463.814 ms (00:10.464)\nTime: 10461.700 ms (00:10.462)\nTime: 10429.436 ms (00:10.429)\nTime: 10438.359 ms (00:10.438)\n\nSo that's a 2-3% gain, which might be worthwhile, if not for the\nslowdown in the other case.\n\nI actually had a slightly different idea to improve numeric.c's memory\nmanagement, which gives a noticeable improvement for a few of the\nsimple numeric 
functions:\n\nA common pattern in these numeric functions is to allocate memory for\nthe NumericVar's digit buffer while doing the computation, allocate\nmore memory for the Numeric result, copy the digits across, and then\nfree the NumericVar's digit buffer.\n\nThat can be reduced to just 1 palloc() and no pfree()'s by ensuring\nthat the initial allocation is large enough to hold the final Numeric\nresult, and then re-using that memory instead of allocating more. That\ncan be applied to all the numeric functions, saving a palloc() and a\npfree() in each case, and it fits quite well with the way\nmake_result() is used in all but one case (generate_series() needs to\nbe a little more careful to avoid trampling on the digit buffer of the\ncurrent value).\n\nIn Andres' generate_series() example, this gave:\n\nTime: 203.838 ms\nTime: 206.623 ms\nTime: 204.672 ms\nTime: 202.434 ms\nTime: 204.893 ms\n\nwhich is around 5-6% faster.\n\nIn the COPY test, it gave:\n\nTime: 10511.293 ms (00:10.511)\nTime: 10504.831 ms (00:10.505)\nTime: 10521.736 ms (00:10.522)\nTime: 10513.039 ms (00:10.513)\nTime: 10511.979 ms (00:10.512)\n\nwhich is around 2% faster than HEAD, and around 0.3% slower than\n0003-fixed-buf.patch\n\nNone of this had any noticeable impact on the time to run the\nregression tests, and I tried a few other simple examples, but it was\ndifficult to get consistent results, above the normal variation of the\ntest timings.\n\nTBH, I'm yet to be convinced that any of this is actually worthwhile.\nWe might shave a few percent off some simple numeric operations, but I\ndoubt it will make much difference to more complex computations. I'd\nneed to see some more realistic test results, or some real evidence of\npalloc/pfree causing significant overhead in a numeric computation.\n\nRegards,\nDean",
"msg_date": "Mon, 20 Feb 2023 11:32:32 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "Hi,\n\nI found another small but significant improvement of the previous patch:\n\nelse if (ndigits < var->buf_len)\n{\n- memset(var->buf, 0, var->buf_len);\n+ var->buf[0] = 0;\n var->digits = var->buf + 1;\n var->ndigits = ndigits;\n}\n\n\nWe don't need to set all buf elements to zero, only the first one.\nThis is not an improvement of HEAD, it's just a mistake I made in my previous patch.\n\n\nCOPY foo FROM '/tmp/random-numerics.csv';\n\nHEAD:\n\nTime: 8431.325 ms (00:08.431)\nTime: 8424.749 ms (00:08.425)\nTime: 8425.387 ms (00:08.425)\nTime: 8519.869 ms (00:08.520)\nTime: 8452.585 ms (00:08.453)\n\n0004-fixed-buf.patch:\nTime: 8539.475 ms (00:08.539)\nTime: 8401.628 ms (00:08.402)\nTime: 8399.440 ms (00:08.399)\nTime: 8373.861 ms (00:08.374)\nTime: 8388.002 ms (00:08.388)\n\n0005-fixed-buf.patch:\n\nTime: 8038.218 ms (00:08.038)\nTime: 8082.898 ms (00:08.083)\nTime: 7999.950 ms (00:08.000)\nTime: 8039.640 ms (00:08.040)\nTime: 7994.816 ms (00:07.995)\n\nAlmost half a second faster!\n\n/Joel",
"msg_date": "Mon, 20 Feb 2023 19:12:21 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "Hi,\n\nMy apologies, it seems my email didn't reach the list, probably due to the\nbenchmark plot images being too large. Here is the email again, but with\nURLs to the images instead, and benchmark updated with results for the\n0005-fixed-buf.patch.\n\n--\n\nOn Mon, Feb 20, 2023, at 12:32, Dean Rasheed wrote:\n> TBH, I'm yet to be convinced that any of this is actually worthwhile.\n> We might shave a few percent off some simple numeric operations, but I\n> doubt it will make much difference to more complex computations. I'd\n> need to see some more realistic test results, or some real evidence of\n> palloc/pfree causing significant overhead in a numeric computation.\n\nThanks for testing! Good point, I agree with your conclusion;\nthe small change to alloc_var() suggested in 0003-fixed-buf.patch probably\ndidn't provide enough speedups to motivate the change.\n\nIn the new attached patch, Andres fixed buffer idea has been implemented\nthroughout the entire numeric.c code base.\n\nCHANGES:\n--------\n\n* Instead of a bool, we now have a buf_len struct field, to keep track of the\nallocated capacity, which can be different from the number of digits\ncurrently used. buf_len == 0 indicates no memory is allocated, and fixed_buf\nis possibly used instead.\n\n* alloc_var() now reuses the allocated buffer if there is one big enough.\n\n* free_var() and zero_var() only call digitbuf_free() if they need to.\n\n* set_var_from_var() now also uses the fixed buffer and also reuses the\nallocated buffer if there is one and it's big enough.\n\n* To allow the NumericVar's on the stack to be used without having to allocate\ndigits buf, we have to change callers e.g. add_var(), div_var(), etc, to ensure\nthe result variable isn't the same as one of the operand NumericVars.\nThis wasn't necessary before, since we always allocated a new digits buf for the\nresult, which could then be assigned to the digits field when done with\ncomputations. 
However, this prevented us from relying solely on the existing\nNumericVar stack variables, so we needed a few new temporary NumericVar stack\nvariables, to hold intermediate results, and set_var_from_var() to copy the\noperand into the temp var.\n\nAssert()'s have been added to such functions, add_abs(), sub_abs(), div_var(),\ndiv_var_fast(), div_var_int64() and div_var_int64(), that enforce result being\na different object than the two operands.\n\nHere is one example from ceil_var() on this from the new 0004-fixed-buf.patch:\n\nHEAD:\n\n if (var->sign == NUMERIC_POS && cmp_var(var, &tmp) != 0)\n add_var(&tmp, &const_one, &tmp);\n set_var_from_var(&tmp, result);\n\nThe add_var() is a problem since &tmp is both the first operand and the result.\nFunnily enough, the fix in this particular case, and in floor_var(),\nis simpler and should be faster:\n\n0005-fixed-buf.patch:\n\n if (var->sign == NUMERIC_POS && cmp_var(var, &tmp) != 0)\n add_var(&tmp, &const_one, result);\n else\n set_var_from_var(&tmp, result);\n\nThis avoids the set_var_from_var() if the if-branch is taken, as the result\nis written directly to result.\n\nAnother example from sqrt_var():\n\nHEAD:\n\n add_var(&q_var, &a1_var, &q_var);\n\n0005-fixed-buf.patch:\n\n set_var_from_var(&q_var, &tmp_var);\n add_var(&tmp_var, &a1_var, &q_var);\n\nThe extra set_var_from_var() seems to be a net-win in many cases,\nexcept for the generate_series() example which doesn't have any\nNumericVar's on the stack, except for the first iteration.\n\nBENCHMARK:\n----------\n\nSELECT max(a + b + '17'::numeric + c)\nFROM\n(SELECT generate_series(1::numeric, 1000::numeric)) aa(a),\n(SELECT generate_series(1::numeric, 100::numeric)) bb(b),\n(SELECT generate_series(1::numeric, 10::numeric)) cc(c);\n\nHEAD:\n\nTime: 158.698 ms\nTime: 142.486 ms\nTime: 141.443 ms\nTime: 142.044 ms\nTime: 141.651 ms\n\n0005-fixed-buf.patch:\n\nTime: 156.371 ms\nTime: 149.857 ms\nTime: 150.674 ms\nTime: 150.409 ms\nTime: 150.461 ms\n\nSELECT 
setseed(0.1234);\nCREATE TABLE t (n1 numeric, n2 numeric, n3 numeric, n4 numeric);\nINSERT INTO t (n1, n2, n3, n4)\nSELECT\n round(random()::numeric,2),\n round(random()::numeric*10,2),\n round(random()::numeric*100,2),\n round(random()::numeric*1000,2)\nFROM generate_series(1,1e7);\nCHECKPOINT;\nSELECT SUM(n1+n2+n3+n4) FROM t;\n\nHEAD:\n\nTime: 758.489 ms\nTime: 646.794 ms\nTime: 643.237 ms\nTime: 642.620 ms\nTime: 646.218 ms\n\n0005-fixed-buf.patch:\n\nTime: 748.093 ms\nTime: 628.799 ms\nTime: 629.853 ms\nTime: 629.166 ms\nTime: 627.768 ms\n\n\nCREATE TABLE ledger (amount numeric);\nINSERT INTO ledger (amount)\nSELECT generate_series(-100000.00,100000,0.01);\nCHECKPOINT;\nSELECT SUM(amount*1.25 + 0.5) FROM ledger;\n\nHEAD:\n\nTime: 1113.080 ms (00:01.113)\nTime: 931.998 ms\nTime: 931.009 ms\nTime: 932.476 ms\nTime: 933.509 ms\n\n0005-fixed-buf.patch:\n\nTime: 1067.298 ms (00:01.067)\nTime: 883.972 ms\nTime: 880.165 ms\nTime: 882.465 ms\nTime: 893.646 ms\n\nCREATE TEMP TABLE foo(c1 numeric, c2 numeric, c3 numeric, c4 numeric,\n c5 numeric, c6 numeric, c7 numeric, c8 numeric,\n c9 numeric, c10 numeric);\n\nINSERT INTO foo\n SELECT trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4)),\n trim_scale(round(random()::numeric*1e4, 4))\n FROM generate_series(1,10000000);\nCOPY foo TO '/tmp/random-numerics.csv';\n\nTRUNCATE foo; COPY foo FROM '/tmp/random-numerics.csv';\n\nHEAD:\n\nTime: 8515.644 ms (00:08.516)\nTime: 8405.150 ms (00:08.405)\nTime: 8399.067 ms (00:08.399)\nTime: 8678.949 ms (00:08.679)\nTime: 8388.152 ms (00:08.388)\n\n0005-fixed-buf.patch:\n\nTime: 8255.290 ms (00:08.255)\nTime: 7986.409 ms 
(00:07.986)\nTime: 8005.748 ms (00:08.006)\nTime: 8004.352 ms (00:08.004)\nTime: 8160.537 ms (00:08.161)\n\n\nFIXED_BUF_LEN:\n--------------\n\nIn 0005-fixed-buf.patch, the new FIXED_BUF_LEN def has been set to 8.\n\nI've benchmarked other values as well, 2, 4, 16, 32, but 8 seems to be the sweet\nspot. div_var() would benefit from FIXED_BUF_LEN 16 though.\n\nHere is the test results using different FIXED_BUF_LEN values.\n\nPardon the images, but it's difficult to textify the two dimensional results,\nas we need to vary the digit length of both operands.\n\nThis first image shows the execution time for numeric_add() with\noperands up to 20 decdigits:\n\nhttps://gist.githubusercontent.com/joelonsql/59bd2642d577fe4aaf2fb8b1ab7f67c4/raw/d99d8fdc6f34dbed255f310dfc998a429117301a/up-to-20-digits-add.pdf.png\n\nThe color scale is execution_time, where more reddish/hotter means slower,\nand more blueish/cooler means faster.\n\nAs we can see, it gets notably cooler already with FIXED_BUF_LEN 4,\nup to 8 decdigits, but it also gets a bit hotter for larger numbers.\nWith FIXED_BUF_LEN 8, it's cooler both for small numbers,\nbut it's also cooler for larger numbers.\n\nIf instead looking at numeric_div(),\nwe can see how FIXED_BUF_LEN 16 would be an improvement:\n\nhttps://gist.githubusercontent.com/joelonsql/59bd2642d577fe4aaf2fb8b1ab7f67c4/raw/d99d8fdc6f34dbed255f310dfc998a429117301a/up-to-20-digits-div.pdf.png\n\nThe plot for numeric_mul() is a bit more difficult to read,\nbut we can see some improvement already at FIXED_BUF_LEN 4:\n\nhttps://gist.githubusercontent.com/joelonsql/59bd2642d577fe4aaf2fb8b1ab7f67c4/raw/d99d8fdc6f34dbed255f310dfc998a429117301a/up-to-20-digits-mul.pdf.png\n\nNote the scales are different for all these three plots.\n\nIn the plot below, the scale is the same for all three operators,\nwhich can be nice to understand their relative execution 
time:\n\nhttps://gist.githubusercontent.com/joelonsql/59bd2642d577fe4aaf2fb8b1ab7f67c4/raw/d99d8fdc6f34dbed255f310dfc998a429117301a/up-to-20-digits-overview.pdf.png\n\nIn these plots we have only studied operands with up to 20 decdigits.\n\nFor completion, here are plots up to 200 decdigits:\n\nhttps://gist.githubusercontent.com/joelonsql/59bd2642d577fe4aaf2fb8b1ab7f67c4/raw/d99d8fdc6f34dbed255f310dfc998a429117301a/up-to-200-digits-overview.pdf.png\n\nAs we can see, there is no significant observable difference, as expected,\nsince the fixed buffer only improves moderately big numbers.\n\nAnd finally, here is a plot of up to 131072 decdigits:\n\nhttps://gist.githubusercontent.com/joelonsql/59bd2642d577fe4aaf2fb8b1ab7f67c4/raw/d99d8fdc6f34dbed255f310dfc998a429117301a/full-range-overview.pdf.png\n\nSome additional plots can be viewed at the end of this gist:\nhttps://gist.github.com/joelonsql/59bd2642d577fe4aaf2fb8b1ab7f67c4\n\nIt would be nice to avoid the additional tmp vars, as it clutters the interface.\nOne idea maybe could be to use an additional struct field for the writing of the result.\nOr at least add helper-functions to avoid an extra line of code for the affected places in the code.\n\n/Joel",
"msg_date": "Mon, 20 Feb 2023 23:16:54 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "On 20.02.23 23:16, Joel Jacobson wrote:\n> In the new attached patch, Andres fixed buffer idea has been implemented\n> throughout the entire numeric.c code base.\n\nI think the definition of the \"preinitialized constants\" could be \nadapted to this as well. For example, instead of\n\n static const NumericDigit const_one_data[1] = {1};\n static const NumericVar const_one =\n {1, 0, NUMERIC_POS, 0, NULL, (NumericDigit *) const_one_data};\n\nit could be something like\n\n static const NumericVar const_one =\n {0, 0, NUMERIC_POS, 0, NULL, NULL, 1, {1}};\n\nOr perhaps with designators:\n\n static const NumericVar const_one =\n {.sign = NUMERIC_POS, .buflen = 1, .fixed_buf = {1}};\n\n\n\n",
"msg_date": "Thu, 2 Mar 2023 09:32:44 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "> On 20.02.23 23:16, Joel Jacobson wrote:\n> > In the new attached patch, Andres fixed buffer idea has been implemented\n> > throughout the entire numeric.c code base.\n>\n\nI have been going over this patch, and IMO it's far too invasive for\nthe fairly modest performance gains (and performance regressions in\nsome cases) that it gives (which seem to be somewhat smaller on my\nmachine).\n\nOne code change that I am particularly opposed to is changing all the\nlow-level functions like add_var(), mul_var(), etc., so that they no\nlonger accept the result being the same variable as any of the inputs.\nThat is a particularly convenient thing to be able to do, and without\nit, various functions become more complex and less readable, and have\nto resort to using more temporary variables.\n\nI actually find the whole business of attaching a static buffer and\nnew buf_len fields to NumericVar quite ugly, and the associated extra\ncomplexity in alloc_var(), free_var(), zero_var(), and\nset_var_from_var() is all part of that. Now that might be worth it, if\nthis gave significant performance gains across the board, but the\ntrouble is it doesn't. AFAICS, it seems to be just as likely to\ndegrade performance. For example:\n\nSELECT sqrt(6*sum(1/x/x)) FROM generate_series(1::numeric\n,10000000::numeric) g(x);\n\nis consistently 1-2% slower for me, with this patch. That's not much,\nbut then neither are most of the gains. In a lot of cases, it's so\nclose to the level of noise that I don't think most users will notice\none way or the other.\n\nSo IMO the results just don't justify such an extensive set of\nchanges, and I think we should abandon this fixed buffer approach.\n\nHaving said that, I think the results from the COPY test are worth\nlooking at more closely. Your results seem to suggest around a 5%\nimprovement. 
On my machine it was only around 3%, but that still might\nbe worth having, if it didn't involve such invasive changes throughout\nthe rest of the code.\n\nAs an experiment, I took another look at my earlier patch, making\nmake_result() construct the result using the same allocated memory as\nthe variable's digit buffer (if allocated). That eliminates around a\nthird of all free_var() calls from numeric.c, and for most functions,\nit saves both a palloc() and a pfree(). In the case of numeric_in(), I\nrealised that it's possible to go further, by reusing the decimal\ndigits buffer for the NumericVar's digits, and then later for the\nfinal Numeric result. Also, by carefully aligning things, it's\npossible to arrange it so that the final make_result() doesn't need to\ncopy/move the digits at all. With that I get something closer to a 15%\nimprovement in the COPY test, which is definitely worth having.\n\nIn the pi series above, it gave a 3-4% performance improvement, and\nthat seemed to be a common pattern across a number of other tests.\nIt's also a much less invasive change, since it's only really changing\nmake_result(), which makes the knock-on effects much more manageable,\nand reduces the chances of any performance regressions.\n\nI didn't do all the tests that you did though, so it would be\ninteresting to see how it fares in those.\n\nRegards,\nDean",
"msg_date": "Fri, 3 Mar 2023 15:11:27 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "On Fri, Mar 3, 2023, at 16:11, Dean Rasheed wrote:\n> So IMO the results just don't justify such an extensive set of\n> changes, and I think we should abandon this fixed buffer approach.\n\nI agree. I was hoping it would be possible to reduce the invasiveness,\nbut I think it's difficult and probably not worth it.\n\n> Having said that, I think the results from the COPY test are worth\n> looking at more closely. Your results seem to suggest around a 5%\n> improvement. On my machine it was only around 3%, but that still might\n> be worth having, if it didn't involve such invasive changes throughout\n> the rest of the code.\n>\n> As an experiment, I took another look at my earlier patch, making\n> make_result() construct the result using the same allocated memory as\n> the variable's digit buffer (if allocated). That eliminates around a\n> third of all free_var() calls from numeric.c, and for most functions,\n> it saves both a palloc() and a pfree(). In the case of numeric_in(), I\n> realised that it's possible to go further, by reusing the decimal\n> digits buffer for the NumericVar's digits, and then later for the\n> final Numeric result. Also, by carefully aligning things, it's\n> possible to arrange it so that the final make_result() doesn't need to\n> copy/move the digits at all. With that I get something closer to a 15%\n> improvement in the COPY test, which is definitely worth having.\n\nNice! 
Patch LGTM.\n\n> I didn't do all the tests that you did though, so it would be\n> interesting to see how it fares in those.\n\nSELECT count(*) FROM generate_series(1::numeric, 10000000::numeric);\nTime: 1196.801 ms (00:01.197) -- HEAD\nTime: 1278.376 ms (00:01.278) -- make-result-using-vars-buf-v2.patch\n\nTRUNCATE foo; COPY foo FROM '/tmp/random-numerics.csv';\nTime: 8450.551 ms (00:08.451) -- HEAD\nTime: 7176.838 ms (00:07.177) -- make-result-using-vars-buf-v2.patch\n\nSELECT SUM(n1+n2+n3+n4) FROM t;\nTime: 643.961 ms -- HEAD\nTime: 620.303 ms -- make-result-using-vars-buf-v2.patch\n\nSELECT max(a + b + '17'::numeric + c)\nFROM\n(SELECT generate_series(1::numeric, 1000::numeric)) aa(a),\n(SELECT generate_series(1::numeric, 100::numeric)) bb(b),\n(SELECT generate_series(1::numeric, 10::numeric)) cc(c);\nTime: 141.070 ms -- HEAD\nTime: 139.562 ms -- make-result-using-vars-buf-v2.patch\n\nSELECT SUM(amount*1.25 + 0.5) FROM ledger;\nTime: 933.461 ms -- HEAD\nTime: 862.619 ms -- make-result-using-vars-buf-v2.patch\n\nLooks like a win in all cases except the first one.\n\nGreat work!\n\n/Joel\n\n\n",
"msg_date": "Sat, 04 Mar 2023 19:27:05 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "On Fri, Mar 3, 2023, at 16:11, Dean Rasheed wrote:\n> Attachments:\n> * make-result-using-vars-buf-v2.patch\n\nOne suggestion: maybe add a comment explaining why the allocated buffer\nwhich size is based on strlen(cp) for the decimal digit values,\nis guaranteed to be large enough also for the result's digit buffer?\n\nI.e. some kind of proof why\n\n (NUMERIC_HDRSZ + strlen(cp) + DEC_DIGITS * 2) >= ((ndigits + 1) * sizeof(NumericDigit)))\n\nholds true in general.\n\n/Joel\n\n\n",
"msg_date": "Sun, 05 Mar 2023 09:53:32 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
},
{
"msg_contents": "On 05.03.23 09:53, Joel Jacobson wrote:\n> On Fri, Mar 3, 2023, at 16:11, Dean Rasheed wrote:\n>> Attachments:\n>> * make-result-using-vars-buf-v2.patch\n> \n> One suggestion: maybe add a comment explaining why the allocated buffer\n> which size is based on strlen(cp) for the decimal digit values,\n> is guaranteed to be large enough also for the result's digit buffer?\n> \n> I.e. some kind of proof why\n> \n> (NUMERIC_HDRSZ + strlen(cp) + DEC_DIGITS * 2) >= ((ndigits + 1) * sizeof(NumericDigit)))\n> \n> holds true in general.\n\nIt looks like this thread has fizzled out, and no one is really working \non the various proposed patch variants. Unless someone indicates that \nthey are still seriously pursuing this, I will mark this patch as \n\"Returned with feedback\".\n\n\n\n",
"msg_date": "Mon, 4 Sep 2023 08:55:29 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Missing free_var() at end of accum_sum_final()?"
}
] |
[
{
"msg_contents": "Hi hackers,\nI have this query as shown below:\n\nselect ref_1.r_comment as c0, subq_0.c1 as c1 from public.region as\nsample_0 right join public.partsupp as sample_1 right join public.lineitem\nas sample_2 on (cast(null as path) = cast(null as path)) on (cast(null as\n\"timestamp\") < cast(null as \"timestamp\")) inner join public.lineitem as\nref_0 on (true) left join (select sample_3.ps_availqty as c1,\nsample_3.ps_comment as c2 from public.partsupp as sample_3 where false\norder by c1, c2 ) as subq_0 on (sample_1.ps_supplycost = subq_0.c1 ) right\njoin public.region as ref_1 on (sample_1.ps_availqty = ref_1.r_regionkey )\nwhere ref_1.r_comment is not NULL order by c0, c1;\n\n*This query has different result on pg12.12 and on HEAD*,\non pg12.12:\n c0\n | c1\n-----------------------------------------------------------------------------------------------------------------+----\n even, ironic theodolites according to the bold platelets wa\n |\n furiously unusual packages use carefully above the unusual, exp\n |\n silent, bold requests sleep slyly across the quickly sly dependencies.\nfuriously silent instructions alongside |\n special, bold deposits haggle foxes. platelet\n |\n special Tiresias about the furiously even dolphins are furi\n |\n(5 rows)\n\nits plan :\n QUERY PLAN\n------------------------------------------------------\n Sort\n Sort Key: ref_1.r_comment, c1\n -> Hash Left Join\n Hash Cond: (ref_1.r_regionkey = ps_availqty)\n -> Seq Scan on region ref_1\n Filter: (r_comment IS NOT NULL)\n -> Hash\n -> Result\n One-Time Filter: false\n(9 rows)\n\nBut on HEAD(pg16devel), its results below:\nc0 | c1\n----+----\n(0 rows)\n\nits plan:\n QUERY PLAN\n----------------------------------------\n Sort\n Sort Key: ref_1.r_comment, subq_0.c1\n -> Result\n One-Time Filter: false\n(4 rows)\n\nAttached file included table schema info.\nregards, tender wang",
"msg_date": "Thu, 16 Feb 2023 15:16:23 +0800",
"msg_from": "tender wang <tndrwang@gmail.com>",
"msg_from_op": true,
"msg_subject": "wrong query result due to wang plan"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 3:16 PM tender wang <tndrwang@gmail.com> wrote:\n\n> select ref_1.r_comment as c0, subq_0.c1 as c1 from public.region as\n> sample_0 right join public.partsupp as sample_1 right join public.lineitem\n> as sample_2 on (cast(null as path) = cast(null as path)) on (cast(null as\n> \"timestamp\") < cast(null as \"timestamp\")) inner join public.lineitem as\n> ref_0 on (true) left join (select sample_3.ps_availqty as c1,\n> sample_3.ps_comment as c2 from public.partsupp as sample_3 where false\n> order by c1, c2 ) as subq_0 on (sample_1.ps_supplycost = subq_0.c1 ) right\n> join public.region as ref_1 on (sample_1.ps_availqty = ref_1.r_regionkey )\n> where ref_1.r_comment is not NULL order by c0, c1;\n>\n\nThe repro can be reduced to the query below.\n\ncreate table t (a int, b int);\n\n# explain (costs off) select * from t t1 left join (t t2 inner join t t3 on\nfalse left join t t4 on t2.b = t4.b) on t1.a = t2.a;\n QUERY PLAN\n--------------------------\n Result\n One-Time Filter: false\n(2 rows)\n\nAs we can see, the joinrel at the final level is marked as dummy, which\nis wrong. I traced this issue down to distribute_qual_to_rels() when we\nhandle variable-free clause. If such a clause is not an outer-join\nclause, and it contains no volatile functions either, we assign it the\nfull relid set of the current JoinDomain. I doubt this is always\ncorrect.\n\nSuch as in the query above, the clause 'false' is assigned relids {t2,\nt3, t4, t2/t4}. And that makes it a pushed down restriction to the\nsecond left join. This is all right if we plan this query in the\nuser-given order. 
But if we've commuted the two left joins, which is\nlegal, this pushed down and constant false restriction would make the\nfinal joinrel be dummy.\n\nIt seems we still need to check whether a variable-free qual comes from\nsomewhere that is below the nullable side of an outer join before we\ndecide that it can be evaluated at join domain level, just like we did\nbefore. So I wonder if we can add a 'below_outer_join' flag in\nJoinTreeItem, fill its value during deconstruct_recurse, and check it in\ndistribute_qual_to_rels() like\n\n /* eval at join domain level if not below outer join */\n- relids = bms_copy(jtitem->jdomain->jd_relids);\n+ relids = jtitem->below_outer_join ?\n+ bms_copy(qualscope) : bms_copy(jtitem->jdomain->jd_relids);\n\nThanks\nRichard",
"msg_date": "Thu, 16 Feb 2023 17:50:59 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: wrong query result due to wang plan"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 5:50 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> It seems we still need to check whether a variable-free qual comes from\n> somewhere that is below the nullable side of an outer join before we\n> decide that it can be evaluated at join domain level, just like we did\n> before. So I wonder if we can add a 'below_outer_join' flag in\n> JoinTreeItem, fill its value during deconstruct_recurse, and check it in\n> distribute_qual_to_rels() like\n>\n> /* eval at join domain level if not below outer join */\n> - relids = bms_copy(jtitem->jdomain->jd_relids);\n> + relids = jtitem->below_outer_join ?\n> + bms_copy(qualscope) : bms_copy(jtitem->jdomain->jd_relids);\n>\n\nTo be concrete, I mean something like attached.\n\nThanks\nRichard",
"msg_date": "Thu, 16 Feb 2023 18:14:40 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: wrong query result due to wang plan"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 5:50 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> It seems we still need to check whether a variable-free qual comes from\n> somewhere that is below the nullable side of an outer join before we\n> decide that it can be evaluated at join domain level, just like we did\n> before. So I wonder if we can add a 'below_outer_join' flag in\n> JoinTreeItem, fill its value during deconstruct_recurse, and check it in\n> distribute_qual_to_rels()\n>\n\nIt occurs to me that we can leverage JoinDomain to tell if we are below\nthe nullable side of a higher-level outer join if the clause is not an\nouter-join clause. If we are below outer join, the current JoinDomain\nis supposed to be a proper subset of top JoinDomain. Otherwise the\ncurrent JoinDomain must be equal to top JoinDomain. And that leads to a\nfix as code changes below.\n\n--- a/src/backend/optimizer/plan/initsplan.c\n+++ b/src/backend/optimizer/plan/initsplan.c\n@@ -2269,8 +2269,16 @@ distribute_qual_to_rels(PlannerInfo *root, Node\n*clause,\n }\n else\n {\n- /* eval at join domain level */\n- relids = bms_copy(jtitem->jdomain->jd_relids);\n+ JoinDomain *top_jdomain =\n+ linitial_node(JoinDomain, root->join_domains);\n+\n+ /*\n+ * eval at original syntactic level if we are below an outer join,\n+ * otherwise eval at join domain level, which is actually the top\n+ * of tree\n+ */\n+ relids = jtitem->jdomain == top_jdomain ?\n+ bms_copy(jtitem->jdomain->jd_relids) :\nbms_copy(qualscope);\n\n\nThanks\nRichard",
"msg_date": "Fri, 17 Feb 2023 15:35:28 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: wrong query result due to wang plan"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> It occurs to me that we can leverage JoinDomain to tell if we are below\n> the nullable side of a higher-level outer join if the clause is not an\n> outer-join clause. If we are belew outer join, the current JoinDomain\n> is supposed to be a proper subset of top JoinDomain. Otherwise the\n> current JoinDomain must be equal to top JoinDomain. And that leads to a\n> fix as code changes below.\n\nThat doesn't look right at all: surely this situation can occur further\ndown in a join tree, not only just below top level.\n\nI thought about this some more and realized that the whole business of\ntrying to push a qual up the join tree is probably obsolete.\n\nIn the first place, there's more than one problem with assigning the\nON FALSE condition the required_relids {t2, t3, t4, t2/t4}. As you say,\nit causes bogus conclusions about which joinrel can be considered empty if\nwe commute the outer joins' order. But also, doing it this way loses the\ninformation that t2/t3 can be considered empty, if we do the joins in\nan order where that is useful to know.\n\nIn the second place, I think recording the info that t2/t3 is empty is\nprobably sufficient now, because of the mark_dummy_rel/is_dummy_rel\nbookkeeping in joinrels.c (which did not exist when we first added this\n\"push to top of tree\" hack). If we know t2/t3 is empty then we will\npropagate that knowledge to {t2, t3, t4, t2/t4} when it's formed,\nwithout needing to have a clause that can be applied at that join level.\n\nSo that leads to a conclusion that we could just forget the whole\nthing and always use the syntactic qualscope here. I tried that\nand it doesn't break any existing regression tests, which admittedly\ndoesn't prove a lot in this area :-(. 
However, we'd then need to\ndecide what to do in process_implied_equality, which is annotated as\n\n * \"qualscope\" is the nominal syntactic level to impute to the restrictinfo.\n * This must contain at least all the rels used in the expressions, but it\n * is used only to set the qual application level when both exprs are\n * variable-free. (Hence, it should usually match the join domain in which\n * the clause applies.)\n\nand indeed equivclass.c is passing a join domain relid set. It's\nnot hard to demonstrate that that path is also broken, for instance\n\nexplain (costs off)\nselect * from int8_tbl t1 left join\n (int8_tbl t2 inner join int8_tbl t3 on (t2.q1-t3.q2) = 0 and (t2.q1-t3.q2) = 1\n left join int8_tbl t4 on t2.q2 = t4.q2)\non t1.q1 = t2.q1;\n\nOne idea I played with is that we could take the join domain relid set\nand subtract off the OJ relid and RHS relids of any fully-contained\nSpecialJoinInfos, reasoning that we can reconstruct the fact that\nthose won't make the join domain's overall result nondummy, and\nthereby avoiding the possibility of confusion if we end up commuting\nwith one of those joins. This feels perhaps overly complicated,\nthough, compared to the brain-dead-simple \"use the syntactic scope\"\napproach. Maybe we can get equivclass.c to do something equivalent\nto that? Or maybe we only need to use this complex rule in\nprocess_implied_equality?\n\n(I'm also starting to realize that the current basically-syntactic\ndefinition of join domains isn't really going to work with commutable\nouter joins, so maybe the ultimate outcome is that the join domain\nitself is defined in this more narrowly scoped way. But that feels\nlike a task to tackle later.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Feb 2023 15:26:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: wrong query result due to wang plan"
},
{
"msg_contents": "I wrote:\n> So that leads to a conclusion that we could just forget the whole\n> thing and always use the syntactic qualscope here. I tried that\n> and it doesn't break any existing regression tests, which admittedly\n> doesn't prove a lot in this area :-(.\n\nHmm, I thought it worked, but when I tried it again just now I see a\nfailure (well, a less efficient plan) for one of the recently added\ntest cases. I also tried the idea of stripping off lower outer joins,\nbut that doesn't work in distribute_qual_to_rels, because we haven't\nyet formed the SpecialJoinInfos for all the outer joins that are below\nthe top of the join domain.\n\nSo for now I agree with your solution in distribute_qual_to_rels.\nIt's about equivalent to what we did in older branches, and it's not\nreal clear that cases with pseudoconstant quals in lower join domains\nare common enough to be worth sweating over.\n\nWe still have to fix process_implied_equality though, and in that\ncontext we do have all the SpecialJoinInfos, so the strip-the-outer-joins\nfix seems to work. See attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 19 Feb 2023 14:01:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: wrong query result due to wang plan"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 3:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> We still have to fix process_implied_equality though, and in that\n> context we do have all the SpecialJoinInfos, so the strip-the-outer-joins\n> fix seems to work. See attached.\n\n\nYeah, process_implied_equality is also broken for variable-free clause.\nI failed to notice that :-(. I'm looking at the strip-the-outer-joins\ncodes. At first I wondered why it only removes JOIN_LEFT outer joins\nfrom below a JoinDomain's relids. After a second thought I think it's\nno problem here since only left outer joins have chance to commute with\njoins outside the JoinDomain.\n\nI'm thinking that maybe we can do the strip-the-outer-joins work only\nwhen it's not the top JoinDomain. When we are in the top JoinDomain, it\nseems to me that it's safe to push the qual to the top of the tree.\n\n- /* eval at join domain level */\n- relids = bms_copy(qualscope);\n+ /* eval at join domain's safe level */\n+ if (!bms_equal(qualscope,\n+ ((JoinDomain *)\nlinitial(root->join_domains))->jd_relids))\n+ relids = get_join_domain_min_rels(root, qualscope);\n+ else\n+ relids = bms_copy(qualscope);\n\nThanks\nRichard",
"msg_date": "Mon, 20 Feb 2023 17:51:30 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: wrong query result due to wang plan"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> I'm thinking that maybe we can do the strip-the-outer-joins work only\n> when it's not the top JoinDomain. When we are in the top JoinDomain, it\n> seems to me that it's safe to push the qual to the top of the tree.\n\nYeah, because there's nothing to commute with. Might as well do that\nfor consistency with the distribute_qual_to_rels behavior, although\nI felt it was better to put it inside get_join_domain_min_rels.\n\nPushed, thanks for the report!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Feb 2023 12:41:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: wrong query result due to wang plan"
}
] |
[
{
"msg_contents": "Hi hackers,\r\n\r\nI noticed that the document of GUC createrole_self_grant doesn't mention its\r\ndefault value. The attached patch adds that.\r\n\r\nRegards,\r\nShi Yu",
"msg_date": "Thu, 16 Feb 2023 09:47:03 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Missing default value of createrole_self_grant in document"
},
{
"msg_contents": "> On 16 Feb 2023, at 10:47, shiy.fnst@fujitsu.com wrote:\n\n> I noticed that the document of GUC createrole_self_grant doesn't mention its\n> default value. The attached patch adds that.\n\nAgreed, showing the default value in the documentation is a good pattern IMO.\nUnless objected to I'll go apply this in a bit.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 16 Feb 2023 12:34:23 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Missing default value of createrole_self_grant in document"
}
] |
[
{
"msg_contents": "Hi,\n\n\nIn 'instr_time.h' it is stated that:\n\n* When summing multiple measurements, it's recommended to leave the\n* running sum in instr_time form (ie, use INSTR_TIME_ADD or\n* INSTR_TIME_ACCUM_DIFF) and convert to a result format only at the end.\n\n\nSo, I refactored 'PendingWalStats' to use 'instr_time' instead of \n'PgStat_Counter' while accumulating 'wal_write_time' and \n'wal_sync_time'. Also, I refactored some calculations to use \n'INSTR_TIME_ACCUM_DIFF' instead of using 'INSTR_TIME_SUBTRACT' and \n'INSTR_TIME_ADD'. What do you think?\n\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Thu, 16 Feb 2023 16:19:02 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": true,
"msg_subject": "Refactor calculations to use instr_time"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-16 16:19:02 +0300, Nazir Bilal Yavuz wrote:\n> What do you think?\n\nHere's a small review:\n\n> +#define WALSTAT_ACC(fld, var_to_add) \\\n> + \t(stats_shmem->stats.fld += var_to_add.fld)\n> +#define WALLSTAT_ACC_INSTR_TIME_TYPE(fld) \\\n> +\t(stats_shmem->stats.fld += INSTR_TIME_GET_MICROSEC(PendingWalStats.fld))\n> +\tWALSTAT_ACC(wal_records, diff);\n> +\tWALSTAT_ACC(wal_fpi, diff);\n> +\tWALSTAT_ACC(wal_bytes, diff);\n> +\tWALSTAT_ACC(wal_buffers_full, PendingWalStats);\n> +\tWALSTAT_ACC(wal_write, PendingWalStats);\n> +\tWALSTAT_ACC(wal_sync, PendingWalStats);\n> +\tWALLSTAT_ACC_INSTR_TIME_TYPE(wal_write_time);\n> +\tWALLSTAT_ACC_INSTR_TIME_TYPE(wal_sync_time);\n> #undef WALSTAT_ACC\n> -\n> \tLWLockRelease(&stats_shmem->lock);\n\nWALSTAT_ACC is undefined, but WALLSTAT_ACC_INSTR_TIME_TYPE isn't.\n\nI'd not remove the newline before LWLockRelease().\n\n\n> \t/*\n> diff --git a/src/include/pgstat.h b/src/include/pgstat.h\n> index db9675884f3..295c5eabf38 100644\n> --- a/src/include/pgstat.h\n> +++ b/src/include/pgstat.h\n> @@ -445,6 +445,21 @@ typedef struct PgStat_WalStats\n> \tTimestampTz stat_reset_timestamp;\n> } PgStat_WalStats;\n> \n> +/* Created for accumulating wal_write_time and wal_sync_time as a\n> instr_time\n\nMinor code-formatting point: In postgres we don't put code in the same line as\na multi-line comment starting with the /*. So either\n\n/* single line comment */\nor\n/*\n * multi line\n * comment\n */\n\n> + * but instr_time can't be used as a type where it ends up on-disk\n> + * because its units may change. PgStat_WalStats type is used for\n> + * in-memory/on-disk data. 
So, PgStat_PendingWalUsage is created for\n> + * accumulating intervals as a instr_time.\n> + */\n> +typedef struct PgStat_PendingWalUsage\n> +{\n> +\tPgStat_Counter wal_buffers_full;\n> +\tPgStat_Counter wal_write;\n> +\tPgStat_Counter wal_sync;\n> +\tinstr_time wal_write_time;\n> +\tinstr_time wal_sync_time;\n> +} PgStat_PendingWalUsage;\n> +\n\nI wonder if we should try to put pgWalUsage in here. But that's probably\nbetter done as a separate patch.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Feb 2023 08:13:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Refactor calculations to use instr_time"
},
{
"msg_contents": "Hi,\n\n\nOn 2/16/23 19:13, Andres Freund wrote:\n>> +#define WALSTAT_ACC(fld, var_to_add) \\\n>> + \t(stats_shmem->stats.fld += var_to_add.fld)\n>> +#define WALLSTAT_ACC_INSTR_TIME_TYPE(fld) \\\n>> +\t(stats_shmem->stats.fld += INSTR_TIME_GET_MICROSEC(PendingWalStats.fld))\n>> +\tWALSTAT_ACC(wal_records, diff);\n>> +\tWALSTAT_ACC(wal_fpi, diff);\n>> +\tWALSTAT_ACC(wal_bytes, diff);\n>> +\tWALSTAT_ACC(wal_buffers_full, PendingWalStats);\n>> +\tWALSTAT_ACC(wal_write, PendingWalStats);\n>> +\tWALSTAT_ACC(wal_sync, PendingWalStats);\n>> +\tWALLSTAT_ACC_INSTR_TIME_TYPE(wal_write_time);\n>> +\tWALLSTAT_ACC_INSTR_TIME_TYPE(wal_sync_time);\n>> #undef WALSTAT_ACC\n>> -\n>> \tLWLockRelease(&stats_shmem->lock);\n> WALSTAT_ACC is undefined, but WALLSTAT_ACC_INSTR_TIME_TYPE isn't.\n>\n> I'd not remove the newline before LWLockRelease().\n>\n>\n>> \t/*\n>> diff --git a/src/include/pgstat.h b/src/include/pgstat.h\n>> index db9675884f3..295c5eabf38 100644\n>> --- a/src/include/pgstat.h\n>> +++ b/src/include/pgstat.h\n>> @@ -445,6 +445,21 @@ typedef struct PgStat_WalStats\n>> \tTimestampTz stat_reset_timestamp;\n>> } PgStat_WalStats;\n>> \n>> +/* Created for accumulating wal_write_time and wal_sync_time as a\n>> instr_time\n> Minor code-formatting point: In postgres we don't put code in the same line as\n> a multi-line comment starting with the /*. So either\n>\n> /* single line comment */\n> or\n> /*\n> * multi line\n> * comment\n> */\n\n\nThanks for the review. I updated the patch.\n\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Fri, 17 Feb 2023 13:53:36 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor calculations to use instr_time"
},
{
"msg_contents": "At Fri, 17 Feb 2023 13:53:36 +0300, Nazir Bilal Yavuz <byavuz81@gmail.com> wrote in \n> Thanks for the review. I updated the patch.\n\n\n \tWalUsageAccumDiff(&diff, &pgWalUsage, &prevWalUsage);\n-\tPendingWalStats.wal_records = diff.wal_records;\n-\tPendingWalStats.wal_fpi = diff.wal_fpi;\n-\tPendingWalStats.wal_bytes = diff.wal_bytes;\n...\n+\tWALSTAT_ACC(wal_records, diff);\n+\tWALSTAT_ACC(wal_fpi, diff);\n+\tWALSTAT_ACC(wal_bytes, diff);\n+\tWALSTAT_ACC(wal_buffers_full, PendingWalStats);\n\n\nThe lifetime of the variable \"diff\" seems to be longer now. Wouldn't\nit be clearer if we renamed it to something more meaningful, like\nwal_usage_diff, WalUsageDiff or PendingWalUsage? Along those same\nlines, it occurs to me that the new struct should be named\nPgStat_PendingWalStats, instead of ..Usage. That change makes the name\nof the type and the variable consistent.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 20 Feb 2023 12:01:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor calculations to use instr_time"
},
{
"msg_contents": "Hi,\n\nThanks for the review.\n\nOn Mon, 20 Feb 2023 at 06:01, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 17 Feb 2023 13:53:36 +0300, Nazir Bilal Yavuz <byavuz81@gmail.com> wrote in\n> > Thanks for the review. I updated the patch.\n>\n>\n> WalUsageAccumDiff(&diff, &pgWalUsage, &prevWalUsage);\n> - PendingWalStats.wal_records = diff.wal_records;\n> - PendingWalStats.wal_fpi = diff.wal_fpi;\n> - PendingWalStats.wal_bytes = diff.wal_bytes;\n> ...\n> + WALSTAT_ACC(wal_records, diff);\n> + WALSTAT_ACC(wal_fpi, diff);\n> + WALSTAT_ACC(wal_bytes, diff);\n> + WALSTAT_ACC(wal_buffers_full, PendingWalStats);\n>\n>\n> The lifetime of the variable \"diff\" seems to be longer now. Wouldn't\n> it be clearer if we renamed it to something more meaningful, like\n> wal_usage_diff, WalUsageDiff or PendingWalUsage? Along those same\n> lines, it occurs to me that the new struct should be named\n> PgStat_PendingWalStats, instead of ..Usage. That change makes the name\n> of the type and the variable consistent.\n\nI agree. The patch is updated.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Tue, 21 Feb 2023 16:11:19 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor calculations to use instr_time"
},
{
"msg_contents": "At Tue, 21 Feb 2023 16:11:19 +0300, Nazir Bilal Yavuz <byavuz81@gmail.com> wrote in \n> I agree. The patch is updated.\n\nThanks, that part looks good to me. I'd like to provide some\nadditional comments.\n\nPgStat_PendingStats should be included in typedefs.list.\n\n\n+ * Created for accumulating wal_write_time and wal_sync_time as a instr_time\n+ * but instr_time can't be used as a type where it ends up on-disk\n+ * because its units may change. PgStat_WalStats type is used for\n+ * in-memory/on-disk data. So, PgStat_PendingWalStats is created for\n+ * accumulating intervals as a instr_time.\n+ */\n+typedef struct PgStat_PendingWalStats\n\nIMHO, this comment looks somewhat off. Maybe we could try something\nlike the following instead?\n\n> This struct stores wal-related durations as instr_time, which makes it\n> easier to accumulate them without requiring type conversions. Then,\n> during stats flush, they will be moved into shared stats with type\n> conversions.\n\n\nThe aim of this patch is to keep using instr_time for accumulation.\nSo it seems like we could do the same refactoring for\npgStatBlockReadTime, pgStatBlockWriteTime, pgStatActiveTime and\npgStatTransactionIdleTime. What do you think - should we include this\nadditional refactoring in the same patch or make a separate one for\nit?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 22 Feb 2023 11:50:19 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor calculations to use instr_time"
},
{
"msg_contents": "Hi,\n\nThanks for the review.\n\nOn Wed, 22 Feb 2023 at 05:50, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> PgStat_PendingStats should be included in typedefs.list.\n\nDone.\n\n>\n> + * Created for accumulating wal_write_time and wal_sync_time as a instr_time\n> + * but instr_time can't be used as a type where it ends up on-disk\n> + * because its units may change. PgStat_WalStats type is used for\n> + * in-memory/on-disk data. So, PgStat_PendingWalStats is created for\n> + * accumulating intervals as a instr_time.\n> + */\n> +typedef struct PgStat_PendingWalStats\n>\n> IMHO, this comment looks somewhat off. Maybe we could try something\n> like the following instead?\n>\n> > This struct stores wal-related durations as instr_time, which makes it\n> > easier to accumulate them without requiring type conversions. Then,\n> > during stats flush, they will be moved into shared stats with type\n> > conversions.\n\nDone. And I think we should write why we didn't change\nPgStat_WalStats's variable types and instead created a new struct.\nMaybe we can explain it in the commit description?\n\n>\n> The aim of this patch is to keep using instr_time for accumulation.\n> So it seems like we could do the same refactoring for\n> pgStatBlockReadTime, pgStatBlockWriteTime, pgStatActiveTime and\n> pgStatTransactionIdleTime. What do you think - should we include this\n> additional refactoring in the same patch or make a separate one for\n> it?\n\nI tried a bit but it seems the required changes for additional\nrefactoring aren't small. So, I think we can create a separate patch\nfor these changes.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Wed, 22 Feb 2023 13:13:03 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor calculations to use instr_time"
},
{
"msg_contents": "Hi,\n\nThere was a warning while applying the patch, v5 is attached.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Thu, 9 Mar 2023 16:02:44 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor calculations to use instr_time"
},
{
"msg_contents": "On Thu, Mar 09, 2023 at 04:02:44PM +0300, Nazir Bilal Yavuz wrote:\n> From dcd49e48a0784a95b8731df1c6ee7c3a612a8529 Mon Sep 17 00:00:00 2001\n> From: Nazir Bilal Yavuz <byavuz81@gmail.com>\n> Date: Thu, 9 Mar 2023 15:35:38 +0300\n> Subject: [PATCH v5] Refactor instr_time calculations\n> \n> Also, some calculations are refactored to use 'INSTR_TIME_ACCUM_DIFF' instead\n> of using 'INSTR_TIME_SUBTRACT' and 'INSTR_TIME_ADD'.\n> ---\n> src/backend/access/transam/xlog.c | 6 ++---\n> src/backend/storage/file/buffile.c | 6 ++---\n> src/backend/utils/activity/pgstat_wal.c | 31 +++++++++++++------------\n> src/include/pgstat.h | 17 +++++++++++++-\n> src/tools/pgindent/typedefs.list | 1 +\n> 5 files changed, 37 insertions(+), 24 deletions(-)\n> \n> diff --git a/src/backend/utils/activity/pgstat_wal.c b/src/backend/utils/activity/pgstat_wal.c\n> index e8598b2f4e0..58daae3fbd6 100644\n> --- a/src/backend/utils/activity/pgstat_wal.c\n> +++ b/src/backend/utils/activity/pgstat_wal.c\n> @@ -88,25 +88,26 @@ pgstat_flush_wal(bool nowait)\n> \t * Calculate how much WAL usage counters were increased by subtracting the\n> \t * previous counters from the current ones.\n> \t */\n> -\tWalUsageAccumDiff(&diff, &pgWalUsage, &prevWalUsage);\n> -\tPendingWalStats.wal_records = diff.wal_records;\n> -\tPendingWalStats.wal_fpi = diff.wal_fpi;\n> -\tPendingWalStats.wal_bytes = diff.wal_bytes;\n> +\tWalUsageAccumDiff(&wal_usage_diff, &pgWalUsage, &prevWalUsage);\n> \n> \tif (!nowait)\n> \t\tLWLockAcquire(&stats_shmem->lock, LW_EXCLUSIVE);\n> \telse if (!LWLockConditionalAcquire(&stats_shmem->lock, LW_EXCLUSIVE))\n> \t\treturn true;\n> \n> -#define WALSTAT_ACC(fld) stats_shmem->stats.fld += PendingWalStats.fld\n> -\tWALSTAT_ACC(wal_records);\n> -\tWALSTAT_ACC(wal_fpi);\n> -\tWALSTAT_ACC(wal_bytes);\n> -\tWALSTAT_ACC(wal_buffers_full);\n> -\tWALSTAT_ACC(wal_write);\n> -\tWALSTAT_ACC(wal_sync);\n> -\tWALSTAT_ACC(wal_write_time);\n> -\tWALSTAT_ACC(wal_sync_time);\n> +#define 
WALSTAT_ACC(fld, var_to_add) \\\n> +\t(stats_shmem->stats.fld += var_to_add.fld)\n> +#define WALLSTAT_ACC_INSTR_TIME_TYPE(fld) \\\n> +\t(stats_shmem->stats.fld += INSTR_TIME_GET_MICROSEC(PendingWalStats.fld))\n> +\tWALSTAT_ACC(wal_records, wal_usage_diff);\n> +\tWALSTAT_ACC(wal_fpi, wal_usage_diff);\n> +\tWALSTAT_ACC(wal_bytes, wal_usage_diff);\n> +\tWALSTAT_ACC(wal_buffers_full, PendingWalStats);\n> +\tWALSTAT_ACC(wal_write, PendingWalStats);\n> +\tWALSTAT_ACC(wal_sync, PendingWalStats);\n> +\tWALLSTAT_ACC_INSTR_TIME_TYPE(wal_write_time);\n\nI think you want one less L here?\nWALLSTAT_ACC_INSTR_TIME_TYPE -> WALSTAT_ACC_INSTR_TIME_TYPE\n\nAlso, I don't quite understand why TYPE is at the end of the name. I\nthink it would still be clear without it.\n\nI might find it clearer if the WALSTAT_ACC_INSTR_TIME_TYPE macro was\ndefined before using it for those fields instead of defining it right\nafter defining WALSTAT_ACC.\n\n> +\tWALLSTAT_ACC_INSTR_TIME_TYPE(wal_sync_time);\n> +#undef WALLSTAT_ACC_INSTR_TIME_TYPE\n> #undef WALSTAT_ACC\n> \n> \tLWLockRelease(&stats_shmem->lock);\n> diff --git a/src/include/pgstat.h b/src/include/pgstat.h\n> index f43fac09ede..5bbc55bb341 100644\n> --- a/src/include/pgstat.h\n> +++ b/src/include/pgstat.h\n> @@ -442,6 +442,21 @@ typedef struct PgStat_WalStats\n> \tTimestampTz stat_reset_timestamp;\n> } PgStat_WalStats;\n> \n> +/*\n> + * This struct stores wal-related durations as instr_time, which makes it\n> + * easier to accumulate them without requiring type conversions. 
Then,\n> + * during stats flush, they will be moved into shared stats with type\n> + * conversions.\n> + */\n> +typedef struct PgStat_PendingWalStats\n> +{\n> +\tPgStat_Counter wal_buffers_full;\n> +\tPgStat_Counter wal_write;\n> +\tPgStat_Counter wal_sync;\n> +\tinstr_time wal_write_time;\n> +\tinstr_time wal_sync_time;\n> +} PgStat_PendingWalStats;\n> +\n\nSo, I am not a fan of having this second struct (PgStat_PendingWalStats)\nwhich only has a subset of the members of PgStat_WalStats. It is pretty\nconfusing.\n\nIt is okay to have two structs -- one that is basically \"in-memory\" and\none that is a format that can be on disk, but these two structs with\ndifferent members are confusing and don't convey why we have the two\nstructs.\n\nI would either put WalUsage into PgStat_PendingWalStats (so that it has\nall the same members as PgStat_WalStats), or figure out a way to\nmaintain WalUsage separately from PgStat_WalStats or something else.\nWorst case, add more comments to the struct definitions to explain why\nthey have the members they have and how WalUsage relates to them.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 16 Mar 2023 19:02:43 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor calculations to use instr_time"
},
{
"msg_contents": "Hi,\n\nThanks for the review.\n\nOn Fri, 17 Mar 2023 at 02:02, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I think you want one less L here?\n> WALLSTAT_ACC_INSTR_TIME_TYPE -> WALSTAT_ACC_INSTR_TIME_TYPE\n\nDone.\n\n> Also, I don't quite understand why TYPE is at the end of the name. I\n> think it would still be clear without it.\n\nDone.\n\n> I might find it clearer if the WALSTAT_ACC_INSTR_TIME_TYPE macro was\n> defined before using it for those fields instead of defining it right\n> after defining WALSTAT_ACC.\n\nSince it is undefined together with WALSTAT_ACC, defining them\ntogether makes sense to me.\n\n> > + * This struct stores wal-related durations as instr_time, which makes it\n> > + * easier to accumulate them without requiring type conversions. Then,\n> > + * during stats flush, they will be moved into shared stats with type\n> > + * conversions.\n> > + */\n> > +typedef struct PgStat_PendingWalStats\n> > +{\n> > + PgStat_Counter wal_buffers_full;\n> > + PgStat_Counter wal_write;\n> > + PgStat_Counter wal_sync;\n> > + instr_time wal_write_time;\n> > + instr_time wal_sync_time;\n> > +} PgStat_PendingWalStats;\n> > +\n>\n> So, I am not a fan of having this second struct (PgStat_PendingWalStats)\n> which only has a subset of the members of PgStat_WalStats. 
It is pretty\n> confusing.\n>\n> It is okay to have two structs -- one that is basically \"in-memory\" and\n> one that is a format that can be on disk, but these two structs with\n> different members are confusing and don't convey why we have the two\n> structs.\n>\n> I would either put WalUsage into PgStat_PendingWalStats (so that it has\n> all the same members as PgStat_WalStats), or figure out a way to\n> maintain WalUsage separately from PgStat_WalStats or something else.\n> Worst case, add more comments to the struct definitions to explain why\n> they have the members they have and how WalUsage relates to them.\n\nYes, but like Andres said this could be better done as a separate patch.\n\nv6 is attached.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Thu, 23 Mar 2023 17:38:14 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor calculations to use instr_time"
},
{
"msg_contents": "Hi,\n\nI pushed this version! Thanks all, for the contribution and reviews.\n\n\n> > I would either put WalUsage into PgStat_PendingWalStats (so that it has\n> > all the same members as PgStat_WalStats), or figure out a way to\n> > maintain WalUsage separately from PgStat_WalStats or something else.\n> > Worst case, add more comments to the struct definitions to explain why\n> > they have the members they have and how WalUsage relates to them.\n> \n> Yes, but like Andres said this could be better done as a separate patch.\n\nI invite you to write a patch for that for 17...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 30 Mar 2023 14:25:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Refactor calculations to use instr_time"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on [1], I was in need to know WAL read stats (number of\ntimes and amount of WAL data read from disk, time taken to read) to\nmeasure the benefit. I had to write a developer patch to capture WAL\nread stats as pg_stat_wal currently emits WAL write stats. With recent\nworks on pg_stat_io which emit data read IO stats too, I think it's\nbetter to not miss WAL read stats. It might help others who keep an\neye on IOPS of the production servers for various reasons. The WAL\nread stats I'm thinking useful are wal_read_bytes - total amount of\nWAL read, wal_read - total number of times WAL is read from disk,\nwal_read_time - total amount of time spent reading WAL (tracked only\nwhen an existing GUC track_wal_io_timing is on).\n\nI came up with a patch and attached it here. The WAL readers that add\nto WAL read stats are WAL senders, startup process and other backends\nusing xlogreader for logical replication or pg_walinspect SQL\nfunctions. They all report stats to shared memory by calling\npgstat_report_wal() in appropriate locations. In standby mode, calling\npgstat_report_wa() for every record seems to be costly. Therefore, I\nchose to report stats every 1024 WAL records (a random number,\nsuggestions for a better a way are welcome here).\n\nNote that the patch needs a bit more work, per [2]. With the patch,\nthe WAL senders (processes exiting after checkpointer) will generate\nstats and we need to either let all or only one WAL sender to write\nstats to disk. Allowing one WAL sender to write might be tricky.\nAllowing all WAL senders to write might make too many writes to the\nstats file. And, we need a lock to let only one process write. I can't\nthink of a best way here at the moment.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CALj2ACXKKK=wbiG5_t6dGao5GoecMwRkhr7GjVBM_jg54+Na=Q@mail.gmail.com\n[2]\n /*\n * Write out stats after shutdown. 
This needs to be called by exactly one\n * process during a normal shutdown, and since checkpointer is shut down\n * very late...\n *\n * Walsenders are shut down after the checkpointer, but currently don't\n * report stats. If that changes, we need a more complicated solution.\n */\n before_shmem_exit(pgstat_before_server_shutdown, 0);\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 16 Feb 2023 23:39:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add WAL read stats to pg_stat_wal"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-16 23:39:00 +0530, Bharath Rupireddy wrote:\n> While working on [1], I was in need to know WAL read stats (number of\n> times and amount of WAL data read from disk, time taken to read) to\n> measure the benefit. I had to write a developer patch to capture WAL\n> read stats as pg_stat_wal currently emits WAL write stats. With recent\n> works on pg_stat_io which emit data read IO stats too, I think it's\n> better to not miss WAL read stats. It might help others who keep an\n> eye on IOPS of the production servers for various reasons. The WAL\n> read stats I'm thinking useful are wal_read_bytes - total amount of\n> WAL read, wal_read - total number of times WAL is read from disk,\n> wal_read_time - total amount of time spent reading WAL (tracked only\n> when an existing GUC track_wal_io_timing is on).\n\nI doesn't really seem useful to have this in pg_stat_wal, because you can't\nreally figure out where those reads are coming from. Are they crash recovery?\nWalsender? ...?\n\nI think this'd better be handled by adding WAL support for pg_stat_io. Then\nthe WAL reads would be attributed to the relevant backend type, making it\neasier to answer such questions. Going forward I want to add support for\nseeing pg_stat_io for individual connections, which'd then automatically\nsupport this feature for the WAL reads as well.\n\nEventually I think pg_stat_wal should only track wal_records, wal_fpi,\nwal_buffers_full and fill the other columns from pg_stat_io.\n\n\nHowever, this doesn't \"solve\" the following issue:\n\n> Note that the patch needs a bit more work, per [2]. With the patch,\n> the WAL senders (processes exiting after checkpointer) will generate\n> stats and we need to either let all or only one WAL sender to write\n> stats to disk. Allowing one WAL sender to write might be tricky.\n> Allowing all WAL senders to write might make too many writes to the\n> stats file. And, we need a lock to let only one process write. 
I can't\n> think of a best way here at the moment.\n> \n> Thoughts?\n> \n> [1] https://www.postgresql.org/message-id/CALj2ACXKKK=wbiG5_t6dGao5GoecMwRkhr7GjVBM_jg54+Na=Q@mail.gmail.com\n> [2]\n> /*\n> * Write out stats after shutdown. This needs to be called by exactly one\n> * process during a normal shutdown, and since checkpointer is shut down\n> * very late...\n> *\n> * Walsenders are shut down after the checkpointer, but currently don't\n> * report stats. If that changes, we need a more complicated solution.\n> */\n> before_shmem_exit(pgstat_before_server_shutdown, 0);\n\nI wonder if we should keep the checkpointer around for longer. If we have\ncheckpointer signal postmaster after it wrote the shutdown checkpoint,\npostmaster could signal walsenders to shut down, and checkpointer could do\nsome final work, like writing out the stats.\n\nI suspect this could be useful for other things as well. It's awkward that we\ndon't have a place to put \"just before shutting down\" type tasks. And\ncheckpointer seems well suited for that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Feb 2023 11:11:38 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add WAL read stats to pg_stat_wal"
},
{
"msg_contents": "At Thu, 16 Feb 2023 11:11:38 -0800, Andres Freund <andres@anarazel.de> wrote in \n> I wonder if we should keep the checkpointer around for longer. If we have\n> checkpointer signal postmaster after it wrote the shutdown checkpoint,\n> postmaster could signal walsenders to shut down, and checkpointer could do\n> some final work, like writing out the stats.\n> I suspect this could be useful for other things as well. It's awkward that we\n> don't have a place to put \"just before shutting down\" type tasks. And\n> checkpointer seems well suited for that.\n\nI totally agree that it will be useful, but I'm not quite sure how\ncheckpointer would be able to let postmaster know about that state\nwithout requiring access to shared memory.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 20 Feb 2023 14:21:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add WAL read stats to pg_stat_wal"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 12:41 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-02-16 23:39:00 +0530, Bharath Rupireddy wrote:\n> > While working on [1], I was in need to know WAL read stats (number of\n> > times and amount of WAL data read from disk, time taken to read) to\n> > measure the benefit. I had to write a developer patch to capture WAL\n> > read stats as pg_stat_wal currently emits WAL write stats. With recent\n> > works on pg_stat_io which emit data read IO stats too, I think it's\n> > better to not miss WAL read stats. It might help others who keep an\n> > eye on IOPS of the production servers for various reasons. The WAL\n> > read stats I'm thinking useful are wal_read_bytes - total amount of\n> > WAL read, wal_read - total number of times WAL is read from disk,\n> > wal_read_time - total amount of time spent reading WAL (tracked only\n> > when an existing GUC track_wal_io_timing is on).\n>\n> I doesn't really seem useful to have this in pg_stat_wal, because you can't\n> really figure out where those reads are coming from. Are they crash recovery?\n> Walsender? ...?\n\nYes, that's the limitation with what I've proposed.\n\n> I think this'd better be handled by adding WAL support for pg_stat_io. Then\n> the WAL reads would be attributed to the relevant backend type, making it\n> easier to answer such questions. Going forward I want to add support for\n> seeing pg_stat_io for individual connections, which'd then automatically\n> support this feature for the WAL reads as well.\n>\n> Eventually I think pg_stat_wal should only track wal_records, wal_fpi,\n> wal_buffers_full and fill the other columns from pg_stat_io.\n\npg_stat_io being one place for all IO related information sounds apt\nand useful. And similarly, we might want to push write/read/flush info\nfrom pg_stat_slru to pg_stat_io.\n\n> However, this doesn't \"solve\" the following issue:\n>\n> > Note that the patch needs a bit more work, per [2]. 
With the patch,\n> > the WAL senders (processes exiting after checkpointer) will generate\n> > stats and we need to either let all or only one WAL sender to write\n> > stats to disk. Allowing one WAL sender to write might be tricky.\n> > Allowing all WAL senders to write might make too many writes to the\n> > stats file. And, we need a lock to let only one process write. I can't\n> > think of a best way here at the moment.\n> >\n> > Thoughts?\n> >\n> > [1] https://www.postgresql.org/message-id/CALj2ACXKKK=wbiG5_t6dGao5GoecMwRkhr7GjVBM_jg54+Na=Q@mail.gmail.com\n> > [2]\n> > /*\n> > * Write out stats after shutdown. This needs to be called by exactly one\n> > * process during a normal shutdown, and since checkpointer is shut down\n> > * very late...\n> > *\n> > * Walsenders are shut down after the checkpointer, but currently don't\n> > * report stats. If that changes, we need a more complicated solution.\n> > */\n> > before_shmem_exit(pgstat_before_server_shutdown, 0);\n>\n> I wonder if we should keep the checkpointer around for longer. If we have\n> checkpointer signal postmaster after it wrote the shutdown checkpoint,\n> postmaster could signal walsenders to shut down, and checkpointer could do\n> some final work, like writing out the stats.\n>\n> I suspect this could be useful for other things as well. It's awkward that we\n> don't have a place to put \"just before shutting down\" type tasks. And\n> checkpointer seems well suited for that.\n\nYes, there are some places that still assume checkpointer is the last\nprocess to exit, for instance see [1]. If we can truly make it happen,\nit'll be useful. 
I'll come up with more thoughts (and perhaps a patch)\non this soon.\n\n[1]\n /*\n * Checkpointer is the last process to shut down, so we ask it to hold\n * the keys for a range of other tasks required most of which have\n * nothing to do with checkpointing at all.\n *\n * For various reasons, some config values can change dynamically so\n * the primary copy of them is held in shared memory to make sure all\n * backends see the same value. We make Checkpointer responsible for\n * updating the shared memory copy if the parameter setting changes\n * because of SIGHUP.\n */\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 20 Feb 2023 20:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add WAL read stats to pg_stat_wal"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 10:51 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 16 Feb 2023 11:11:38 -0800, Andres Freund <andres@anarazel.de> wrote in\n> > I wonder if we should keep the checkpointer around for longer. If we have\n> > checkpointer signal postmaster after it wrote the shutdown checkpoint,\n> > postmaster could signal walsenders to shut down, and checkpointer could do\n> > some final work, like writing out the stats.\n> > I suspect this could be useful for other things as well. It's awkward that we\n> > don't have a place to put \"just before shutting down\" type tasks. And\n> > checkpointer seems well suited for that.\n>\n> I totally agree that it will be useful, but I'm not quite sure how\n> checkpointer would be able to let postmaster know about that state\n> without requiring access to shared memory.\n\nThe checkpointer can either set a flag in shared memory\n(CheckpointerShmem or XLogCtl) or send a multiplexed SIGUSR1 (of\ncourse, this one too needs shared memory access for PMSignalState) or\nSIGUSR2 (pqsignal(SIGUSR2, dummy_handler); /* unused, reserve for\nchildren */) if we don't want shared memory access after it writes a\nshutdown checkpoint.\n\nHaving said that, what's the problem if we use shared memory to report\nthe shutdown checkpoint to the postmaster? In case of abnormal\nshutdown where shared memory gets corrupted, we don't even write a\nshutdown checkpoint, no? In such a case, the postmaster doesn't send\nSIGUSR2 to the checkpointer, instead it sends SIGQUIT. AFICS, using\nshared memory doesn't seem to have any problem. Do you have any other\nthoughts in mind?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 20 Feb 2023 20:15:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add WAL read stats to pg_stat_wal"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-20 14:21:39 +0900, Kyotaro Horiguchi wrote:\n> I totally agree that it will be useful, but I'm not quite sure how\n> checkpointer would be able to let postmaster know about that state\n> without requiring access to shared memory.\n\nSendPostmasterSignal(PMSIGNAL_SHUTDOWN_CHECKPOINT_COMPLETE);\nor such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Feb 2023 08:29:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add WAL read stats to pg_stat_wal"
},
{
"msg_contents": "At Mon, 20 Feb 2023 08:29:06 -0800, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2023-02-20 14:21:39 +0900, Kyotaro Horiguchi wrote:\n> > I totally agree that it will be useful, but I'm not quite sure how\n> > checkpointer would be able to let postmaster know about that state\n> > without requiring access to shared memory.\n> \n> SendPostmasterSignal(PMSIGNAL_SHUTDOWN_CHECKPOINT_COMPLETE);\n> or such.\n\nAh, that's it. Thanks!\n\nregarsd.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 21 Feb 2023 11:58:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add WAL read stats to pg_stat_wal"
},
{
"msg_contents": "At Mon, 20 Feb 2023 20:15:00 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> Having said that, what's the problem if we use shared memory to report\n> the shutdown checkpoint to the postmaster? In case of abnormal\n> shutdown where shared memory gets corrupted, we don't even write a\n> shutdown checkpoint, no? In such a case, the postmaster doesn't send\n> SIGUSR2 to the checkpointer, instead it sends SIGQUIT. AFICS, using\n> shared memory doesn't seem to have any problem. Do you have any other\n> thoughts in mind?\n\nI had a baseless belief that postmaster doesn't touch shared memory,\nbut as Andres suggested, SendPostmasterSignal() already does that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 21 Feb 2023 12:00:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add WAL read stats to pg_stat_wal"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI've been working on a federated database project that heavily relies on\nforeign data wrappers. During benchmarking, we noticed high system CPU\nusage in OLTP-related cases, which we traced back to multiple brk calls\nresulting from block frees in AllocSetReset upon ExecutorEnd's\nFreeExecutorState. This is because FDWs allocate their own derived\nexecution states and required data structures within this context,\nexceeding the initial 8K allocation, that need to be cleaned-up.\n\nIncreasing the default query context allocation from ALLOCSET_DEFAULT_SIZES\nto a larger initial \"appropriate size\" solved the issue and almost doubled\nthe throughput. However, the \"appropriate size\" is workload and\nimplementation dependent, so making it configurable may be better than\nincreasing the defaults, which would negatively impact users (memory-wise)\nwho aren't encountering this scenario.\n\nI have a patch to make it configurable, but before submitting it, I wanted\nto hear your thoughts and feedback on this and any other alternative ideas\nyou may have.\n\n-- \nJonah H. Harris\n\nHi everyone,I've been working on a federated database project that heavily relies on foreign data wrappers. During benchmarking, we noticed high system CPU usage in OLTP-related cases, which we traced back to multiple brk calls resulting from block frees in AllocSetReset upon ExecutorEnd's FreeExecutorState. This is because FDWs allocate their own derived execution states and required data structures within this context, exceeding the initial 8K allocation, that need to be cleaned-up.Increasing the default query context allocation from ALLOCSET_DEFAULT_SIZES to a larger initial \"appropriate size\" solved the issue and almost doubled the throughput. 
However, the \"appropriate size\" is workload and implementation dependent, so making it configurable may be better than increasing the defaults, which would negatively impact users (memory-wise) who aren't encountering this scenario.I have a patch to make it configurable, but before submitting it, I wanted to hear your thoughts and feedback on this and any other alternative ideas you may have.-- Jonah H. Harris",
"msg_date": "Thu, 16 Feb 2023 16:49:07 -0500",
"msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>",
"msg_from_op": true,
"msg_subject": "Reducing System Allocator Thrashing of ExecutorState to Alleviate\n FDW-related Performance Degradations"
},
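The reset behavior described in the opening message can be sketched numerically. The following Python model is illustrative only (the doubling policy and keeper-block behavior are simplified from aset.c, and the function names are invented for this sketch): the context keeps its initial block across per-query resets, but every additional block is freed back to the system and must be re-malloced on the next execution.

```python
def blocks_needed(demand, init_size, max_size=8 * 1024 * 1024):
    """Blocks an aset-style context mallocs to cover `demand` bytes,
    doubling the block size each time up to max_size (simplified)."""
    size, total, blocks = init_size, 0, 0
    while total < demand:
        total += size
        blocks += 1
        size = min(size * 2, max_size)
    return blocks


def system_calls(demand, init_size, queries):
    """Across `queries` executions, each reset keeps only the initial
    (keeper) block; every extra block costs one malloc plus one free."""
    extra_blocks = blocks_needed(demand, init_size) - 1
    return queries * 2 * max(extra_blocks, 0)


# 64 kB of per-query FDW state over 1000 executions:
with_8k_init = system_calls(64 * 1024, 8 * 1024, 1000)    # pays malloc/free every query
with_64k_init = system_calls(64 * 1024, 64 * 1024, 1000)  # fits in the keeper block
```

In this model the larger initial block eliminates the per-query malloc/free traffic entirely, which is consistent with the brk-call reduction reported in the message.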
{
"msg_contents": "Hi,\n\nOn 2023-02-16 16:49:07 -0500, Jonah H. Harris wrote:\n> I've been working on a federated database project that heavily relies on\n> foreign data wrappers. During benchmarking, we noticed high system CPU\n> usage in OLTP-related cases, which we traced back to multiple brk calls\n> resulting from block frees in AllocSetReset upon ExecutorEnd's\n> FreeExecutorState. This is because FDWs allocate their own derived\n> execution states and required data structures within this context,\n> exceeding the initial 8K allocation, that need to be cleaned-up.\n\nWhat PG version?\n\nDo you have a way to reproduce this with core code,\ne.g. postgres_fdw/file_fdw?\n\nWhat is all that memory used for? Is it possible that the real issue is too\nmany tiny allocations, due to some allocation growing slowly?\n\n\n> Increasing the default query context allocation from ALLOCSET_DEFAULT_SIZES\n> to a larger initial \"appropriate size\" solved the issue and almost doubled\n> the throughput. However, the \"appropriate size\" is workload and\n> implementation dependent, so making it configurable may be better than\n> increasing the defaults, which would negatively impact users (memory-wise)\n> who aren't encountering this scenario.\n> \n> I have a patch to make it configurable, but before submitting it, I wanted\n> to hear your thoughts and feedback on this and any other alternative ideas\n> you may have.\n\nThis seems way too magic to expose to users. How would they ever know how to\nset it? And it will depend heavily on the specific queries, so a global config\nwon't work well.\n\nIf the issue is a specific FDW needing to make a lot of allocations, I can see\nadding an API to tell a memory context that it ought to be ready to allocate a\ncertain amount of memory efficiently (e.g. by increasing the block size of the\nnext allocation by more than 2x).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Feb 2023 16:32:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing System Allocator Thrashing of ExecutorState to\n Alleviate FDW-related Performance Degradations"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 7:32 PM Andres Freund <andres@anarazel.de> wrote:\n\n> What PG version?\n>\n\nHey, Andres. Thanks for the reply.\n\nGiven not much changed regarding that allocation context IIRC, I’d think\nall recents. It was observed in 13, 14, and 15.\n\nDo you have a way to reproduce this with core code,\n> e.g. postgres_fdw/file_fdw?\n\n\nI’ll have to create one - it was most evident on a TPC-C or sysbench test\nusing the Postgres, MySQL, SQLite, and Oracle FDWs. It may be reproducible\nwith pgbench as well.\n\nWhat is all that memory used for? Is it possible that the real issue are too\n> many tiny allocations, due to some allocation growing slowly?\n\n\nThe FDW state management allocations and whatever each FDW needs to\naccomplish its goals. Different FDWs do different things.\n\nThis seems way too magic to expose to users. How would they ever know how to\n> set it? And it will heavily on the specific queries, so a global config\n> won't\n> work well.\n\n\nAgreed on the nastiness of exposing it directly. Not that we don’t give\nusers control of memory anyway, but that one is easier to mess up without\nat least putting some custom set bounds on it.\n\n\nIf the issue is a specific FDW needing to make a lot of allocations, I can\n> see\n> adding an API to tell a memory context that it ought to be ready to\n> allocate a\n> certain amount of memory efficiently (e.g. by increasing the block size of\n> the\n> next allocation by more than 2x).\n\n\nWhile I’m happy to be wrong, it seems to be an inherent problem not really\nspecific to FDW implementations themselves but the general expectation that\nall FDWs are using more of that context than non-FDW cases for similar\ntypes of operations, which wasn’t really a consideration in that allocation\nover time.\n\nIf we come up with some sort of alternate allocation strategy, I don’t know\nhow it would be very clean API-wise, but it’s definitely an idea.\n\n\n\n\n\n-- \nJonah H. Harris",
"msg_date": "Thu, 16 Feb 2023 21:34:18 -0500",
"msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing System Allocator Thrashing of ExecutorState to Alleviate\n FDW-related Performance Degradations"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-16 21:34:18 -0500, Jonah H. Harris wrote:\n> On Thu, Feb 16, 2023 at 7:32 PM Andres Freund <andres@anarazel.de> wrote:\n> Given not much changed regarding that allocation context IIRC, I’d think\n> all recents. It was observed in 13, 14, and 15.\n\nWe did have a fair bit of changes in related code in the last few years,\nincluding some in 16. I wouldn't expect them to help *hugely*, but also\nwouldn't be surprised if it showed.\n\n\n> I’ll have to create one - it was most evident on a TPC-C or sysbench test\n> using the Postgres, MySQL, SQLite, and Oracle FDWs. It may be reproducible\n> with pgbench as well.\n\nI'd like a workload that hits a perf issue with this, because I think there\nlikely are some general performance improvements that we could make, without\nchanging the initial size or the \"growth rate\".\n\nPerhaps, as a starting point, you could get\n  MemoryContextStats(queryDesc->estate->es_query_cxt)\nboth at the end of standard_ExecutorStart() and at the beginning of\nstandard_ExecutorFinish(), for one of the queries triggering the performance\nissues?\n\n\n> > If the issue is a specific FDW needing to make a lot of allocations, I can\n> > see\n> > adding an API to tell a memory context that it ought to be ready to\n> > allocate a\n> > certain amount of memory efficiently (e.g. by increasing the block size of\n> > the\n> > next allocation by more than 2x).\n>\n>\n> While I’m happy to be wrong, it seems is an inherent problem not really\n> specific to FDW implementations themselves but the general expectation that\n> all FDWs are using more of that context than non-FDW cases for similar\n> types of operations, which wasn’t really a consideration in that allocation\n> over time.\n\nLots of things can end up in the query context, it's really not FDW specific\nfor it to be of nontrivial size. E.g. most tuples passed around end up in it.\n\nSimilar performance issues also exist for plenty of other memory contexts,\nwhich for me is a reason *not* to make it configurable just for\nCreateExecutorState. Or were you proposing to make ALLOCSET_DEFAULT_INITSIZE\nconfigurable? That would end up with a lot of waste, I think.\n\n\nThe executor context case might actually be a comparatively easy case to\naddress. There's really two \"phases\" of use for es_query_cxt. First, we create\nthe entire executor tree in it, during standard_ExecutorStart(). Second,\nduring query execution, we allocate things with query lifetime (be that\nbecause they need to live till the end, or because they are otherwise\nmanaged, like tuples).\n\nEven very simple queries end up with multiple blocks at the end:\nE.g.\n  SELECT relname FROM pg_class WHERE relkind = 'r' AND relname = 'frak';\nyields:\n  ExecutorState: 43784 total in 3 blocks; 8960 free (5 chunks); 34824 used\n    ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n  Grand total: 51976 bytes in 4 blocks; 16888 free (5 chunks); 35088 used\n\nSo quite justifiably we could just increase the hardcoded size in\nCreateExecutorState. I'd expect that starting a few size classes up would help\nnoticeably.\n\n\nBut I think we likely could do better here. The amount of memory that ends up\nin es_query_cxt during \"phase 1\" strongly correlates with the complexity of\nthe statement, as the whole executor tree ends up in it. Using information\nabout the complexity of the planned statement to influence es_query_cxt's\nblock sizes would make sense to me. I suspect it's a decent enough proxy for\n\"phase 2\" as well.\n\n\nMedium-long term I really want to allocate at least all the executor nodes\nthemselves in a single allocation. But that's a bit further out than what\nwe're talking about here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Feb 2023 19:40:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing System Allocator Thrashing of ExecutorState to\n Alleviate FDW-related Performance Degradations"
},
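The default growth policy under discussion can be modeled in a few lines of Python. This is a simplification (real aset.c block counts also depend on the keeper block and per-chunk rounding, so the numbers are only indicative): starting at ALLOCSET_DEFAULT_INITSIZE (8 kB), the block size doubles with each newly malloc'd block until ALLOCSET_DEFAULT_MAXSIZE (8 MB) is reached.

```python
ALLOCSET_DEFAULT_INITSIZE = 8 * 1024
ALLOCSET_DEFAULT_MAXSIZE = 8 * 1024 * 1024

def growth_schedule(init=ALLOCSET_DEFAULT_INITSIZE, cap=ALLOCSET_DEFAULT_MAXSIZE):
    """Sizes of the blocks malloc'd while ramping from init to cap."""
    sizes = []
    size = init
    while size < cap:
        sizes.append(size)
        size *= 2
    sizes.append(cap)  # first block allocated at the maximum size
    return sizes

schedule = growth_schedule()
ramp_blocks = len(schedule)  # blocks allocated before reaching the cap
ramp_bytes = sum(schedule)   # total memory malloc'd during the ramp
```

In this model a context goes through roughly a dozen progressively larger blocks (about 16 MB in total) before the 8 MB block size is reached, which is why a workload allocating far more than that spends a noticeable amount of its time in the ramp.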
{
"msg_contents": "On Fri, 17 Feb 2023 at 16:40, Andres Freund <andres@anarazel.de> wrote:\n> I'd like a workload that hits a perf issue with this, because I think there\n> likely are some general performance improvements that we could make, without\n> changing the initial size or the \"growth rate\".\n\nI didn't hear it mentioned explicitly here, but I suspect it's faster\nwhen increasing the initial size due to the memory context caching\ncode that reuses aset MemoryContexts (see context_freelists[] in\naset.c). Since we reset the context before caching it, it'll\nremain fast when we can reuse a context, provided we don't need to do\na malloc for an additional block beyond the initial block that's kept\nin the cache.\n\nMaybe we should think of a more general-purpose way of doing this\ncaching which just keeps a global-to-the-process dclist of blocks\nlaying around. We could see if we have any free blocks both when\ncreating the context and also when we need to allocate another block.\nI see no reason why this couldn't be shared among the other context\ntypes rather than keeping this cache stuff specific to aset.c. slab.c\nmight need to be pickier if the size isn't exactly what it needs, but\ngeneration.c should be able to make use of it the same as aset.c\ncould. I'm unsure what we'd need in the way of size classing for\nthis, but I suspect we'd need to pay attention to that rather than do\nthings like hand over 16MBs of memory to some context that only wants\na 1KB initial block.\n\nDavid\n\n\n",
"msg_date": "Fri, 17 Feb 2023 17:26:20 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing System Allocator Thrashing of ExecutorState to Alleviate\n FDW-related Performance Degradations"
},
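David's process-global cache idea might look roughly like the following Python simulation. It is illustrative only: the size classes, the per-class cap, and all names here are assumptions made for this sketch, not a proposed design.

```python
class BlockCache:
    """Process-wide cache of freed blocks, keyed by size class."""

    def __init__(self, classes=(1024, 8192, 16384), max_per_class=4):
        self.free = {size: [] for size in classes}
        self.max_per_class = max_per_class
        self.system_mallocs = 0
        self.system_frees = 0

    def alloc_block(self, size):
        bucket = self.free.get(size)
        if bucket:                    # reuse a cached block of this class
            return bucket.pop()
        self.system_mallocs += 1      # fall through to the system allocator
        return bytearray(size)

    def free_block(self, block):
        bucket = self.free.get(len(block))
        if bucket is not None and len(bucket) < self.max_per_class:
            bucket.append(block)      # keep it for the next context
        else:
            self.system_frees += 1    # uncached size class, or bucket full


cache = BlockCache()
# Simulate 100 queries that each need one extra 8 kB block:
for _ in range(100):
    b = cache.alloc_block(8192)
    cache.free_block(b)
```

After the first query warms the cache, subsequent block requests are served without touching the system allocator, which is the effect context_freelists[] achieves today for whole aset contexts.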
{
"msg_contents": "On Thu, Feb 16, 2023 at 11:26 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I didn't hear it mentioned explicitly here, but I suspect it's faster\n> when increasing the initial size due to the memory context caching\n> code that reuses aset MemoryContexts (see context_freelists[] in\n> aset.c). Since we reset the context before caching it, then it'll\n> remain fast when we can reuse a context, provided we don't need to do\n> a malloc for an additional block beyond the initial block that's kept\n> in the cache.\n\n\nThis is what we were seeing. The larger initial size reduces/eliminates the\nmultiple smaller blocks that are malloced and freed in each per-query\nexecution.\n\nMaybe we should think of a more general-purpose way of doing this\n> caching which just keeps a global-to-the-process dclist of blocks\n> laying around. We could see if we have any free blocks both when\n> creating the context and also when we need to allocate another block.\n> I see no reason why this couldn't be shared among the other context\n> types rather than keeping this cache stuff specific to aset.c. slab.c\n> might need to be pickier if the size isn't exactly what it needs, but\n> generation.c should be able to make use of it the same as aset.c\n> could. I'm unsure what'd we'd need in the way of size classing for\n> this, but I suspect we'd need to pay attention to that rather than do\n> things like hand over 16MBs of memory to some context that only wants\n> a 1KB initial block.\n\n\nYeah. There’s definitely a smarter and more reusable approach than I was\nproposing. A lot of that code is fairly mature and I figured more people\nwouldn’t want to alter it in such ways - but I’m up for it if an approach\nlike this is the direction we’d want to go in.\n\n\n\n-- \nJonah H. Harris",
"msg_date": "Thu, 16 Feb 2023 23:40:18 -0500",
"msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing System Allocator Thrashing of ExecutorState to Alleviate\n FDW-related Performance Degradations"
},
{
"msg_contents": "On Fri, 17 Feb 2023 at 17:40, Jonah H. Harris <jonah.harris@gmail.com> wrote:\n> Yeah. There’s definitely a smarter and more reusable approach than I was proposing. A lot of that code is fairly mature and I figured more people wouldn’t want to alter it in such ways - but I’m up for it if an approach like this is the direction we’d want to go in.\n\nI've spent quite a bit of time in this area recently and I think that\ncontext_freelists[] is showing its age now. It does seem that slab and\ngeneration were added before context_freelists[] (9fa6f00b), but not\nby much, and those new contexts had fewer users back then. It feels a\nlittle unfair that aset should get to cache but the other context\ntypes don't. I don't think each context type should have some\nseparate cache either as that probably means more memory wasted.\nHaving something agnostic to if it's allocating a new context or\nadding a block to an existing one seems like a good idea to me.\n\nI think the tricky part will be the discussion around which size\nclasses to keep around and in which cases can we use a larger\nallocation without worrying too much that it'll be wasted. We also\ndon't really want to make the minimum memory that a backend can keep\naround too bad. Patches such as [1] are trying to reduce that. Maybe\nwe can just keep a handful of blocks of 1KB, 8KB and 16KB around, or\nmore accurately put, ALLOCSET_SMALL_INITSIZE,\nALLOCSET_DEFAULT_INITSIZE and ALLOCSET_DEFAULT_INITSIZE * 2, so that\nit works correctly if someone adjusts those definitions.\n\nI think you'll want to look at what the maximum memory a backend can\nkeep around in context_freelists[] and not make the worst-case memory\nconsumption worse than it is today.\n\nI imagine this would be some new .c file in src/backend/utils/mmgr\nwhich aset.c, generation.c and slab.c each call a function from to see\nif we have any cached blocks of that size. 
You'd want to call that in\nall places we call malloc() from those files apart from when aset.c\nand generation.c malloc() for a dedicated block. You can probably get\naway with replacing all of the free() calls with a call to another\nfunction where you pass the pointer and the size of the block to have\nit decide if it's going to free() it or cache it. I doubt you need to\ncare too much if the block is from a dedicated allocation or a normal\nblock. We'd just always free() if it's not in the size classes that\nwe care about.\n\nDavid\n\n[1] https://commitfest.postgresql.org/42/3867/\n\n\n",
"msg_date": "Fri, 17 Feb 2023 18:03:37 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing System Allocator Thrashing of ExecutorState to Alleviate\n FDW-related Performance Degradations"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 12:03 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 17 Feb 2023 at 17:40, Jonah H. Harris <jonah.harris@gmail.com>\n> wrote:\n> > Yeah. There’s definitely a smarter and more reusable approach than I was\n> proposing. A lot of that code is fairly mature and I figured more people\n> wouldn’t want to alter it in such ways - but I’m up for it if an approach\n> like this is the direction we’d want to go in.\n>\n> Having something agnostic to if it's allocating a new context or\n> adding a block to an existing one seems like a good idea to me.\n>\n\nI like this idea.\n\n\n> I think the tricky part will be the discussion around which size\n> classes to keep around and in which cases can we use a larger\n> allocation without worrying too much that it'll be wasted. We also\n> don't really want to make the minimum memory that a backend can keep\n> around too bad. Patches such as [1] are trying to reduce that. Maybe\n> we can just keep a handful of blocks of 1KB, 8KB and 16KB around, or\n> more accurately put, ALLOCSET_SMALL_INITSIZE,\n> ALLOCSET_DEFAULT_INITSIZE and ALLOCSET_DEFAULT_INITSIZE * 2, so that\n> it works correctly if someone adjusts those definitions.\n>\n\nPer that patch and the general idea, what do you think of either:\n\n1. A single GUC, something like backend_keep_mem, that represents the\ncached memory we'd retain rather than send directly to free()?\n2. Multiple GUCs, one per block size?\n\nWhile #2 would give more granularity, I'm not sure it would necessarily be\nneeded. The main issue I'd see in that case would be the selection approach\nto block sizes to keep given a fixed amount of keep memory. 
We'd generally\nwant the majority of the next queries to make use of it as best as\npossible, so we'd either need each size to be equally represented or some\nheuristic.\n\nI don't really like #2, but threw it out there :)\n\nI think you'll want to look at what the maximum memory a backend can\n> keep around in context_freelists[] and not make the worst-case memory\n> consumption worse than it is today.\n>\n\nAgreed.\n\n\n> I imagine this would be some new .c file in src/backend/utils/mmgr\n> which aset.c, generation.c and slab.c each call a function from to see\n> if we have any cached blocks of that size. You'd want to call that in\n> all places we call malloc() from those files apart from when aset.c\n> and generation.c malloc() for a dedicated block. You can probably get\n> away with replacing all of the free() calls with a call to another\n> function where you pass the pointer and the size of the block to have\n> it decide if it's going to free() it or cache it.\n\n\nAgreed. I would see this as practically just a generic allocator free-list;\nis that how you view it also?\n\n\n> I doubt you need to care too much if the block is from a dedicated\n> allocation or a normal\n> block. We'd just always free() if it's not in the size classes that\n> we care about.\n>\n\nAgreed.\n\n--\nJonah H. Harris",
"msg_date": "Fri, 17 Feb 2023 11:46:03 -0500",
"msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing System Allocator Thrashing of ExecutorState to Alleviate\n FDW-related Performance Degradations"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-17 17:26:20 +1300, David Rowley wrote:\n> I didn't hear it mentioned explicitly here, but I suspect it's faster\n> when increasing the initial size due to the memory context caching\n> code that reuses aset MemoryContexts (see context_freelists[] in\n> aset.c). Since we reset the context before caching it, then it'll\n> remain fast when we can reuse a context, provided we don't need to do\n> a malloc for an additional block beyond the initial block that's kept\n> in the cache.\n\nI'm not so sure this is the case. Which is one of the reasons I'd really like\nto see a) memory context stats for executor context b) a CPU profile of the\nproblem c) a reproducer.\n\nJonah, did you just increase the initial size, or did you potentially also\nincrease the maximum block size?\n\nAnd did you increase ALLOCSET_DEFAULT_INITSIZE everywhere, or just passed a\nlarger block size in CreateExecutorState()? If the latter, the context\nfreelist wouldn't even come into play.\n\n\nAn 8MB max block size is pretty darn small if you have a workload that ends up\nwith gigabytes worth of blocks.\n\nAnd the problem also could just be that the default initial block size takes\ntoo long to ramp up to a reasonable block size. I think it's 20 blocks to get\nfrom ALLOCSET_DEFAULT_INITSIZE to ALLOCSET_DEFAULT_MAXSIZE. Even if you\nallocate a good bit more than 8MB, having to additionally go through 20\nsmaller chunks is going to be noticeable until you reach a good bit higher\nnumber of blocks.\n\n\n> Maybe we should think of a more general-purpose way of doing this\n> caching which just keeps a global-to-the-process dclist of blocks\n> laying around. We could see if we have any free blocks both when\n> creating the context and also when we need to allocate another block.\n\nNot so sure about that. I suspect the problem could just as well be the\nmaximum block size, leading to too many blocks being allocated. Perhaps we\nshould scale that to a certain fraction of work_mem, by default?\n\nEither way, I don't think we should go too deep without some data, too likely\nto miss the actual problem.\n\n\n\n> I see no reason why this couldn't be shared among the other context\n> types rather than keeping this cache stuff specific to aset.c. slab.c\n> might need to be pickier if the size isn't exactly what it needs, but\n> generation.c should be able to make use of it the same as aset.c\n> could. I'm unsure what'd we'd need in the way of size classing for\n> this, but I suspect we'd need to pay attention to that rather than do\n> things like hand over 16MBs of memory to some context that only wants\n> a 1KB initial block.\n\nPossible. I can see something like a generic \"free block\" allocator being\nuseful. Potentially with allocating the underlying memory with larger mmap()s\nthan we need for individual blocks.\n\n\nRandom note:\n\nI wonder if we should have a bitmap (in an int) in front of aset's\nfreelist. In a lot of cases we incur plenty of cache misses, just to find the\nfreelist bucket empty.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Feb 2023 09:52:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing System Allocator Thrashing of ExecutorState to\n Alleviate FDW-related Performance Degradations"
},
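The "bitmap in front of the freelist" note at the end of this mail can be illustrated with a small model (Python for brevity; purely illustrative and not aset.c's actual data layout or lookup policy): one integer records which buckets are non-empty, so finding a usable bucket is a single bit scan instead of probing empty list heads and eating cache misses.

```python
NUM_BUCKETS = 11                      # aset-style power-of-two size classes

freelists = [[] for _ in range(NUM_BUCKETS)]
nonempty = 0                          # bit k set <=> freelists[k] has a chunk

def push(bucket, chunk):
    """Return a chunk to its freelist bucket and mark the bucket non-empty."""
    global nonempty
    freelists[bucket].append(chunk)
    nonempty |= 1 << bucket

def pop_at_least(bucket):
    """Pop a chunk from `bucket` or any larger class, via one bit scan."""
    global nonempty
    candidates = nonempty >> bucket
    if candidates == 0:
        return None                   # no suitable cached chunk anywhere
    # index of the lowest set candidate bit, offset back to absolute bucket:
    k = bucket + (candidates & -candidates).bit_length() - 1
    chunk = freelists[k].pop()
    if not freelists[k]:
        nonempty &= ~(1 << k)         # bucket drained, clear its bit
    return chunk

push(3, "small chunk")
push(7, "large chunk")
```

The empty case costs one integer test rather than a pointer dereference per bucket, which is the cache-miss saving being suggested.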
{
"msg_contents": "Hi,\n\nOn 2023-02-17 09:52:01 -0800, Andres Freund wrote:\n> On 2023-02-17 17:26:20 +1300, David Rowley wrote:\n> Random note:\n>\n> I wonder if we should having a bitmap (in an int) in front of aset's\n> freelist. In a lot of cases we incur plenty cache misses, just to find the\n> freelist bucket empty.\n\nTwo somewhat related thoughts:\n\n1) We should move AllocBlockData->freeptr into AllocSetContext. It's only ever\n   used for the block at the head of ->blocks.\n\n   We completely unnecessarily incur more cache line misses due to this (and\n   waste a tiny bit of space).\n\n2) We should introduce an mcxt.c API to perform allocations that the\n   caller promises not to individually free. We've talked a bunch about\n   introducing a bump allocator memory context, but that requires using\n   dedicated memory contexts, which incurs noticeable space overhead, whereas\n   just having a separate function call for the existing memory contexts\n   doesn't have that issue.\n\n   For aset.c we should just allocate from set->freeptr, without going through\n   the freelist. Obviously we'd not round up to a power of 2. And likely, at\n   least outside of assert builds, we should not have a chunk header.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Feb 2023 10:30:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing System Allocator Thrashing of ExecutorState to\n Alleviate FDW-related Performance Degradations"
},
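Thought 2) above — allocations the caller promises never to free individually — amounts to bump allocation out of the current block. A minimal model (Python, illustrative only; the class and names are invented for this sketch): no chunk header, no power-of-two rounding, just alignment and an advancing free pointer.

```python
ALIGN = 8  # MAXALIGN-style alignment, assumed for illustration


class BumpBlock:
    def __init__(self, size):
        self.size = size
        self.freeptr = 0              # offset of the next free byte

    def alloc_no_free(self, nbytes):
        """Hand out nbytes the caller promises never to pfree:
        no chunk header is written, and the request isn't rounded
        up to a power of two -- only aligned."""
        start = (self.freeptr + ALIGN - 1) & ~(ALIGN - 1)
        if start + nbytes > self.size:
            return None               # a real context would grab a new block
        self.freeptr = start + nbytes
        return start                  # offset standing in for a pointer


blk = BumpBlock(8192)
a = blk.alloc_no_free(13)
b = blk.alloc_no_free(24)
```

Because there is no chunk header, operations such as repalloc or GetMemoryChunkSpace cannot work on such allocations and would have to be forbidden for them, which is the trade-off discussed in the follow-ups.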
{
"msg_contents": "On Tue, 21 Feb 2023 at 07:30, Andres Freund <andres@anarazel.de> wrote:\n> 2) We should introduce an API mcxt.c API to perform allocations that the\n> caller promises not to individually free.\n\nIt's not just pfree. Offhand, there's also repalloc,\nGetMemoryChunkSpace and GetMemoryChunkContext too.\n\nI am interested in a bump allocator for tuplesort.c. There it would be\nused in isolation and all the code which would touch pointers\nallocated by the bump allocator would be self-contained to the\ntuplesorting code.\n\nWhat use case do you have in mind?\n\nDavid\n\n\n",
"msg_date": "Tue, 21 Feb 2023 08:33:22 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing System Allocator Thrashing of ExecutorState to Alleviate\n FDW-related Performance Degradations"
},
{
"msg_contents": "Hi,\n\n\nOn 2023-02-21 08:33:22 +1300, David Rowley wrote:\n> On Tue, 21 Feb 2023 at 07:30, Andres Freund <andres@anarazel.de> wrote:\n> > 2) We should introduce an API mcxt.c API to perform allocations that the\n> > caller promises not to individually free.\n> \n> It's not just pfree. Offhand, there's also repalloc,\n> GetMemoryChunkSpace and GetMemoryChunkContext too.\n\nSure, and all of those should assert out / crash if done with the allocation\nfunction.\n\n\n> I am interested in a bump allocator for tuplesort.c. There it would be\n> used in isolation and all the code which would touch pointers\n> allocated by the bump allocator would be self-contained to the\n> tuplesorting code.\n> \n> What use case do you have in mind?\n\nE.g. the whole executor state tree (and likely also the plan tree) should be\nallocated that way. They're never individually freed. But we also allocate\nother things in the same context, and those do need to be individually\nfreeable. We could use a separate memory context, but that'd increase memory\nusage in many cases, because there'd be two different blocks being allocated\nfrom at the same time.\n\nTo me opting into this on a per-allocation basis seems likely to make this\nmore widely usable than requiring a distinct memory context type.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Feb 2023 11:46:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing System Allocator Thrashing of ExecutorState to\n Alleviate FDW-related Performance Degradations"
},
{
"msg_contents": "On Sat, 18 Feb 2023 at 06:52, Andres Freund <andres@anarazel.de> wrote:\n> And did you increase ALLOCSET_DEFAULT_INITSIZE everywhere, or just passed a\n> larger block size in CreateExecutorState()? If the latter,the context\n> freelist wouldn't even come into play.\n\nI think this piece of information is critical to confirm what the issue is.\n\n> A 8MB max block size is pretty darn small if you have a workload that ends up\n> with a gigabytes worth of blocks.\n\nWe should probably review that separately. These kinds of definitions\ndon't age well. The current ones appear about 23 years old now, so we\nmight be overdue to reconsider what they're set to.\n\n2002-12-15 21:01:34 +0000 150) #define ALLOCSET_DEFAULT_MINSIZE 0\n2000-06-28 03:33:33 +0000 151) #define ALLOCSET_DEFAULT_INITSIZE (8 * 1024)\n2000-06-28 03:33:33 +0000 152) #define ALLOCSET_DEFAULT_MAXSIZE (8 *\n1024 * 1024)\n\n... I recall having a desktop with 256MBs of RAM back then...\n\nLet's get to the bottom of where the problem is here before we\nconsider adjusting those. If the problem is unrelated to that then we\nshouldn't be discussing that here.\n\n> And the problem also could just be that the default initial blocks size takes\n> too long to ramp up to a reasonable block size. I think it's 20 blocks to get\n> from ALLOCSET_DEFAULT_INITSIZE to ALLOCSET_DEFAULT_MAXSIZE. Even if you\n> allocate a good bit more than 8MB, having to additionally go through 20\n> smaller chunks is going to be noticable until you reach a good bit higher\n> number of blocks.\n\nWell, let's try to help Johan get the information to us. I've attached\na quickly put together patch which adds some debug stuff to aset.c.\nJohan, if you have a suitable test instance to try this on, can you\nsend us the filtered DEBUG output from the log messages starting with\n\"AllocSet\" with and without your change? Just the output for just the\n2nd execution of the query in question is fine. 
The first execution\nis not useful as the cache of MemoryContexts may not be populated by\nthat time. It sounds like it's the foreign server that would need to\nbe patched with this to test it.\n\nIf you can send that in two files we should be able to easily see what\nhas changed in terms of malloc() calls between the two runs.\n\nDavid",
"msg_date": "Wed, 22 Feb 2023 14:45:39 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing System Allocator Thrashing of ExecutorState to Alleviate\n FDW-related Performance Degradations"
},
{
"msg_contents": "On Tue, Feb 21, 2023 at 2:46 AM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2023-02-21 08:33:22 +1300, David Rowley wrote:\n> > I am interested in a bump allocator for tuplesort.c. There it would be\n> > used in isolation and all the code which would touch pointers\n> > allocated by the bump allocator would be self-contained to the\n> > tuplesorting code.\n> >\n> > What use case do you have in mind?\n>\n> E.g. the whole executor state tree (and likely also the plan tree) should\nbe\n> allocated that way. They're never individually freed. But we also allocate\n> other things in the same context, and those do need to be individually\n> freeable. We could use a separate memory context, but that'd increase\nmemory\n> usage in many cases, because there'd be two different blocks being\nallocated\n> from at the same time.\n\nThat reminds me of this thread I recently stumbled across about memory\nmanagement of prepared statements:\n\nhttps://www.postgresql.org/message-id/20190726004124.prcb55bp43537vyw%40alap3.anarazel.de\n\nI recently heard of a technique for relative pointers that could enable\ntree structures within a single allocation.\n\nIf \"a\" needs to store the location of \"b\" relative to \"a\", it would be\ncalculated like\n\na = (char *) &b - (char *) &a;\n\n...then to find b again, do\n\ntypeof_b* b_ptr;\nb_ptr = (typeof_b* ) ((char *) &a + a);\n\nOne issue with this naive sketch is that zero would point to one's self,\nand it would be better if zero still meant \"invalid pointer\" so that\nmemset(0) does the right thing.\n\nUsing signed byte-sized offsets as an example, the range is -128 to 127, so\nwe can call -128 the invalid pointer, or in binary 0b1000_0000.\n\nTo interpret a raw zero as invalid, we need an encoding, and here we can\njust XOR it:\n\n#define Encode(a) a^0b1000_0000;\n#define Decode(a) a^0b1000_0000;\n\nThen, encode(-128) == 0 and decode(0) == -128, and memset(0) will do the\nright thing and that value will be 
decoded as invalid.\n\nConversely, this preserves the ability to point to self, if needed:\n\nencode(0) == -128 and decode(-128) == 0\n\n...so we can store any relative offset in the range -127..127, as well as\n\"invalid offset\". This extends to larger signed integer types in the\nobvious way.\n\nPutting the above two calculations together, the math ends up like this,\nwhich can be put into macros:\n\nabsolute to relative:\na = Encode((int32) (char *) &b - (char *) &a);\n\nrelative to absolute:\ntypeof_b* b_ptr;\nb_ptr = (typeof_b* ) ((char *) &a + Decode(a));\n\nI'm not yet familiar enough with parse/plan/execute trees to know if this\nwould work or not, but that might be a good thing to look into next cycle.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 25 Feb 2023 13:26:58 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing System Allocator Thrashing of ExecutorState to Alleviate\n FDW-related Performance Degradations"
}
] |
[
{
"msg_contents": "While working on [1] to make improvements in the query planner around\nthe speed to find EquivalenceMembers in an EquivalenceClass, because\nthat patch does have a large impact in terms of performance\nimprovements, some performance tests with that patch started to\nhighlight some other places that bottleneck the planner's performance.\n\nOne of those places is in generate_orderedappend_paths() when we find\nthat the required sort order is the same as the reverse of the\npartition order. In this case, we build a list of paths for each\npartition using the lcons() function. Since Lists are now arrays in\nPostgreSQL, lcons() isn't as efficient as it once was and it must\nmemmove the entire existing contents of the list up one element to\nmake way to prepend the new element. This is effectively quadratic and\nbecomes noticeable with a large number of partitions.\n\nOne way we could solve that is to just lappend() the new item and then\njust reverse the order of the list only when we need to. This has the\nadded advantage of removing a bunch of semi-duplicated code from\ngenerate_orderedappend_paths(). 
It also has a noticeable impact on the\nplanner's performance.\n\nI did a quick test with:\n\ncreate table lp (a int, b int) partition by list(a);\nselect 'create table lp'||x::text||' partition of lp for values\nin('||x::text||');' from generate_Series(1,10000)x;\n\\gexec\ncreate index on lp(a);\n\nUsing: psql -c \"explain (analyze, timing off) select * from lp order\nby a desc\" postgres | grep \"Planning Time\"\n\nmaster:\nPlanning Time: 6034.765 ms\nPlanning Time: 5919.914 ms\nPlanning Time: 5720.529 ms\n\nmaster + eclass idx (from [1]) (yes, it really is this much faster)\nPlanning Time: 549.262 ms\nPlanning Time: 489.023 ms\nPlanning Time: 497.803 ms\n\nmaster + eclass idx + list_reverse (attached)\nPlanning Time: 517.067 ms\nPlanning Time: 463.613 ms\nPlanning Time: 463.036 ms\n\nI suspect there won't be much controversy here and there's certainly\nnot much complexity, so in absence of anyone voicing an opinion here,\nI'm inclined to not waste too much time on this one and just get it\ndone. I'll leave it for a few days.\n\nDavid\n\n[1] https://postgr.es/m/flat/CAJ2pMkZNCgoUKSE+_5LthD+KbXKvq6h2hQN8Esxpxd+cxmgomg@mail.gmail.com",
"msg_date": "Fri, 17 Feb 2023 11:36:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Introduce list_reverse() to make lcons() usage less inefficient"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-17 11:36:40 +1300, David Rowley wrote:\n> While working on [1] to make improvements in the query planner around\n> the speed to find EquivalenceMembers in an EquivalenceClass, because\n> that patch does have a large impact in terms of performance\n> improvements, some performance tests with that patch started to\n> highlight some other places that bottleneck the planner's performance.\n> \n> One of those places is in generate_orderedappend_paths() when we find\n> that the required sort order is the same as the reverse of the\n> partition order. In this case, we build a list of paths for each\n> partition using the lcons() function. Since Lists are now arrays in\n> PostgreSQL, lcons() isn't as efficient as it once was and it must\n> memmove the entire existing contents of the list up one element to\n> make way to prepend the new element. This is effectively quadratic and\n> becomes noticeable with a large number of partitions.\n\nI have wondered before if we eventually ought to switch to embedded lists for\nsome planner structures, including paths. add_path() inserts/deletes at points\nin the middle of the list, which isn't great.\n\n\n> One way we could solve that is to just lappend() the new item and then\n> just reverse the order of the list only when we need to.\n\nThat's not generally the same as lcons() ing, but I guess it's fine here,\nbecause we build the lists from scratch, so the reversing actually yields the\ncorrect result.\n\nBut wouldn't an even cheaper way here be to iterate over the children in\nreverse order when match_partition_order_desc? We can do that efficiently\nnow. Looks like we don't have a readymade helper for it, but it'd be easy\nenough to add or open code.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Feb 2023 16:23:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Introduce list_reverse() to make lcons() usage less inefficient"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-17 11:36:40 +1300, David Rowley wrote:\n>> One of those places is in generate_orderedappend_paths() when we find\n>> that the required sort order is the same as the reverse of the\n>> partition order. In this case, we build a list of paths for each\n>> partition using the lcons() function. Since Lists are now arrays in\n>> PostgreSQL, lcons() isn't as efficient as it once was and it must\n>> memmove the entire existing contents of the list up one element to\n>> make way to prepend the new element. This is effectively quadratic and\n>> becomes noticeable with a large number of partitions.\n\n> I have wondered before if we eventually ought to switch to embedded lists for\n> some planner structures, including paths. add_path() inserts/deletes at points\n> in the middle of the list, which isn't great.\n\nI'm not hugely excited about that, because it presumes that paths appear\nin only one list, which isn't true. We could perhaps privilege\nRelOptInfo.pathlist over other cases, but that'd be asymmetrical and\nprobably bug-inducing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Feb 2023 21:19:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Introduce list_reverse() to make lcons() usage less inefficient"
},
{
"msg_contents": "On Fri, 17 Feb 2023 at 13:23, Andres Freund <andres@anarazel.de> wrote:\n> But wouldn't an even cheaper way here be to iterate over the children in\n> reverse order when match_partition_order_desc? We can do that efficiently\n> now. Looks like we don't have a readymade helper for it, but it'd be easy\n> enough to add or open code.\n\nThat seems fair. I think open coding is a better option. I had a go\nat foreach_reverse recently and decided to keep clear of it due to\nbehavioural differences with foreach_delete_current().\n\nI've attached a patch for this. It seems to have similar performance\nto the list_reverse()\n\n$ psql -c \"explain (analyze, timing off) select * from lp order by a\ndesc\" postgres | grep \"Planning Time\"\n Planning Time: 522.554 ms <- cold relcache\n Planning Time: 467.776 ms\n Planning Time: 466.424 ms\n\nDavid",
"msg_date": "Fri, 17 Feb 2023 16:35:41 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce list_reverse() to make lcons() usage less inefficient"
},
{
"msg_contents": "On Fri, 17 Feb 2023 at 16:35, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 17 Feb 2023 at 13:23, Andres Freund <andres@anarazel.de> wrote:\n> > But wouldn't an even cheaper way here be to iterate over the children in\n> > reverse order when match_partition_order_desc? We can do that efficiently\n> > now. Looks like we don't have a readymade helper for it, but it'd be easy\n> > enough to add or open code.\n>\n> That seems fair. I think open coding is a better option. I had a go\n> at foreach_reverse recently and decided to keep clear of it due to\n> behavioural differences with foreach_delete_current().\n\nI've pushed a patch for this now. Thank you for the idea.\n\nDavid\n\n\n",
"msg_date": "Mon, 20 Feb 2023 22:51:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce list_reverse() to make lcons() usage less inefficient"
}
] |
[
{
"msg_contents": "Hi hackers!\n\n From time to time I want to collect some stats from locks, activity\nand other stat views into one table from different time points. In\nthis case the \\watch psql command is very handy. However, it's not\ncurrently possible to specify the number of times a query is\nperformed.\nAlso, if we do not provide a timespan, 2 seconds are selected. But if\nwe provide an incorrect argument - 1 second is selected.\nPFA the patch that adds iteration count argument and makes timespan\nargument more consistent.\nWhat do you think?\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Thu, 16 Feb 2023 15:33:07 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On 17.02.23 00:33, Andrey Borodin wrote:\n> From time to time I want to collect some stats from locks, activity\n> and other stat views into one table from different time points. In\n> this case the \\watch psql command is very handy. However, it's not\n> currently possible to specify the number of times a query is\n> performed.\n\nThe watch command on my OS has a lot of options, but this is not one of \nthem. So probably no one has really needed it so far.\n\n> Also, if we do not provide a timespan, 2 seconds are selected. But if\n> we provide an incorrect argument - 1 second is selected.\n> PFA the patch that adds iteration count argument and makes timespan\n> argument more consistent.\n\nThat should probably be fixed.\n\n\n\n",
"msg_date": "Mon, 20 Feb 2023 15:23:27 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@enterprisedb.com) wrote:\n> On 17.02.23 00:33, Andrey Borodin wrote:\n> > From time to time I want to collect some stats from locks, activity\n> > and other stat views into one table from different time points. In\n> > this case the \\watch psql command is very handy. However, it's not\n> > currently possible to specify the number of times a query is\n> > performed.\n> \n> The watch command on my OS has a lot of options, but this is not one of\n> them. So probably no one has really needed it so far.\n\nwatch doesn't ... but top does, and I can certainly see how our watch\nhaving an iterations count could be helpful in much the same way as\ntop's batch mode does.\n\n> > Also, if we do not provide a timespan, 2 seconds are selected. But if\n> > we provide an incorrect argument - 1 second is selected.\n> > PFA the patch that adds iteration count argument and makes timespan\n> > argument more consistent.\n> \n> That should probably be fixed.\n\nAnd should probably be independent patches.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 20 Feb 2023 09:40:18 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "> > > Also, if we do not provide a timespan, 2 seconds are selected. But if\n> > > we provide an incorrect argument - 1 second is selected.\n> > > PFA the patch that adds iteration count argument and makes timespan\n> > > argument more consistent.\n> >\n> > That should probably be fixed.\n>\n> And should probably be independent patches.\n>\n\nPFA 2 independent patches.\n\nAlso, I've fixed a place to break after an iteration. Now if we have\ne.g. 2 iterations - there will be only 1 sleep time.\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Mon, 20 Feb 2023 10:45:53 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "At Mon, 20 Feb 2023 10:45:53 -0800, Andrey Borodin <amborodin86@gmail.com> wrote in \n> > > > Also, if we do not provide a timespan, 2 seconds are selected. But if\n> > > > we provide an incorrect argument - 1 second is selected.\n> > > > PFA the patch that adds iteration count argument and makes timespan\n> > > > argument more consistent.\n> > >\n> > > That should probably be fixed.\n> >\n> > And should probably be independent patches.\n> >\n> \n> PFA 2 independent patches.\n> \n> Also, I've fixed a place to break after an iteration. Now if we have\n> e.g. 2 iterations - there will be only 1 sleep time.\n\nIMHO the current behavior for digit inputs looks fine to me. I feel\nthat the command should selently fix the input to the default in the\ncase of digits inputs like '-1'. But that may not be the case for\neveryone. FWIW the patch still accepts an incorrect parameter '1abc'\nby ignoring any trailing garbage.\n\nIn any case, I reckon the error message should be more specific. In\nother words, it would be better if it suggests the expected input\nformat and range.\n\nRegarding the second patch, if we want \\watch to throw an error\nmessage for the garbage trailing to sleep times, I think we should do\nthe same for iteration counts. Additionally, we need to update the\ndocumentation.\n\n\nBy the way, when I looked this patch, I noticed that\nexec_command_bind() doesn't free the malloc'ed return strings from\npsql_scan_slash_option(). The same mistake is seen in some other\nplaces. I'll take a closer look and get back in another thread.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 21 Feb 2023 11:14:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "Thanks for looking into this!\n\nOn Mon, Feb 20, 2023 at 6:15 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> FWIW the patch still accepts an incorrect parameter '1abc'\n> by ignoring any trailing garbage.\nIndeed, fixed.\n>\n> In any case, I reckon the error message should be more specific. In\n> other words, it would be better if it suggests the expected input\n> format and range.\n+1.\nNot a range, actually, because upper limits have no sense for a user.\n\n>\n> Regarding the second patch, if we want \\watch to throw an error\n> message for the garbage trailing to sleep times, I think we should do\n> the same for iteration counts.\n+1, done.\n\n> Additionally, we need to update the\n> documentation.\nDone.\n\nThanks for the review!\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 26 Feb 2023 20:55:45 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "+1 for adding an iteration count argument to \\watch.\n\n+\t\t\tchar *opt_end;\n+\t\t\tsleep = strtod(opt, &opt_end);\n+\t\t\tif (sleep <= 0 || *opt_end)\n+\t\t\t{\n+\t\t\t\tpg_log_error(\"Watch period must be positive number, but argument is '%s'\", opt);\n+\t\t\t\tfree(opt);\n+\t\t\t\tresetPQExpBuffer(query_buf);\n+\t\t\t\treturn PSQL_CMD_ERROR;\n+\t\t\t}\n\nIs there any reason to disallow 0 for the sleep argument? I often use\ncommands like \"\\watch .1\" to run statements repeatedly with very little\ntime in between, and I'd use \"\\watch 0\" instead if it was available.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 8 Mar 2023 10:49:44 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 10:49 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Is there any reason to disallow 0 for the sleep argument? I often use\n> commands like \"\\watch .1\" to run statements repeatedly with very little\n> time in between, and I'd use \"\\watch 0\" instead if it was available.\n>\n\nYes, that makes sense! Thanks!\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 8 Mar 2023 16:22:47 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "+\t\t\t\tpg_log_error(\"Watch period must be non-negative number, but argument is '%s'\", opt);\n\nAfter looking around at the other error messages in this file, I think we\nshould make this more concise. Maybe something like\n\n\tpg_log_error(\"\\\\watch: invalid delay interval: %s\", opt);\n\n+\t\t\t\tfree(opt);\n+\t\t\t\tresetPQExpBuffer(query_buf);\n+\t\t\t\treturn PSQL_CMD_ERROR;\n\nIs this missing psql_scan_reset(scan_state)?\n\nI haven't had a chance to look closely at 0002 yet.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 9 Mar 2023 11:25:26 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 11:25 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> + pg_log_error(\"Watch period must be non-negative number, but argument is '%s'\", opt);\n>\n> After looking around at the other error messages in this file, I think we\n> should make this more concise. Maybe something like\n>\n> pg_log_error(\"\\\\watch: invalid delay interval: %s\", opt);\nIn the review above Kyotaro-san suggested that message should contain\ninformation on what it expects... So, maybe then\npg_log_error(\"\\\\watch interval must be non-negative number, but\nargument is '%s'\", opt); ?\nOr perhaps with articles? pg_log_error(\"\\\\watch interval must be a\nnon-negative number, but the argument is '%s'\", opt);\n\n>\n> + free(opt);\n> + resetPQExpBuffer(query_buf);\n> + return PSQL_CMD_ERROR;\n>\n> Is this missing psql_scan_reset(scan_state)?\nYes, fixed.\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 12 Mar 2023 13:05:39 -0700",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Sun, Mar 12, 2023 at 01:05:39PM -0700, Andrey Borodin wrote:\n> In the review above Kyotaro-san suggested that message should contain\n> information on what it expects... So, maybe then\n> pg_log_error(\"\\\\watch interval must be non-negative number, but\n> argument is '%s'\", opt); ?\n> Or perhaps with articles? pg_log_error(\"\\\\watch interval must be a\n> non-negative number, but the argument is '%s'\", opt);\n\n- HELP0(\" \\\\watch [SEC] execute query every SEC seconds\\n\");\n+ HELP0(\" \\\\watch [SEC [N]] execute query every SEC seconds N times\\n\");\n\nIs that really the interface we'd want to work with in the long-term?\nFor one, this does not give the option to specify only an interval\nwhile relying on the default number of seconds. This may be fine, but\nit does not strike me as the best choice. How about doing something\nmore extensible, for example:\n\\watch [ (option=value [, option=value] .. ) ] [SEC]\n\nI am not sure that this will be the last option we'll ever add to\n\\watch, so I'd rather have us choose a design more flexible than\nwhat's proposed here, in a way similar to \\g or \\gx.\n--\nMichael",
"msg_date": "Mon, 13 Mar 2023 10:17:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 10:17:12AM +0900, Michael Paquier wrote:\n> I am not sure that this will be the last option we'll ever add to\n> \\watch, so I'd rather have us choose a design more flexible than\n> what's proposed here, in a way similar to \\g or \\gx.\n\nWhile on it, I have some comments about 0001.\n\n- sleep = strtod(opt, NULL);\n- if (sleep <= 0)\n- sleep = 1;\n+ char *opt_end;\n+ sleep = strtod(opt, &opt_end);\n+ if (sleep < 0 || *opt_end)\n+ {\n+ pg_log_error(\"\\\\watch interval must be non-negative number, \"\n+ \"but argument is '%s'\", opt);\n+ free(opt);\n+ resetPQExpBuffer(query_buf);\n+ psql_scan_reset(scan_state);\n+ return PSQL_CMD_ERROR;\n+ }\n\nOkay by me to make this behavior a bit better, though it is not\nsomething I would backpatch as it can influence existing workflows,\neven if they worked in an inappropriate way.\n\nAnyway, are you sure that this is actually OK? It seems to me that\nthis needs to check for three things:\n- If sleep is a negative value.\n- errno should be non-zero.\n- *opt_end == opt.\n\nSo this needs three different error messages to show the exact error\nto the user? Wouldn't it be better to have a couple of regression\ntests, as well?\n--\nMichael",
"msg_date": "Mon, 13 Mar 2023 10:26:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "Michael, thanks for reviewing this!\n\nOn Sun, Mar 12, 2023 at 6:17 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Mar 12, 2023 at 01:05:39PM -0700, Andrey Borodin wrote:\n> > In the review above Kyotaro-san suggested that message should contain\n> > information on what it expects... So, maybe then\n> > pg_log_error(\"\\\\watch interval must be non-negative number, but\n> > argument is '%s'\", opt); ?\n> > Or perhaps with articles? pg_log_error(\"\\\\watch interval must be a\n> > non-negative number, but the argument is '%s'\", opt);\n>\n> - HELP0(\" \\\\watch [SEC] execute query every SEC seconds\\n\");\n> + HELP0(\" \\\\watch [SEC [N]] execute query every SEC seconds N times\\n\");\n>\n> Is that really the interface we'd want to work with in the long-term?\n> For one, this does not give the option to specify only an interval\n> while relying on the default number of seconds. This may be fine, but\n> it does not strike me as the best choice. How about doing something\n> more extensible, for example:\n> \\watch [ (option=value [, option=value] .. ) ] [SEC]\n>\n> I am not sure that this will be the last option we'll ever add to\n> \\watch, so I'd rather have us choose a design more flexible than\n> what's proposed here, in a way similar to \\g or \\gx.\nI've attached an implementation of this proposed interface (no tests\nand help message yet, though, sorry).\nI tried it a little bit, and it works for me.\nfire query 3 times\nSELECT 1;\\watch c=3 0\nor with 200ms interval\nSELECT 1;\\watch i=.2 c=3\nnonsense, but correct\nSELECT 1;\\watch i=1e-100 c=1\n\nActually Nik was asking for the feature. 
Nik, what do you think?\n\nOn Sun, Mar 12, 2023 at 6:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> While on it, I have some comments about 0001.\n>\n> - sleep = strtod(opt, NULL);\n> - if (sleep <= 0)\n> - sleep = 1;\n> + char *opt_end;\n> + sleep = strtod(opt, &opt_end);\n> + if (sleep < 0 || *opt_end)\n> + {\n> + pg_log_error(\"\\\\watch interval must be non-negative number, \"\n> + \"but argument is '%s'\", opt);\n> + free(opt);\n> + resetPQExpBuffer(query_buf);\n> + psql_scan_reset(scan_state);\n> + return PSQL_CMD_ERROR;\n> + }\n>\n> Okay by me to make this behavior a bit better, though it is not\n> something I would backpatch as it can influence existing workflows,\n> even if they worked in an inappropriate way.\n+1\n\n> Anyway, are you sure that this is actually OK? It seems to me that\n> this needs to check for three things:\n> - If sleep is a negative value.\n> - errno should be non-zero.\nI think we can treat errno and negative values equally.\n> - *opt_end == opt.\n>\n> So this needs three different error messages to show the exact error\n> to the user?\nI've tried this approach, but could not come up with sufficiently\ndifferent error messages...\n\n> Wouldn't it be better to have a couple of regression\n> tests, as well?\nAdded two tests.\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 12 Mar 2023 20:59:44 -0700",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Sun, Mar 12, 2023 at 08:59:44PM -0700, Andrey Borodin wrote:\n> I've tried this approach, but could not come up with sufficiently\n> different error messages...\n> \n>> Wouldn't it be better to have a couple of regression\n>> tests, as well?\n> Added two tests.\n\nIt should have three tests with one for ERANGE on top of the other\ntwo. Passing down a value like \"10e400\" should be enough to cause\nstrtod() to fail, as far as I know.\n\n+ if (sleep == 0)\n+ continue;\n\nWhile on it, forgot to comment on this one.. Indeed, this choice to\nauthorize 0 and not wait between two commands is more natural.\n\nI have tweaked things as bit as of the attached, and ran pgindent.\nWhat do you think?\n--\nMichael",
"msg_date": "Tue, 14 Mar 2023 09:26:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 5:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> I have tweaked things as bit as of the attached, and ran pgindent.\n> What do you think?\n>\n\nLooks good to me.\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Mon, 13 Mar 2023 18:14:18 -0700",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 06:14:18PM -0700, Andrey Borodin wrote:\n> Looks good to me.\n\nOk, thanks for looking. Let's wait a bit and see if others have an\nopinion to offer. At least, the CI is green.\n--\nMichael",
"msg_date": "Tue, 14 Mar 2023 11:36:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "At Tue, 14 Mar 2023 11:36:17 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Ok, thanks for looking. Let's wait a bit and see if others have an\n> opinion to offer. At least, the CI is green.\n\n+\t\t\t\tif (*opt_end)\n+\t\t\t\t\tpg_log_error(\"\\\\watch: incorrect interval value '%s'\", opt);\n+\t\t\t\telse if (errno == ERANGE)\n+\t\t\t\t\tpg_log_error(\"\\\\watch: out-of-range interval value '%s'\", opt);\n+\t\t\t\telse\n+\t\t\t\t\tpg_log_error(\"\\\\watch: interval value '%s' less than zero\", opt);\n\nI'm not sure if we need error messages for that resolution and I'm a\nbit happier to have fewer messages to translate:p. Merging the cases\nof ERANGE and negative values might be better. And I think we usually\nrefer to unparsable input as \"invalid\".\n\n\tif (*opt_end)\n\t pg_log_error(\"\\\\watch: invalid interval value '%s'\", opt);\n\telse\n\t pg_log_error(\"\\\\watch: interval value '%s' out of range\", opt);\n\n\nIt looks good other than that.\n\nBy the way, I noticed that \\watch erases the query buffer. That\nbehavior differs from other commands, such as \\g. And the difference\nis not documented. Why do we erase the query buffer only in the case\nof \\watch?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 14 Mar 2023 13:58:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 01:58:59PM +0900, Kyotaro Horiguchi wrote:\n> +\t\t\t\tif (*opt_end)\n> +\t\t\t\t\tpg_log_error(\"\\\\watch: incorrect interval value '%s'\", opt);\n> +\t\t\t\telse if (errno == ERANGE)\n> +\t\t\t\t\tpg_log_error(\"\\\\watch: out-of-range interval value '%s'\", opt);\n> +\t\t\t\telse\n> +\t\t\t\t\tpg_log_error(\"\\\\watch: interval value '%s' less than zero\", opt);\n> \n> I'm not sure if we need error messages for that resolution and I'm a\n> bit happier to have fewer messages to translate:p. Merging the cases\n> of ERANGE and negative values might be better. And I think we usually\n> refer to unparsable input as \"invalid\".\n> \n> \tif (*opt_end)\n> \t pg_log_error(\"\\\\watch: invalid interval value '%s'\", opt);\n> \telse\n> \t pg_log_error(\"\\\\watch: interval value '%s' out of range\", opt);\n\n+1, I don't think it's necessary to complicate these error messages too\nmuch. This code hasn't reported errors for nearly 10 years, and I'm not\naware of any complaints. I ѕtill think we could simplify this to \"\\watch:\ninvalid delay interval: %s\" and call it a day.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Mar 2023 12:03:00 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "At Tue, 14 Mar 2023 12:03:00 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Tue, Mar 14, 2023 at 01:58:59PM +0900, Kyotaro Horiguchi wrote:\n> > +\t\t\t\tif (*opt_end)\n> > +\t\t\t\t\tpg_log_error(\"\\\\watch: incorrect interval value '%s'\", opt);\n> > +\t\t\t\telse if (errno == ERANGE)\n> > +\t\t\t\t\tpg_log_error(\"\\\\watch: out-of-range interval value '%s'\", opt);\n> > +\t\t\t\telse\n> > +\t\t\t\t\tpg_log_error(\"\\\\watch: interval value '%s' less than zero\", opt);\n> > \n> > I'm not sure if we need error messages for that resolution and I'm a\n> > bit happier to have fewer messages to translate:p. Merging the cases\n> > of ERANGE and negative values might be better. And I think we usually\n> > refer to unparsable input as \"invalid\".\n> > \n> > \tif (*opt_end)\n> > \t pg_log_error(\"\\\\watch: invalid interval value '%s'\", opt);\n> > \telse\n> > \t pg_log_error(\"\\\\watch: interval value '%s' out of range\", opt);\n> \n> +1, I don't think it's necessary to complicate these error messages too\n> much. This code hasn't reported errors for nearly 10 years, and I'm not\n> aware of any complaints. I till think we could simplify this to \"\\watch:\n> invalid delay interval: %s\" and call it a day.\n\nI hesitated to propose such a level of simplification, but basically I\nwas alsothinking the same thing.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 Mar 2023 10:19:28 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 10:19:28AM +0900, Kyotaro Horiguchi wrote:\n> I hesitated to propose such a level of simplification, but basically I\n> was alsothinking the same thing.\n\nOkay, fine by me to use one single message. I'd rather still keep the\nthree tests, though, as they check the three conditions upon which the\nerror would be triggered.\n--\nMichael",
"msg_date": "Wed, 15 Mar 2023 10:24:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 6:25 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Mar 15, 2023 at 10:19:28AM +0900, Kyotaro Horiguchi wrote:\n> > I hesitated to propose such a level of simplification, but basically I\n> > was alsothinking the same thing.\n+1\n\n> Okay, fine by me to use one single message. I'd rather still keep the\n> three tests, though, as they check the three conditions upon which the\n> error would be triggered.\n\nPFA v8. Thanks!\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 14 Mar 2023 20:20:23 -0700",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 08:20:23PM -0700, Andrey Borodin wrote:\n> PFA v8. Thanks!\n\nLooks OK to me. I've looked as well at resetting query_buffer on\nfailure, which I guess is better this way because this is an\naccumulation of the previous results, right?\n--\nMichael",
"msg_date": "Wed, 15 Mar 2023 13:09:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "+\t\t\tsleep = strtod(opt, &opt_end);\n+\t\t\tif (sleep < 0 || *opt_end || errno == ERANGE)\n\nShould we set errno to 0 before calling strtod()?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Mar 2023 21:23:48 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 09:23:48PM -0700, Nathan Bossart wrote:\n> +\t\t\tsleep = strtod(opt, &opt_end);\n> +\t\t\tif (sleep < 0 || *opt_end || errno == ERANGE)\n> \n> Should we set errno to 0 before calling strtod()?\n\nYep. You are right.\n--\nMichael",
"msg_date": "Wed, 15 Mar 2023 16:58:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 04:58:49PM +0900, Michael Paquier wrote:\n> Yep. You are right.\n\nFixed that and applied 0001.\n\n+ valptr++;\n+ if (strncmp(\"i\", opt, strlen(\"i\")) == 0 ||\n+ strncmp(\"interval\", opt, strlen(\"interval\")) == 0)\n+ {\n\nDid you look at process_command_g_options() and if some consolidation\nwas possible? It would be nice to have APIs shaped so as more\nsub-commands could rely on the same facility in the future.\n\n- <term><literal>\\watch [ <replaceable class=\"parameter\">seconds</replaceable> ]</literal></term>\n+ <term><literal>\\watch [ <replaceable class=\"parameter\">seconds</replaceable> [ <replaceable class=\"parameter\">iterations</replaceable> ] ]</literal></term>\n\nThis set of changes is not reflected in the documentation.\n\nWith an interval in place, we could now automate some tests with\n\\watch where it does not fail. What do you think about adding a test\nwith a simple query, an interval of 0s and one iteration?\n--\nMichael",
"msg_date": "Thu, 16 Mar 2023 09:54:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 5:54 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Mar 15, 2023 at 04:58:49PM +0900, Michael Paquier wrote:\n> > Yep. You are right.\n>\n> Fixed that and applied 0001.\nGreat, thanks!\n\n>\n> + valptr++;\n> + if (strncmp(\"i\", opt, strlen(\"i\")) == 0 ||\n> + strncmp(\"interval\", opt, strlen(\"interval\")) == 0)\n> + {\n>\n> Did you look at process_command_g_options() and if some consolidation\n> was possible? It would be nice to have APIs shaped so as more\n> sub-commands could rely on the same facility in the future.\nI've tried, but they behave so differently. I could reuse only the\n\"char *valptr = strchr(opt, '=');\" thing from there :)\nAnd process_command_g_options() changes data in-place...\nActually, I'm not sure having \"i\" == \"interval\" and \"c\"==\"count\" is a\ngood idea here too. I mean I like it, but is it coherent?\nAlso I do not like repeating 4 times this 5 lines\n+ pg_log_error(\"\\\\watch: incorrect interval value '%s'\", valptr);\n+ free(opt);\n+ resetPQExpBuffer(query_buf);\n+ psql_scan_reset(scan_state);\n+ return PSQL_CMD_ERROR;\nBut I hesitate defining a new function for this...\n\n>\n> - <term><literal>\\watch [ <replaceable class=\"parameter\">seconds</replaceable> ]</literal></term>\n> + <term><literal>\\watch [ <replaceable class=\"parameter\">seconds</replaceable> [ <replaceable class=\"parameter\">iterations</replaceable> ] ]</literal></term>\n>\n> This set of changes is not reflected in the documentation.\nDone.\n\n> With an interval in place, we could now automate some tests with\n> \\watch where it does not fail. What do you think about adding a test\n> with a simple query, an interval of 0s and one iteration?\nDone. Also found a bug that we actually were doing N+1 iterations.\n\nThank you for working on this!\n\nBest regards, Andrey Borodin.",
"msg_date": "Thu, 16 Mar 2023 21:15:30 -0700",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "Hello,\n\nOn Thu, 16 Mar 2023 21:15:30 -0700\nAndrey Borodin <amborodin86@gmail.com> wrote:\n\n> On Wed, Mar 15, 2023 at 5:54 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Mar 15, 2023 at 04:58:49PM +0900, Michael Paquier wrote:\n> > > Yep. You are right.\n> >\n> > Fixed that and applied 0001.\n> Great, thanks!\n> \n> >\n> > + valptr++;\n> > + if (strncmp(\"i\", opt, strlen(\"i\")) == 0 ||\n> > + strncmp(\"interval\", opt, strlen(\"interval\")) == 0)\n> > + {\n> >\n> > Did you look at process_command_g_options() and if some consolidation\n> > was possible? It would be nice to have APIs shaped so as more\n> > sub-commands could rely on the same facility in the future.\n> I've tried, but they behave so differently. I could reuse only the\n> \"char *valptr = strchr(opt, '=');\" thing from there :)\n> And process_command_g_options() changes data in-place...\n> Actually, I'm not sure having \"i\" == \"interval\" and \"c\"==\"count\" is a\n> good idea here too. I mean I like it, but is it coherent?\n> Also I do not like repeating 4 times this 5 lines\n> + pg_log_error(\"\\\\watch: incorrect interval value '%s'\", valptr);\n> + free(opt);\n> + resetPQExpBuffer(query_buf);\n> + psql_scan_reset(scan_state);\n> + return PSQL_CMD_ERROR;\n> But I hesitate defining a new function for this...\n> \n> >\n> > - <term><literal>\\watch [ <replaceable class=\"parameter\">seconds</replaceable> ]</literal></term>\n> > + <term><literal>\\watch [ <replaceable class=\"parameter\">seconds</replaceable> [ <replaceable class=\"parameter\">iterations</replaceable> ] ]</literal></term>\n> >\n> > This set of changes is not reflected in the documentation.\n> Done.\n> \n> > With an interval in place, we could now automate some tests with\n> > \\watch where it does not fail. What do you think about adding a test\n> > with a simple query, an interval of 0s and one iteration?\n> Done. 
Also found a bug that we actually were doing N+1 iterations.\n\nHere is my review on the v9 patch.\n\n+ /* we do not prevent numerous names iterations like i=1 i=1 i=1 */\n+ have_sleep = true;\n\nWhy this is allowed here? I am not sure there is any reason to allow to specify\nmultiple \"interval\" options. (I would apologize it if I missed past discussion.)\n\n+ if (sleep < 0 || *opt_end || errno == ERANGE || have_sleep)\n+ {\n+ pg_log_error(\"\\\\watch: incorrect interval value '%s'\", \n\nHere, specifying an explicit \"interval\" option before an interval second without\noption is prohibited.\n\n postgres=# select 1 \\watch interval=3 4\n \\watch: incorrect interval value '4'\n\nI think it is ok, but this error message seems not user-friendly because,\nin the above example, interval values itself is correct, but it seems just\na syntax error. I wonder it is better to use \"watch interval must be specified\nonly once\" or such here, as the past patch.\n\n+ <para>\n+ If number of iterations is specified - query will be executed only\n+ given number of times.\n+ </para>\n\nIs it common to use \"-\" here? I think using comma like \n\"If number of iterations is specified, \"\nis natural.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 24 Mar 2023 14:15:41 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 10:15 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> Here is my review on the v9 patch.\n>\n> + /* we do not prevent numerous names iterations like i=1 i=1 i=1 */\n> + have_sleep = true;\n>\n> Why this is allowed here? I am not sure there is any reason to allow to specify\n> multiple \"interval\" options. (I would apologize it if I missed past discussion.)\nI do not know, it just seems normal to me. I've fixed this.\n\n> postgres=# select 1 \\watch interval=3 4\n> \\watch: incorrect interval value '4'\n>\n> I think it is ok, but this error message seems not user-friendly because,\n> in the above example, interval values itself is correct, but it seems just\n> a syntax error. I wonder it is better to use \"watch interval must be specified\n> only once\" or such here, as the past patch.\nDone.\n\n>\n> + <para>\n> + If number of iterations is specified - query will be executed only\n> + given number of times.\n> + </para>\n>\n> Is it common to use \"-\" here? I think using comma like\n> \"If number of iterations is specified, \"\n> is natural.\nDone.\n\nThank for the review!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 24 Mar 2023 19:31:52 -0700",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 10:32 PM Andrey Borodin <amborodin86@gmail.com>\nwrote:\n\n> On Thu, Mar 23, 2023 at 10:15 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >\n> > Here is my review on the v9 patch.\n> >\n> > + /* we do not prevent numerous names iterations like\n> i=1 i=1 i=1 */\n> > + have_sleep = true;\n> >\n> > Why this is allowed here? I am not sure there is any reason to allow to\n> specify\n> > multiple \"interval\" options. (I would apologize it if I missed past\n> discussion.)\n> I do not know, it just seems normal to me. I've fixed this.\n>\n> > postgres=# select 1 \\watch interval=3 4\n> > \\watch: incorrect interval value '4'\n> >\n> > I think it is ok, but this error message seems not user-friendly because,\n> > in the above example, interval values itself is correct, but it seems\n> just\n> > a syntax error. I wonder it is better to use \"watch interval must be\n> specified\n> > only once\" or such here, as the past patch.\n> Done.\n>\n> >\n> > + <para>\n> > + If number of iterations is specified - query will be executed\n> only\n> > + given number of times.\n> > + </para>\n> >\n> > Is it common to use \"-\" here? I think using comma like\n> > \"If number of iterations is specified, \"\n> > is natural.\n> Done.\n>\n> Thank for the review!\n>\n>\n> Best regards, Andrey Borodin.\n>\n\nOkay, I tested this. It handles bad param names, values correctly. Nice\nFeature, especially if you leave a 1hr task running and you need to step\naway...\nBuilt/Reviewed the Docs. They are correct.\nReviewed \\? command. It has the parameters updated, shown as optional\n\nMarked as Ready for Committer.\n\nOn Fri, Mar 24, 2023 at 10:32 PM Andrey Borodin <amborodin86@gmail.com> wrote:On Thu, Mar 23, 2023 at 10:15 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>\n> Here is my review on the v9 patch.\n>\n> + /* we do not prevent numerous names iterations like i=1 i=1 i=1 */\n> + have_sleep = true;\n>\n> Why this is allowed here? 
I am not sure there is any reason to allow to specify\n> multiple \"interval\" options. (I would apologize it if I missed past discussion.)\nI do not know, it just seems normal to me. I've fixed this.\n\n> postgres=# select 1 \\watch interval=3 4\n> \\watch: incorrect interval value '4'\n>\n> I think it is ok, but this error message seems not user-friendly because,\n> in the above example, interval values itself is correct, but it seems just\n> a syntax error. I wonder it is better to use \"watch interval must be specified\n> only once\" or such here, as the past patch.\nDone.\n\n>\n> + <para>\n> + If number of iterations is specified - query will be executed only\n> + given number of times.\n> + </para>\n>\n> Is it common to use \"-\" here? I think using comma like\n> \"If number of iterations is specified, \"\n> is natural.\nDone.\n\nThank for the review!\n\n\nBest regards, Andrey Borodin.Okay, I tested this. It handles bad param names, values correctly. Nice Feature, especially if you leave a 1hr task running and you need to step away...Built/Reviewed the Docs. They are correct.Reviewed \\? command. It has the parameters updated, shown as optionalMarked as Ready for Committer.",
"msg_date": "Tue, 4 Apr 2023 19:22:59 -0400",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "Kirk Wolak <wolakk@gmail.com> writes:\n> Marked as Ready for Committer.\n\nPushed with a pretty fair number of cosmetic changes.\n\nOne non-cosmetic change I made is that I didn't agree with your\ninterpretation of the execution count. IMO this ought to produce\nthree executions:\n\nregression=# select 1 \\watch c=3\nThu Apr 6 13:17:50 2023 (every 2s)\n\n ?column? \n----------\n 1\n(1 row)\n\nThu Apr 6 13:17:52 2023 (every 2s)\n\n ?column? \n----------\n 1\n(1 row)\n\nThu Apr 6 13:17:54 2023 (every 2s)\n\n ?column? \n----------\n 1\n(1 row)\n\nregression=# \n\nIf you write a semicolon first, you get four, but it's the semicolon\nproducing the first result not \\watch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Apr 2023 13:22:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On Thu, Apr 6, 2023 at 10:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Kirk Wolak <wolakk@gmail.com> writes:\n> > Marked as Ready for Committer.\n>\n> Pushed with a pretty fair number of cosmetic changes.\n\nGreat, thank you!\n\n> If you write a semicolon first, you get four, but it's the semicolon\n> producing the first result not \\watch.\n\nI did not know that. Well, I knew it in parts, but did not understand\nas a whole. Thanks!\n\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Thu, 6 Apr 2023 22:56:17 +0500",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "On 13.03.23 02:17, Michael Paquier wrote:\n> On Sun, Mar 12, 2023 at 01:05:39PM -0700, Andrey Borodin wrote:\n>> In the review above Kyotaro-san suggested that message should contain\n>> information on what it expects... So, maybe then\n>> pg_log_error(\"\\\\watch interval must be non-negative number, but\n>> argument is '%s'\", opt); ?\n>> Or perhaps with articles? pg_log_error(\"\\\\watch interval must be a\n>> non-negative number, but the argument is '%s'\", opt);\n> \n> - HELP0(\" \\\\watch [SEC] execute query every SEC seconds\\n\");\n> + HELP0(\" \\\\watch [SEC [N]] execute query every SEC seconds N times\\n\");\n> \n> Is that really the interface we'd want to work with in the long-term?\n> For one, this does not give the option to specify only an interval\n> while relying on the default number of seconds. This may be fine, but\n> it does not strike me as the best choice. How about doing something\n> more extensible, for example:\n> \\watch [ (option=value [, option=value] .. ) ] [SEC]\n> \n> I am not sure that this will be the last option we'll ever add to\n> \\watch, so I'd rather have us choose a design more flexible than\n> what's proposed here, in a way similar to \\g or \\gx.\n\nOn the other hand, we also have option syntax in \\connect that is like \n-foo. Would that be a better match here? We should maybe decide before \nwe diverge and propagate two different option syntaxes in backslash \ncommands.\n\n\n\n",
"msg_date": "Tue, 9 May 2023 17:11:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 13.03.23 02:17, Michael Paquier wrote:\n>> I am not sure that this will be the last option we'll ever add to\n>> \\watch, so I'd rather have us choose a design more flexible than\n>> what's proposed here, in a way similar to \\g or \\gx.\n\n> On the other hand, we also have option syntax in \\connect that is like \n> -foo. Would that be a better match here? We should maybe decide before \n> we diverge and propagate two different option syntaxes in backslash \n> commands.\n\nReasonable point to raise, but I think \\connect's -reuse-previous\nis in the minority. \\connect itself can use option=value syntax\nin the conninfo string (in fact, I guess -reuse-previous was spelled\nthat way in hopes of not being confusable with a conninfo option).\nWe also have option=value in the \\g and \\gx commands. I don't see\nany other psql metacommands that use options spelled like -foo.\n\nIn short, I'm satisfied with the current answer. There's still\ntime to debate it though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 May 2023 11:55:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql \\watch 2nd argument: iteration count"
}
] |
[
{
"msg_contents": "Hi,\n\nThe thread around https://postgr.es/m/CADUqk8Uqw5QaUqLdd-0SBCvZVncrE3JMJB9+yDwO_uMv_hTYCg@mail.gmail.com\nreminded me of the following:\n\nISTM that we really shouldn't use ALLOCSET_DEFAULT_SIZES for expression\ncontexts, as they most commonly see only a few small, or no, allocations.\n\nThat's true for OLTPish queries, but is is very often true even for analytics\nqueries.\n\nJust because I had it loaded, here's the executor state for TPCH-Q01, which is\npretty expression heavy:\n\nExecutorState: 65536 total in 4 blocks; 42512 free (11 chunks); 23024 used\n TupleSort main: 32832 total in 2 blocks; 7320 free (7 chunks); 25512 used\n TupleSort sort: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n Caller tuples: 8192 total in 1 blocks (9 chunks); 6488 free (0 chunks); 1704 used\n ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\nGrand total: 139328 bytes in 11 blocks; 88032 free (18 chunks); 51296 used\n\nAs you can see very little was allocated in the ExprContext's.\n\n\nISTM that we could save a reasonable amount of memory by using a smaller\ninitial size.\n\nNot so sure if a smaller max size should be used though.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Feb 2023 20:01:31 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Should CreateExprContext() be using ALLOCSET_DEFAULT_SIZES?"
},
{
"msg_contents": "> On 17 Feb 2023, at 05:01, Andres Freund <andres@anarazel.de> wrote:\n\n> ISTM that we really shouldn't use ALLOCSET_DEFAULT_SIZES for expression\n> contexts, as they most commonly see only a few small, or no, allocations.\n\nLooking into this I think you are correct.\n\n> ISTM that we could save a reasonable amount of memory by using a smaller\n> initial size.\n\nI experimented with the below trivial patch in CreateExprContext:\n\n- return CreateExprContextInternal(estate, ALLOCSET_DEFAULT_SIZES);\n+ return CreateExprContextInternal(estate, ALLOCSET_START_SMALL_SIZES);\n\nAcross various (unscientific) benchmarks, including expression heavy TPC-H\nqueries, I can see consistent reductions in memory use and tiny (within the\nmargin of error) increases in performance. More importantly, I didn't see a\ncase of slowdowns with this applied or any adverse effects in terms of memory\nuse. Whenever the initial size isn't big enough the expr runtime is likely\nexceeding the overhead from growing the allocation?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 5 May 2023 15:10:24 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should CreateExprContext() be using ALLOCSET_DEFAULT_SIZES?"
}
] |
[
{
"msg_contents": "Hi\n\nmore times I needed to get the extension's assigned namespace. There is\nalready a cooked function get_extension_schema, but it is static.\n\nI need to find a function with a known name, but possibly an unknown schema\nfrom a known extension.\n\nRegards\n\nPavel\n\nHimore times I needed to get the extension's assigned namespace. There is already a cooked function get_extension_schema, but it is static. I need to find a function with a known name, but possibly an unknown schema from a known extension.RegardsPavel",
"msg_date": "Fri, 17 Feb 2023 06:45:40 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "shoud be get_extension_schema visible?"
},
{
"msg_contents": "Hi\n\n\npá 17. 2. 2023 v 6:45 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> more times I needed to get the extension's assigned namespace. There is\n> already a cooked function get_extension_schema, but it is static.\n>\n> I need to find a function with a known name, but possibly an unknown\n> schema from a known extension.\n>\n\nHere is an patch\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>",
"msg_date": "Sun, 19 Feb 2023 06:40:39 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: shoud be get_extension_schema visible?"
},
{
"msg_contents": "Hi,\n\nOn Sun, Feb 19, 2023 at 06:40:39AM +0100, Pavel Stehule wrote:\n>\n> p� 17. 2. 2023 v 6:45 odes�latel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n> > more times I needed to get the extension's assigned namespace. There is\n> > already a cooked function get_extension_schema, but it is static.\n> >\n> > I need to find a function with a known name, but possibly an unknown\n> > schema from a known extension.\n> >\n>\n> Here is an patch\n\nThe patch is trivial so I don't have much to say about it, and it also seems\nquite reasonable generally.\n\nNote for other reviewers / committers: this is a something actually already\nwanted for 3rd party code. As an example, here's Pavel's code in plpgsql_check\nextension that internally has to duplicate this function (and deal with\ncompatibility):\nhttps://github.com/okbob/plpgsql_check/blob/master/src/catalog.c#L205\n\nI'm marking this entry as Ready For Committer.\n\n\n",
"msg_date": "Mon, 6 Mar 2023 15:33:31 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shoud be get_extension_schema visible?"
},
{
"msg_contents": "po 6. 3. 2023 v 8:33 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Sun, Feb 19, 2023 at 06:40:39AM +0100, Pavel Stehule wrote:\n> >\n> > pá 17. 2. 2023 v 6:45 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> > napsal:\n> >\n> > > more times I needed to get the extension's assigned namespace. There is\n> > > already a cooked function get_extension_schema, but it is static.\n> > >\n> > > I need to find a function with a known name, but possibly an unknown\n> > > schema from a known extension.\n> > >\n> >\n> > Here is an patch\n>\n> The patch is trivial so I don't have much to say about it, and it also\n> seems\n> quite reasonable generally.\n>\n> Note for other reviewers / committers: this is a something actually already\n> wanted for 3rd party code. As an example, here's Pavel's code in\n> plpgsql_check\n> extension that internally has to duplicate this function (and deal with\n> compatibility):\n> https://github.com/okbob/plpgsql_check/blob/master/src/catalog.c#L205\n>\n> I'm marking this entry as Ready For Committer.\n>\n\nThank you very much\n\nPavel\n\npo 6. 3. 2023 v 8:33 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:Hi,\n\nOn Sun, Feb 19, 2023 at 06:40:39AM +0100, Pavel Stehule wrote:\n>\n> pá 17. 2. 2023 v 6:45 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n> > more times I needed to get the extension's assigned namespace. There is\n> > already a cooked function get_extension_schema, but it is static.\n> >\n> > I need to find a function with a known name, but possibly an unknown\n> > schema from a known extension.\n> >\n>\n> Here is an patch\n\nThe patch is trivial so I don't have much to say about it, and it also seems\nquite reasonable generally.\n\nNote for other reviewers / committers: this is a something actually already\nwanted for 3rd party code. 
As an example, here's Pavel's code in plpgsql_check\nextension that internally has to duplicate this function (and deal with\ncompatibility):\nhttps://github.com/okbob/plpgsql_check/blob/master/src/catalog.c#L205\n\nI'm marking this entry as Ready For Committer.Thank you very muchPavel",
"msg_date": "Mon, 6 Mar 2023 08:34:49 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: shoud be get_extension_schema visible?"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 08:34:49AM +0100, Pavel Stehule wrote:\n>> Note for other reviewers / committers: this is a something actually already\n>> wanted for 3rd party code. As an example, here's Pavel's code in\n>> plpgsql_check\n>> extension that internally has to duplicate this function (and deal with\n>> compatibility):\n>> https://github.com/okbob/plpgsql_check/blob/master/src/catalog.c#L205\n\nI can see why you'd want that, so OK from here to provide this routine\nfor external consumption. Let's first wait a bit and see if others\nhave any kind of objections or comments.\n--\nMichael",
"msg_date": "Mon, 6 Mar 2023 16:44:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: shoud be get_extension_schema visible?"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 04:44:59PM +0900, Michael Paquier wrote:\n> I can see why you'd want that, so OK from here to provide this routine\n> for external consumption. Let's first wait a bit and see if others\n> have any kind of objections or comments.\n\nDone this one as of e20b1ea.\n--\nMichael",
"msg_date": "Wed, 8 Mar 2023 10:04:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: shoud be get_extension_schema visible?"
},
{
"msg_contents": "st 8. 3. 2023 v 2:04 odesílatel Michael Paquier <michael@paquier.xyz>\nnapsal:\n\n> On Mon, Mar 06, 2023 at 04:44:59PM +0900, Michael Paquier wrote:\n> > I can see why you'd want that, so OK from here to provide this routine\n> > for external consumption. Let's first wait a bit and see if others\n> > have any kind of objections or comments.\n>\n> Done this one as of e20b1ea.\n>\n\nThank you very much\n\nPavel\n\n\n> --\n> Michael\n>\n\nst 8. 3. 2023 v 2:04 odesílatel Michael Paquier <michael@paquier.xyz> napsal:On Mon, Mar 06, 2023 at 04:44:59PM +0900, Michael Paquier wrote:\n> I can see why you'd want that, so OK from here to provide this routine\n> for external consumption. Let's first wait a bit and see if others\n> have any kind of objections or comments.\n\nDone this one as of e20b1ea.Thank you very muchPavel \n--\nMichael",
"msg_date": "Wed, 8 Mar 2023 06:08:32 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: shoud be get_extension_schema visible?"
}
]
[
{
"msg_contents": "Hi,\n\nI was working on testing a major upgrade scenario using a mix of physical and\nlogical replication when I faced some unexpected problem leading to missing\nrows. Note that my motivation is to rely on physical replication / physical\nbackup to avoid recreating a node from scratch using logical replication, as\nthe initial sync with logical replication is much more costly and impacting\ncompared to pg_basebackup / restoring a physical backup, but the same problem\nexist if you just pg_upgrade a node that has subscriptions.\n\nThe problem is that pg_upgrade creates the subscriptions on the newly upgraded\nnode using \"WITH (connect = false)\", which seems expected as you obviously\ndon't want to try to connect to the publisher at that point. But then once the\nnewly upgraded node is restarted and ready to replace the previous one, unless\nI'm missing something there's absolutely no possibility to use the created\nsubscriptions without losing some data from the publisher.\n\nThe reason is that the subscription doesn't have a local list of relation to\nprocess until you refresh the subscription, but you can't refresh the\nsubscription without enabling it (and you can't enable it in a transaction),\nwhich means that you have to let the logical worker start, consume and ignore\nall changes that happened on the publisher side until the refresh happens.\n\nAn easy workaround that I tried is to allow something like\n\nALTER SUBSCRIPTION ... ENABLE WITH (refresh = true, copy_data = false)\n\nso that the refresh internally happens before the apply worker is started and\nyou just keep consuming the delta, which works on naive scenario.\n\nOne concern I have with this approach is that the default values for both\n\"refresh\" and \"copy_data\" for all other subcommands is \"true, but we would\nprobably need a different default value in that exact scenario (as we know we\nalready have the data). 
I think that it would otherwise be safe in my very\nspecific scenario, assuming that you created the slot beforehand and moved the\nslot's LSN at the promotion point, as even if you add non-empty tables to the\npublication you will only need the delta whether those were initially empty or\nnot given your initial physical replica state. Any other scenario would make\nthis new option dangerous, if not entirely useless, but not more than any of\nthe current commands that lead to refreshing a subscription and have the same\noptions I guess.\n\nAll in all, currently the only way to somewhat safely resume logical\nreplication after a pg_upgrade is to drop all the subscriptions that were\ntransferred during pg_upgrade on all databases and recreate them (using the\nexisting slots on the publisher side obviously), allowing the initial\nconnection. But this approach only works in the exact scenario I mentioned\n(physical to logical replication, or at least a case where *all* the tables\nwhere logically replicated prior to the pg_ugprade), otherwise you have to\nrecreate the follower node from scratch using logical repication.\n\nIs that indeed the current behavior, or did I miss something?\n\nIs this \"resume logical replication on pg_upgraded node\" something we want to\nsupport better? I was thinking that we could add a new pg_dump mode (maybe\nonly usable during pg_upgrade) that also restores the pg_subscription_rel\ncontent in each subscription or something like that. If not, should pg_upgrade\nkeep preserving the subscriptions as it doesn't seem safe to use them, or at\nleast document the hazards (I didn't find anything about it in the\ndocumentation)?\n\n\n",
"msg_date": "Fri, 17 Feb 2023 15:54:33 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 1:24 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> I was working on testing a major upgrade scenario using a mix of physical and\n> logical replication when I faced some unexpected problem leading to missing\n> rows. Note that my motivation is to rely on physical replication / physical\n> backup to avoid recreating a node from scratch using logical replication, as\n> the initial sync with logical replication is much more costly and impacting\n> compared to pg_basebackup / restoring a physical backup, but the same problem\n> exist if you just pg_upgrade a node that has subscriptions.\n>\n> The problem is that pg_upgrade creates the subscriptions on the newly upgraded\n> node using \"WITH (connect = false)\", which seems expected as you obviously\n> don't want to try to connect to the publisher at that point. But then once the\n> newly upgraded node is restarted and ready to replace the previous one, unless\n> I'm missing something there's absolutely no possibility to use the created\n> subscriptions without losing some data from the publisher.\n>\n> The reason is that the subscription doesn't have a local list of relation to\n> process until you refresh the subscription, but you can't refresh the\n> subscription without enabling it (and you can't enable it in a transaction),\n> which means that you have to let the logical worker start, consume and ignore\n> all changes that happened on the publisher side until the refresh happens.\n>\n> An easy workaround that I tried is to allow something like\n>\n> ALTER SUBSCRIPTION ... 
ENABLE WITH (refresh = true, copy_data = false)\n>\n> so that the refresh internally happens before the apply worker is started and\n> you just keep consuming the delta, which works on naive scenario.\n>\n> One concern I have with this approach is that the default values for both\n> \"refresh\" and \"copy_data\" for all other subcommands is \"true, but we would\n> probably need a different default value in that exact scenario (as we know we\n> already have the data). I think that it would otherwise be safe in my very\n> specific scenario, assuming that you created the slot beforehand and moved the\n> slot's LSN at the promotion point, as even if you add non-empty tables to the\n> publication you will only need the delta whether those were initially empty or\n> not given your initial physical replica state.\n>\n\nThis point is not very clear. Why would one just need delta even for new tables?\n\n> Any other scenario would make\n> this new option dangerous, if not entirely useless, but not more than any of\n> the current commands that lead to refreshing a subscription and have the same\n> options I guess.\n>\n> All in all, currently the only way to somewhat safely resume logical\n> replication after a pg_upgrade is to drop all the subscriptions that were\n> transferred during pg_upgrade on all databases and recreate them (using the\n> existing slots on the publisher side obviously), allowing the initial\n> connection. But this approach only works in the exact scenario I mentioned\n> (physical to logical replication, or at least a case where *all* the tables\n> where logically replicated prior to the pg_ugprade), otherwise you have to\n> recreate the follower node from scratch using logical repication.\n>\n\nI think if you dropped and recreated the subscriptions by retaining\nold slots, the replication should resume from where it left off before\nthe upgrade. 
Which scenario are you concerned about?\n\n> Is that indeed the current behavior, or did I miss something?\n>\n> Is this \"resume logical replication on pg_upgraded node\" something we want to\n> support better? I was thinking that we could add a new pg_dump mode (maybe\n> only usable during pg_upgrade) that also restores the pg_subscription_rel\n> content in each subscription or something like that. If not, should pg_upgrade\n> keep preserving the subscriptions as it doesn't seem safe to use them, or at\n> least document the hazards (I didn't find anything about it in the\n> documentation)?\n>\n>\n\nThere is a mention of this in pg_dump docs. See [1] (When dumping\nlogical replication subscriptions ...)\n\n[1] - https://www.postgresql.org/docs/devel/app-pgdump.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 17 Feb 2023 16:12:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Hi,\n\nOn Fri, Feb 17, 2023 at 04:12:54PM +0530, Amit Kapila wrote:\n> On Fri, Feb 17, 2023 at 1:24 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > An easy workaround that I tried is to allow something like\n> >\n> > ALTER SUBSCRIPTION ... ENABLE WITH (refresh = true, copy_data = false)\n> >\n> > so that the refresh internally happens before the apply worker is started and\n> > you just keep consuming the delta, which works on naive scenario.\n> >\n> > One concern I have with this approach is that the default values for both\n> > \"refresh\" and \"copy_data\" for all other subcommands is \"true, but we would\n> > probably need a different default value in that exact scenario (as we know we\n> > already have the data). I think that it would otherwise be safe in my very\n> > specific scenario, assuming that you created the slot beforehand and moved the\n> > slot's LSN at the promotion point, as even if you add non-empty tables to the\n> > publication you will only need the delta whether those were initially empty or\n> > not given your initial physical replica state.\n> >\n>\n> This point is not very clear. Why would one just need delta even for new tables?\n\nBecause in my scenario I'm coming from physical replication, so I know that I\ndid replicate everything until the promotion LSN. Any table later added in the\npublication is either already fully replicated until that LSN on the upgraded\nnode, so only the delta is needed, or has been created after that LSN. 
In the\nlatter case, the entirety of the table will be replicated with the logical\nreplication as a delta right?\n\n> > Any other scenario would make\n> > this new option dangerous, if not entirely useless, but not more than any of\n> > the current commands that lead to refreshing a subscription and have the same\n> > options I guess.\n> >\n> > All in all, currently the only way to somewhat safely resume logical\n> > replication after a pg_upgrade is to drop all the subscriptions that were\n> > transferred during pg_upgrade on all databases and recreate them (using the\n> > existing slots on the publisher side obviously), allowing the initial\n> > connection. But this approach only works in the exact scenario I mentioned\n> > (physical to logical replication, or at least a case where *all* the tables\n> > where logically replicated prior to the pg_ugprade), otherwise you have to\n> > recreate the follower node from scratch using logical repication.\n> >\n>\n> I think if you dropped and recreated the subscriptions by retaining\n> old slots, the replication should resume from where it left off before\n> the upgrade. Which scenario are you concerned about?\n\nI'm concerned about people not coming from physical replication. If you just\nhad some \"normal\" logical replication, you can't assume that you already have\nall the data from the upstream subscription. If it was modified and a non\nempty table is added, you might need to copy the data of part of the tables and\nkeep replicating for the rest. It's hard to be sure from a user point of view,\nand even if you knew you have no way to express it.\n\n> > Is that indeed the current behavior, or did I miss something?\n> >\n> > Is this \"resume logical replication on pg_upgraded node\" something we want to\n> > support better? I was thinking that we could add a new pg_dump mode (maybe\n> > only usable during pg_upgrade) that also restores the pg_subscription_rel\n> > content in each subscription or something like that. 
If not, should pg_upgrade\n> > keep preserving the subscriptions as it doesn't seem safe to use them, or at\n> > least document the hazards (I didn't find anything about it in the\n> > documentation)?\n> >\n> >\n>\n> There is a mention of this in pg_dump docs. See [1] (When dumping\n> logical replication subscriptions ...)\n\nIndeed, but it's barely saying \"It is then up to the user to reactivate the\nsubscriptions in a suitable way\" and \"It might also be appropriate to truncate\nthe target tables before initiating a new full table copy\". As I mentioned, I\ndon't think there's a suitable way to reactivate the subscription, at least if\nyou don't want to miss some records, so truncating all target tables is the\nonly fully safe way to proceed. It seems quite silly to have to do so just\nbecause pg_upgrade doesn't retain the list of relation per subscription.\n\n\n",
"msg_date": "Fri, 17 Feb 2023 23:35:01 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 9:05 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Feb 17, 2023 at 04:12:54PM +0530, Amit Kapila wrote:\n> > On Fri, Feb 17, 2023 at 1:24 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > An easy workaround that I tried is to allow something like\n> > >\n> > > ALTER SUBSCRIPTION ... ENABLE WITH (refresh = true, copy_data = false)\n> > >\n> > > so that the refresh internally happens before the apply worker is started and\n> > > you just keep consuming the delta, which works on naive scenario.\n> > >\n> > > One concern I have with this approach is that the default values for both\n> > > \"refresh\" and \"copy_data\" for all other subcommands is \"true, but we would\n> > > probably need a different default value in that exact scenario (as we know we\n> > > already have the data). I think that it would otherwise be safe in my very\n> > > specific scenario, assuming that you created the slot beforehand and moved the\n> > > slot's LSN at the promotion point, as even if you add non-empty tables to the\n> > > publication you will only need the delta whether those were initially empty or\n> > > not given your initial physical replica state.\n> > >\n> >\n> > This point is not very clear. Why would one just need delta even for new tables?\n>\n> Because in my scenario I'm coming from physical replication, so I know that I\n> did replicate everything until the promotion LSN. Any table later added in the\n> publication is either already fully replicated until that LSN on the upgraded\n> node, so only the delta is needed, or has been created after that LSN. 
In the\n> latter case, the entirety of the table will be replicated with the logical\n> replication as a delta right?\n>\n\nThat makes sense to me.\n\n> > > Any other scenario would make\n> > > this new option dangerous, if not entirely useless, but not more than any of\n> > > the current commands that lead to refreshing a subscription and have the same\n> > > options I guess.\n> > >\n> > > All in all, currently the only way to somewhat safely resume logical\n> > > replication after a pg_upgrade is to drop all the subscriptions that were\n> > > transferred during pg_upgrade on all databases and recreate them (using the\n> > > existing slots on the publisher side obviously), allowing the initial\n> > > connection. But this approach only works in the exact scenario I mentioned\n> > > (physical to logical replication, or at least a case where *all* the tables\n> > > where logically replicated prior to the pg_ugprade), otherwise you have to\n> > > recreate the follower node from scratch using logical repication.\n> > >\n> >\n> > I think if you dropped and recreated the subscriptions by retaining\n> > old slots, the replication should resume from where it left off before\n> > the upgrade. Which scenario are you concerned about?\n>\n> I'm concerned about people not coming from physical replication. If you just\n> had some \"normal\" logical replication, you can't assume that you already have\n> all the data from the upstream subscription. If it was modified and a non\n> empty table is added, you might need to copy the data of part of the tables and\n> keep replicating for the rest. It's hard to be sure from a user point of view,\n> and even if you knew you have no way to express it.\n>\n\nCan't the user create a separate publication for such newly added\ntables and a corresponding new subscription on the downstream node?\nNow, I think it would be a bit tricky if the user already has a\npublication defined with FOR ALL TABLES. 
In that case, we probably\nneed some way to specify FOR ALL TABLES EXCEPT (list of tables) which\nwe currently don't have.\n\n> > > Is that indeed the current behavior, or did I miss something?\n> > >\n> > > Is this \"resume logical replication on pg_upgraded node\" something we want to\n> > > support better? I was thinking that we could add a new pg_dump mode (maybe\n> > > only usable during pg_upgrade) that also restores the pg_subscription_rel\n> > > content in each subscription or something like that. If not, should pg_upgrade\n> > > keep preserving the subscriptions as it doesn't seem safe to use them, or at\n> > > least document the hazards (I didn't find anything about it in the\n> > > documentation)?\n> > >\n> > >\n> >\n> > There is a mention of this in pg_dump docs. See [1] (When dumping\n> > logical replication subscriptions ...)\n>\n> Indeed, but it's barely saying \"It is then up to the user to reactivate the\n> subscriptions in a suitable way\" and \"It might also be appropriate to truncate\n> the target tables before initiating a new full table copy\". As I mentioned, I\n> don't think there's a suitable way to reactivate the subscription, at least if\n> you don't want to miss some records, so truncating all target tables is the\n> only fully safe way to proceed. It seems quite silly to have to do so just\n> because pg_upgrade doesn't retain the list of relation per subscription.\n>\n\nI also don't know if there is any other safe way for newly added\ntables apart from the above suggestion to create separate publications\nbut that can work only in specific cases.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 18 Feb 2023 09:31:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Sat, Feb 18, 2023 at 09:31:30AM +0530, Amit Kapila wrote:\n> On Fri, Feb 17, 2023 at 9:05 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > I'm concerned about people not coming from physical replication. If you just\n> > had some \"normal\" logical replication, you can't assume that you already have\n> > all the data from the upstream subscription. If it was modified and a non\n> > empty table is added, you might need to copy the data of part of the tables and\n> > keep replicating for the rest. It's hard to be sure from a user point of view,\n> > and even if you knew you have no way to express it.\n> >\n>\n> Can't the user create a separate publication for such newly added\n> tables and a corresponding new subscription on the downstream node?\n\nYes that seems like a safe way to go, but it relies on users being very careful\nif they don't want to get corrupted logical standby, and I think it's\nimpossible to run any check to make sure that the subscription is adequate?\n\n> Now, I think it would be a bit tricky if the user already has a\n> publication defined with FOR ALL TABLES. In that case, we probably\n> need some way to specify FOR ALL TABLES EXCEPT (list of tables) which\n> we currently don't have.\n\nYes, and note that I rely on FOR ALL TABLES for my original physical to logical\nuse case.\n\n> >\n> > Indeed, but it's barely saying \"It is then up to the user to reactivate the\n> > subscriptions in a suitable way\" and \"It might also be appropriate to truncate\n> > the target tables before initiating a new full table copy\". As I mentioned, I\n> > don't think there's a suitable way to reactivate the subscription, at least if\n> > you don't want to miss some records, so truncating all target tables is the\n> > only fully safe way to proceed. 
It seems quite silly to have to do so just\n> > because pg_upgrade doesn't retain the list of relation per subscription.\n> >\n>\n> I also don't know if there is any other safe way for newly added\n> tables apart from the above suggestion to create separate publications\n> but that can work only in specific cases.\n\nI might be missing something, but what could go wrong if pg_upgrade could emit\na bunch of commands like:\n\nALTER SUBSCRIPTION subname ADD RELATION relid STATE 'x' LSN 'X/Y';\n\npg_upgrade already preserves the relation's oid, so we could restore the\nexact original state and then enabling the subscription would just work?\n\nWe could restrict this form to --binary only so we don't provide a way for\nusers to mess the data.\n\n\n",
"msg_date": "Sat, 18 Feb 2023 13:51:08 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Sat, Feb 18, 2023 at 11:21 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Feb 18, 2023 at 09:31:30AM +0530, Amit Kapila wrote:\n> > On Fri, Feb 17, 2023 at 9:05 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > I'm concerned about people not coming from physical replication. If you just\n> > > had some \"normal\" logical replication, you can't assume that you already have\n> > > all the data from the upstream subscription. If it was modified and a non\n> > > empty table is added, you might need to copy the data of part of the tables and\n> > > keep replicating for the rest. It's hard to be sure from a user point of view,\n> > > and even if you knew you have no way to express it.\n> > >\n> >\n> > Can't the user create a separate publication for such newly added\n> > tables and a corresponding new subscription on the downstream node?\n>\n> Yes that seems like a safe way to go, but it relies on users being very careful\n> if they don't want to get corrupted logical standby, and I think it's\n> impossible to run any check to make sure that the subscription is adequate?\n>\n\nI can't think of any straightforward way but one can probably take of\ndump of data on both nodes using pg_dump and then compare it.\n\n> > Now, I think it would be a bit tricky if the user already has a\n> > publication defined with FOR ALL TABLES. In that case, we probably\n> > need some way to specify FOR ALL TABLES EXCEPT (list of tables) which\n> > we currently don't have.\n>\n> Yes, and note that I rely on FOR ALL TABLES for my original physical to logical\n> use case.\n>\n\nOkay, but if we would have functionality like EXCEPT (list of tables),\none could do ALTER PUBLICATION .. 
before doing REFRESH on the\nsubscriber-side.\n\n> > >\n> > > Indeed, but it's barely saying \"It is then up to the user to reactivate the\n> > > subscriptions in a suitable way\" and \"It might also be appropriate to truncate\n> > > the target tables before initiating a new full table copy\". As I mentioned, I\n> > > don't think there's a suitable way to reactivate the subscription, at least if\n> > > you don't want to miss some records, so truncating all target tables is the\n> > > only fully safe way to proceed. It seems quite silly to have to do so just\n> > > because pg_upgrade doesn't retain the list of relation per subscription.\n> > >\n> >\n> > I also don't know if there is any other safe way for newly added\n> > tables apart from the above suggestion to create separate publications\n> > but that can work only in specific cases.\n>\n> I might be missing something, but what could go wrong if pg_upgrade could emit\n> a bunch of commands like:\n>\n> ALTER SUBSCRIPTION subname ADD RELATION relid STATE 'x' LSN 'X/Y';\n>\n\nHow will we know the STATE and LSN of each relation? But I think even\nif know that what is the guarantee that publisher side still has still\nretained the corresponding slots?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 18 Feb 2023 16:12:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Sat, Feb 18, 2023 at 04:12:52PM +0530, Amit Kapila wrote:\n> On Sat, Feb 18, 2023 at 11:21 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > > Now, I think it would be a bit tricky if the user already has a\n> > > publication defined with FOR ALL TABLES. In that case, we probably\n> > > need some way to specify FOR ALL TABLES EXCEPT (list of tables) which\n> > > we currently don't have.\n> >\n> > Yes, and note that I rely on FOR ALL TABLES for my original physical to logical\n> > use case.\n> >\n>\n> Okay, but if we would have functionality like EXCEPT (list of tables),\n> one could do ALTER PUBLICATION .. before doing REFRESH on the\n> subscriber-side.\n\nHonestly I'm not a huge fan of this approach. It feels hacky to have such a\nfeature, and doesn't even solve the problem on its own as you still lose\nrecords when reactivating the subscription unless you also provide an ALTER\nSUBSCRIPTION ENABLE WITH (refresh = true, copy_data = false), which will\nprobably require different defaults than the rest of the ALTER SUBSCRIPTION\nsubcommands that handle a refresh.\n\n> > > > Indeed, but it's barely saying \"It is then up to the user to reactivate the\n> > > > subscriptions in a suitable way\" and \"It might also be appropriate to truncate\n> > > > the target tables before initiating a new full table copy\". As I mentioned, I\n> > > > don't think there's a suitable way to reactivate the subscription, at least if\n> > > > you don't want to miss some records, so truncating all target tables is the\n> > > > only fully safe way to proceed. 
It seems quite silly to have to do so just\n> > > > because pg_upgrade doesn't retain the list of relation per subscription.\n> > > >\n> > >\n> > > I also don't know if there is any other safe way for newly added\n> > > tables apart from the above suggestion to create separate publications\n> > > but that can work only in specific cases.\n> >\n> > I might be missing something, but what could go wrong if pg_upgrade could emit\n> > a bunch of commands like:\n> >\n> > ALTER SUBSCRIPTION subname ADD RELATION relid STATE 'x' LSN 'X/Y';\n> >\n>\n> How will we know the STATE and LSN of each relation?\n\nIn the pg_subscription_rel catalog of the upgraded server? I didn't look in\ndetail on how the information is updated but I'm assuming that if logical\nreplication survives after a database restart it shouldn't be a problem to also\nfully dump it during pg_upgrade.\n\n> But I think even\n> if know that what is the guarantee that publisher side still has still\n> retained the corresponding slots?\n\nNo guarantee, but if you're just doing a pg_upgrade of a logical replica why\nwould you drop the replication slot? In any case the warning you mentioned in\npg_dump documentation would still apply and you would have to reenable it as\nneeded, the only difference is that you would actually be able to keep your\nlogical replication after a pg_upgrade if you need. If you dropped the\nreplication slot on the publisher side, then simply remove the publications on\nthe upgraded node too, or create a new one, exactly as you would do with the\ncurrent pg_upgrade workflow.\n\n\n",
"msg_date": "Sun, 19 Feb 2023 08:01:09 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Sun, Feb 19, 2023 at 5:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Feb 18, 2023 at 04:12:52PM +0530, Amit Kapila wrote:\n> > > > >\n> > > >\n> > > > I also don't know if there is any other safe way for newly added\n> > > > tables apart from the above suggestion to create separate publications\n> > > > but that can work only in specific cases.\n> > >\n> > > I might be missing something, but what could go wrong if pg_upgrade could emit\n> > > a bunch of commands like:\n> > >\n> > > ALTER SUBSCRIPTION subname ADD RELATION relid STATE 'x' LSN 'X/Y';\n> > >\n> >\n> > How will we know the STATE and LSN of each relation?\n>\n> In the pg_subscription_rel catalog of the upgraded server? I didn't look in\n> detail on how information are updated but I'm assuming that if logical\n> replication survives after a database restart it shouldn't be a problem to also\n> fully dump it during pg_upgrade.\n>\n> > But I think even\n> > if know that what is the guarantee that publisher side still has still\n> > retained the corresponding slots?\n>\n> No guarantee, but if you're just doing a pg_upgrade of a logical replica why\n> would you drop the replication slot? In any case the warning you mentioned in\n> pg_dump documentation would still apply and you would have to reenable it as\n> needed, the only difference is that you would actually be able to keep your\n> logical replication after a pg_upgrade if you need. If you dropped the\n> replication slot on the publisher side, then simply remove the publications on\n> the upgraded node too, or create a new one, exactly as you would do with the\n> current pg_upgrade workflow.\n>\n\nI think the current mechanism tries to provide more flexibility to the\nusers. OTOH, in some of the cases where users don't want to change\nanything in the logical replication (both upstream and downstream\nfunction as it is) after the upgrade then they need to do more work. 
I\nthink ideally there should be some option in pg_dump that allows us to\ndump the contents of pg_subscription_rel as well, so that it is easier\nfor users to continue replication after the upgrade. We can then use\nit for binary-upgrade mode as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 20 Feb 2023 11:07:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 11:07:42AM +0530, Amit Kapila wrote:\n> On Sun, Feb 19, 2023 at 5:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > > >\n> > > > I might be missing something, but what could go wrong if pg_upgrade could emit\n> > > > a bunch of commands like:\n> > > >\n> > > > ALTER SUBSCRIPTION subname ADD RELATION relid STATE 'x' LSN 'X/Y';\n> > > >\n> > >\n> > > How will we know the STATE and LSN of each relation?\n> >\n> > In the pg_subscription_rel catalog of the upgraded server? I didn't look in\n> > detail on how information are updated but I'm assuming that if logical\n> > replication survives after a database restart it shouldn't be a problem to also\n> > fully dump it during pg_upgrade.\n> >\n> > > But I think even\n> > > if know that what is the guarantee that publisher side still has still\n> > > retained the corresponding slots?\n> >\n> > No guarantee, but if you're just doing a pg_upgrade of a logical replica why\n> > would you drop the replication slot? In any case the warning you mentioned in\n> > pg_dump documentation would still apply and you would have to reenable it as\n> > needed, the only difference is that you would actually be able to keep your\n> > logical replication after a pg_upgrade if you need. If you dropped the\n> > replication slot on the publisher side, then simply remove the publications on\n> > the upgraded node too, or create a new one, exactly as you would do with the\n> > current pg_upgrade workflow.\n> >\n> \n> I think the current mechanism tries to provide more flexibility to the\n> users. OTOH, in some of the cases where users don't want to change\n> anything in the logical replication (both upstream and downstream\n> function as it is) after the upgrade then they need to do more work. I\n> think ideally there should be some option in pg_dump that allows us to\n> dump the contents of pg_subscription_rel as well, so that is easier\n> for users to continue replication after the upgrade. 
We can then use\n> it for binary-upgrade mode as well.\n\nIs there really a use case for dumping the content of pg_subscription_rel\noutside of pg_upgrade? I'm not particularly worried about the publisher going\naway or changing while pg_upgrade is running, but for a normal pg_dump /\npg_restore I don't really see how anyone would actually want to resume logical\nreplication from a pg_dump, especially since it's almost guaranteed that the\nnode will already have consumed data from the publication that won't be in the\ndump in the first place.\n\nAre you ok with the suggested syntax above (probably with extra parens to avoid\nadding new keywords), or do you have some better suggestion? I'm a bit worried\nabout adding some O(n) commands, as it can add some noticeable slow-down for\npg_upgrade-ing a logical replica, but I don't really see how to avoid that. Note\nthat if we make this option available to end-users, we will have to use the\nrelation name rather than its oid, which will make this option even more\nexpensive when restoring due to the extra lookups.\n\nFor the pg_upgrade use-case, do you see any reason to not restore the\npg_subscription_rel by default? Maybe having an option to not restore it would\nmake sense if it indeed adds noticeable overhead when publications have a lot of\ntables?\n\n\n",
"msg_date": "Mon, 20 Feb 2023 15:07:37 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 03:07:37PM +0800, Julien Rouhaud wrote:\n> On Mon, Feb 20, 2023 at 11:07:42AM +0530, Amit Kapila wrote:\n> >\n> > I think the current mechanism tries to provide more flexibility to the\n> > users. OTOH, in some of the cases where users don't want to change\n> > anything in the logical replication (both upstream and downstream\n> > function as it is) after the upgrade then they need to do more work. I\n> > think ideally there should be some option in pg_dump that allows us to\n> > dump the contents of pg_subscription_rel as well, so that is easier\n> > for users to continue replication after the upgrade. We can then use\n> > it for binary-upgrade mode as well.\n>\n> Is there really a use case for dumping the content of pg_subscription_rel\n> outside of pg_upgrade? I'm not particularly worried about the publisher going\n> away or changing while pg_upgrade is running , but for a normal pg_dump /\n> pg_restore I don't really see how anyone would actually want to resume logical\n> replication from a pg_dump, especially since it's almost guaranteed that the\n> node will already have consumed data from the publication that won't be in the\n> dump in the first place.\n>\n> Are you ok with the suggested syntax above (probably with extra parens to avoid\n> adding new keywords), or do you have some better suggestion? I'm a bit worried\n> about adding some O(n) commands, as it can add some noticeable slow-down for\n> pg_upgrade-ing logical replica, but I don't really see how to avoid that. Note\n> that if we make this option available to end-users, we will have to use the\n> relation name rather than its oid, which will make this option even more\n> expensive when restoring due to the extra lookups.\n>\n> For the pg_upgrade use-case, do you see any reason to not restore the\n> pg_subscription_rel by default? 
Maybe having an option to not restore it would\n> make sense if it indeed add noticeable overhead when publications have a lot of\n> tables?\n\nSince I didn't hear any objection I worked on a POC patch with this approach.\n\nFor now when pg_dump is invoked with --binary, it will always emit extra\ncommands to restore the relation list. This command is only allowed when the\nserver is started in binary upgrade mode.\n\nThe new command is of the form\n\nALTER SUBSCRIPTION name ADD TABLE (relid = X, state = 'Y', lsn = 'Z/Z')\n\nwith the lsn part being optional. I'm not sure if there should be some new\nregression test for that, as it would be a bit costly. Note that pg_upgrade of\na logical replica isn't covered by any regression test that I could find.\n\nI did test it manually though, and it fixes my original problem, allowing me to\nsafely resume logical replication by just re-enabling it. I didn't do any\nbenchmarking to see how much overhead it adds.",
"msg_date": "Wed, 22 Feb 2023 14:43:18 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 12:13 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Feb 20, 2023 at 03:07:37PM +0800, Julien Rouhaud wrote:\n> > On Mon, Feb 20, 2023 at 11:07:42AM +0530, Amit Kapila wrote:\n> > >\n> > > I think the current mechanism tries to provide more flexibility to the\n> > > users. OTOH, in some of the cases where users don't want to change\n> > > anything in the logical replication (both upstream and downstream\n> > > function as it is) after the upgrade then they need to do more work. I\n> > > think ideally there should be some option in pg_dump that allows us to\n> > > dump the contents of pg_subscription_rel as well, so that is easier\n> > > for users to continue replication after the upgrade. We can then use\n> > > it for binary-upgrade mode as well.\n> >\n> > Is there really a use case for dumping the content of pg_subscription_rel\n> > outside of pg_upgrade?\n\nI think the users who want to take a dump and restore the entire\ncluster may need it there for the same reason as pg_upgrade needs it.\nTBH, I have not seen such a request but this is what I imagine one\nwould expect if we provide this functionality via pg_upgrade.\n\n> > I'm not particularly worried about the publisher going\n> > away or changing while pg_upgrade is running , but for a normal pg_dump /\n> > pg_restore I don't really see how anyone would actually want to resume logical\n> > replication from a pg_dump, especially since it's almost guaranteed that the\n> > node will already have consumed data from the publication that won't be in the\n> > dump in the first place.\n> >\n> > Are you ok with the suggested syntax above (probably with extra parens to avoid\n> > adding new keywords), or do you have some better suggestion? I'm a bit worried\n> > about adding some O(n) commands, as it can add some noticeable slow-down for\n> > pg_upgrade-ing logical replica, but I don't really see how to avoid that. 
Note\n> > that if we make this option available to end-users, we will have to use the\n> > relation name rather than its oid, which will make this option even more\n> > expensive when restoring due to the extra lookups.\n> >\n> > For the pg_upgrade use-case, do you see any reason to not restore the\n> > pg_subscription_rel by default?\n\nAs I said earlier, one can very well say that giving more flexibility\n(in terms of where the publications will be after restore) after a\nrestore is a better idea. Also, we are doing the same till now without\nany major complaints about the same, so it makes sense to keep the\ncurrent behavior as default.\n\n> > Maybe having an option to not restore it would\n> > make sense if it indeed add noticeable overhead when publications have a lot of\n> > tables?\n\nYeah, that could be another reason to not do it by default.\n\n>\n> Since I didn't hear any objection I worked on a POC patch with this approach.\n>\n> For now when pg_dump is invoked with --binary, it will always emit extra\n> commands to restore the relation list.\n>\n> The new command is of the form\n>\n> ALTER SUBSCRIPTION name ADD TABLE (relid = X, state = 'Y', lsn = 'Z/Z')\n>\n> with the lsn part being optional.\n>\n\nBTW, do we restore the origin and its LSN after the upgrade? Because\nwithout that this won't be sufficient as that is required for apply\nworker to ensure that it is in sync with table sync workers.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 25 Feb 2023 11:24:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Sat, Feb 25, 2023 at 11:24:17AM +0530, Amit Kapila wrote:\n> On Wed, Feb 22, 2023 at 12:13 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > > Is there really a use case for dumping the content of pg_subscription_rel\n> > > outside of pg_upgrade?\n>\n> I think the users who want to take a dump and restore the entire\n> cluster may need it there for the same reason as pg_upgrade needs it.\n> TBH, I have not seen such a request but this is what I imagine one\n> would expect if we provide this functionality via pg_upgrade.\n\nBut the pg_subscription_rel data are only needed if you want to resume logical\nreplication from the exact previous state, otherwise you can always refresh the\nsubscription and it will retrieve the list of relations automatically (dealing\nwith initial sync and so on). It's hard to see how it could be happening with\na plain pg_dump.\n\nThe only usable scenario I can see would be to disable all subscriptions on the\nlogical replica, maybe make sure that no one does any writes on those tables if you\nwant to eventually switch over on the restored node, do a pg_dump(all), restore\nit and then resume the logical replication / subscription(s) on the restored\nserver. That's a lot of constraints for something that pg_upgrade deals with\nso much more efficiently. Maybe one plausible use case would be to split a\nsingle logical replica to N servers, one per database / publication or\nsomething like that. In that case pg_upgrade won't be that useful and if each\ntarget subset is small enough a pg_dump/pg_restore may be a viable option. 
But\nif that's a viable option then surely creating the logical replica from scratch\nusing normal logical table sync should be an even better option.\n\nI'm really worried that it's going to be a giant foot-gun that any user should\nreally avoid.\n\n> > > For the pg_upgrade use-case, do you see any reason to not restore the\n> > > pg_subscription_rel by default?\n>\n> As I said earlier, one can very well say that giving more flexibility\n> (in terms of where the publications will be after restore) after a\n> restore is a better idea. Also, we are doing the same till now without\n> any major complaints about the same, so it makes sense to keep the\n> current behavior as default.\n\nI'm a bit dubious that anyone actually tried to run pg_upgrade on a logical\nreplica and then kept using logical replication, as it's currently impossible\nto safely resume replication without truncating all target relations.\n\nAs I mentioned before, if we keep the current behavior as a default there\nshould be an explicit warning in the documentation stating that you need to\ntruncate all target relations before resuming logical replication as otherwise\nyou have a guarantee that you will lose data.\n\n> > > Maybe having an option to not restore it would\n> > > make sense if it indeed add noticeable overhead when publications have a lot of\n> > > tables?\n>\n> Yeah, that could be another reason to not do it default.\n\nI will do some benchmark with various number of relations, from high to\nunreasonable.\n\n> >\n> > Since I didn't hear any objection I worked on a POC patch with this approach.\n> >\n> > For now when pg_dump is invoked with --binary, it will always emit extra\n> > commands to restore the relation list. 
This command is only allowed when the\n> > server is started in binary upgrade mode.\n> >\n> > The new command is of the form\n> >\n> > ALTER SUBSCRIPTION name ADD TABLE (relid = X, state = 'Y', lsn = 'Z/Z')\n> >\n> > with the lsn part being optional.\n> >\n>\n> BTW, do we restore the origin and its LSN after the upgrade? Because\n> without that this won't be sufficient as that is required for apply\n> worker to ensure that it is in sync with table sync workers.\n\nWe currently don't, which is yet another sign that no one actually tried to\nresume logical replication after a pg_upgrade. That being said, trying to\npg_upgrade a node that's currently syncing relations seems like a bad idea\n(I didn't even think to try), but I guess it should also be supported. I will\nwork on that too. Assuming we add a new option for controlling either plain\npg_dump and/or pg_upgrade behavior, should this option control both\npg_subscription_rel and replication origins and their data or do we need more\ngranularity?\n\n\n",
"msg_date": "Sun, 26 Feb 2023 11:05:18 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 8:35 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Feb 25, 2023 at 11:24:17AM +0530, Amit Kapila wrote:\n> > >\n> > > The new command is of the form\n> > >\n> > > ALTER SUBSCRIPTION name ADD TABLE (relid = X, state = 'Y', lsn = 'Z/Z')\n> > >\n> > > with the lsn part being optional.\n> > >\n> >\n> > BTW, do we restore the origin and its LSN after the upgrade? Because\n> > without that this won't be sufficient as that is required for apply\n> > worker to ensure that it is in sync with table sync workers.\n>\n> We currently don't, which is yet another sign that no one actually tried to\n> resume logical replication after a pg_upgrade. That being said, trying to\n> pg_upgrade a node that's currently syncing relations seems like a bad idea\n> (I didn't even think to try), but I guess it should also be supported. I will\n> work on that too. Assuming we add a new option for controlling either plain\n> pg_dump and/or pg_upgrade behavior, should this option control both\n> pg_subscription_rel and replication origins and their data or do we need more\n> granularity?\n>\n\nMy vote would be to have one option for both. BTW, thinking some more\non this, how will we allow to continue replication after upgrading the\npublisher? During upgrade, we don't retain slots, so the replication\nwon't continue. I think after upgrading subscriber-node, user will\nneed to upgrade the publisher as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Feb 2023 15:39:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 03:39:18PM +0530, Amit Kapila wrote:\n>\n> BTW, thinking some more\n> on this, how will we allow to continue replication after upgrading the\n> publisher? During upgrade, we don't retain slots, so the replication\n> won't continue. I think after upgrading subscriber-node, user will\n> need to upgrade the publisher as well.\n\nThe scenario I'm interested in is to rely on logical replication only for the\nupgrade, so the end state (and start state) is to go back to physical\nreplication. In that case, I would just create new physical replica from the\npg_upgrade'd server and failover to that node, or rsync the previous publisher\nnode to make it a physical replica.\n\nBut even if you want to only rely on logical replication, I'm not sure why you\nwould want to keep the publisher node as a publisher node? I think that doing\nit this way will lead to a longer downtime compared to doing a failover on the\npg_upgrade'd node, make it a publisher and then move the former publisher node\nto a subscriber.\n\n\n",
"msg_date": "Tue, 28 Feb 2023 10:25:48 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 7:55 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Feb 27, 2023 at 03:39:18PM +0530, Amit Kapila wrote:\n> >\n> > BTW, thinking some more\n> > on this, how will we allow to continue replication after upgrading the\n> > publisher? During upgrade, we don't retain slots, so the replication\n> > won't continue. I think after upgrading subscriber-node, user will\n> > need to upgrade the publisher as well.\n>\n> The scenario I'm interested in is to rely on logical replication only for the\n> upgrade, so the end state (and start state) is to go back to physical\n> replication. In that case, I would just create new physical replica from the\n> pg_upgrade'd server and failover to that node, or rsync the previous publisher\n> node to make it a physical replica.\n>\n> But even if you want to only rely on logical replication, I'm not sure why you\n> would want to keep the publisher node as a publisher node? I think that doing\n> it this way will lead to a longer downtime compared to doing a failover on the\n> pg_upgrade'd node, make it a publisher and then move the former publisher node\n> to a subscriber.\n>\n\nI am not sure if this is what everyone usually follows because it sounds\nlike a lot of work to me. IIUC, to achieve this, one needs to recreate\nall the publications and subscriptions after changing the roles of\npublisher and subscriber. Can you please write steps to show exactly\nwhat you have in mind to avoid any misunderstanding?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 28 Feb 2023 08:56:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 08:56:37AM +0530, Amit Kapila wrote:\n> On Tue, Feb 28, 2023 at 7:55 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> >\n> > The scenario I'm interested in is to rely on logical replication only for the\n> > upgrade, so the end state (and start state) is to go back to physical\n> > replication. In that case, I would just create new physical replica from the\n> > pg_upgrade'd server and failover to that node, or rsync the previous publisher\n> > node to make it a physical replica.\n> >\n> > But even if you want to only rely on logical replication, I'm not sure why you\n> > would want to keep the publisher node as a publisher node? I think that doing\n> > it this way will lead to a longer downtime compared to doing a failover on the\n> > pg_upgrade'd node, make it a publisher and then move the former publisher node\n> > to a subscriber.\n> >\n>\n> I am not sure if this is usually everyone follows because it sounds\n> like a lot of work to me. IIUC, to achieve this, one needs to recreate\n> all the publications and subscriptions after changing the roles of\n> publisher and subscriber. Can you please write steps to show exactly\n> what you have in mind to avoid any misunderstanding?\n\nWell, as I mentioned I'm *not* interested in a logical-replication-only\nscenario. Logical replication is nice but it will always be less efficient\nthan physical replication, and some workloads also don't really play well with\nit. So while it can be a huge asset in some cases I'm for now looking at\nleveraging logical replication for the purpose of major upgrade only for a\nphysical replication cluster, so the publications and subscriptions are only\ntemporary and trashed after use.\n\nThat being said I was only saying that if I had to do a major upgrade of a\nlogical replication cluster this is probably how I would try to do it, to\nminimize downtime, even if there are probably *a lot* difficulties to\novercome.\n\n\n",
"msg_date": "Tue, 28 Feb 2023 12:48:27 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 7:35 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Any table later added in the\n> publication is either already fully replicated until that LSN on the upgraded\n> node, so only the delta is needed, or has been created after that LSN. In the\n> latter case, the entirety of the table will be replicated with the logical\n> replication as a delta right?\n\n\nWhat if we consider a slightly adjusted procedure?\n\n0. Temporarily, forbid running any DDL on the source cluster.\n1. On the source, create publication, replication slot and remember\nthe LSN for it\n2. Restore the target cluster to that LSN using recovery_target_lsn (PITR)\n3. Run pg_upgrade on the target cluster\n4. Only now, create subscription to target\n5. Wait until logical replication catches up\n6. Perform a switchover to the new cluster taking care of lags in sequences, etc\n7. Resume DDL when needed\n\nDo you see any data loss happening in this approach?\n\n\n",
"msg_date": "Tue, 28 Feb 2023 08:02:13 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 08:02:13AM -0800, Nikolay Samokhvalov wrote:\n> On Fri, Feb 17, 2023 at 7:35 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Any table later added in the\n> > publication is either already fully replicated until that LSN on the upgraded\n> > node, so only the delta is needed, or has been created after that LSN. In the\n> > latter case, the entirety of the table will be replicated with the logical\n> > replication as a delta right?\n>\n> What if we consider a slightly adjusted procedure?\n>\n> 0. Temporarily, forbid running any DDL on the source cluster.\n\nThis is (at least for me) a non-starter, as I want an approach that doesn't\nimpact the primary node, at least not too much.\n\nAlso, how would you do that? If you need some new infrastructure it means that\nyou can only upgrade nodes starting from pg16+, while my approach can upgrade\nany node that supports publications as long as the target version is pg16+.\n\nIt also raises some concerns: why prevent any DDL while e.g. creating a\ntemporary table shouldn't be a problem, same for renaming some underlying\nobject, adding indexes... You would have to curate a list of what exactly is\nallowed, which is never great.\n\nAlso, how exactly would you ensure that DDL was indeed forbidden since a long\nenough point in time rather than just \"currently\" forbidden at the time you do\nsome check?\n\n\n",
"msg_date": "Wed, 1 Mar 2023 08:43:10 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 10:18 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Feb 28, 2023 at 08:56:37AM +0530, Amit Kapila wrote:\n> > On Tue, Feb 28, 2023 at 7:55 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > >\n> > > The scenario I'm interested in is to rely on logical replication only for the\n> > > upgrade, so the end state (and start state) is to go back to physical\n> > > replication. In that case, I would just create new physical replica from the\n> > > pg_upgrade'd server and failover to that node, or rsync the previous publisher\n> > > node to make it a physical replica.\n> > >\n> > > But even if you want to only rely on logical replication, I'm not sure why you\n> > > would want to keep the publisher node as a publisher node? I think that doing\n> > > it this way will lead to a longer downtime compared to doing a failover on the\n> > > pg_upgrade'd node, make it a publisher and then move the former publisher node\n> > > to a subscriber.\n> > >\n> >\n> > I am not sure if this is usually everyone follows because it sounds\n> > like a lot of work to me. IIUC, to achieve this, one needs to recreate\n> > all the publications and subscriptions after changing the roles of\n> > publisher and subscriber. Can you please write steps to show exactly\n> > what you have in mind to avoid any misunderstanding?\n>\n> Well, as I mentioned I'm *not* interested in a logical-replication-only\n> scenario. Logical replication is nice but it will always be less efficient\n> than physical replication, and some workloads also don't really play well with\n> it. 
So while it can be a huge asset in some cases I'm for now looking at\n> leveraging logical replication for the purpose of major upgrade only for a\n> physical replication cluster, so the publications and subscriptions are only\n> temporary and trashed after use.\n>\n> That being said I was only saying that if I had to do a major upgrade of a\n> logical replication cluster this is probably how I would try to do it, to\n> minimize downtime, even if there are probably *a lot* difficulties to\n> overcome.\n>\n\nOkay, but it would be better if you list out your detailed steps. It\nwould be useful to support the new mechanism in this area if others\nalso find your steps to upgrade useful.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Mar 2023 11:51:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Mar 01, 2023 at 11:51:49AM +0530, Amit Kapila wrote:\n> On Tue, Feb 28, 2023 at 10:18 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Well, as I mentioned I'm *not* interested in a logical-replication-only\n> > scenario. Logical replication is nice but it will always be less efficient\n> > than physical replication, and some workloads also don't really play well with\n> > it. So while it can be a huge asset in some cases I'm for now looking at\n> > leveraging logical replication for the purpose of major upgrade only for a\n> > physical replication cluster, so the publications and subscriptions are only\n> > temporary and trashed after use.\n> >\n> > That being said I was only saying that if I had to do a major upgrade of a\n> > logical replication cluster this is probably how I would try to do it, to\n> > minimize downtime, even if there are probably *a lot* difficulties to\n> > overcome.\n> >\n>\n> Okay, but it would be better if you list out your detailed steps. It\n> would be useful to support the new mechanism in this area if others\n> also find your steps to upgrade useful.\n\nSure. 
Here are the overly detailed steps:\n\n 1) setup a normal physical replication cluster (pg_basebackup, restoring PITR,\n whatever), let's call the primary node \"A\" and replica node \"B\"\n 2) ensure WAL level is \"logical\" on the primary node A\n 3) create a logical replication slot on every (connectable) database (or just\n the one you're interested in if you don't want to preserve everything) on A\n 4) create a FOR ALL TABLE publication (again for every databases or just the\n one you're interested in)\n 5) wait for replication to be reasonably if not entirely up to date\n 6) promote the standby node B\n 7) retrieve the promotion LSN (from the XXXXXXXX.history file,\n pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn()...)\n 8) call pg_replication_slot_advance() with that LSN for all previously created\n logical replication slots on A\n 9) create a normal subscription on all wanted databases on the promoted node\n10) wait for it to catchup if needed on B\n12) stop the node B\n13) run pg_upgrade on B, creating the new node C\n14) start C, run the global ANALYZE and any sanity check needed (hopefully you\n would have validated that your application is compatible with that new\n version before this point)\n15) re-enable the subscription on C. This is currently not possible without\n losing data, the patch fixes that\n16) wait for it to catchup if needed\n17) create any missing relation and do the ALTER SUBSCRIPTION ... REFRESH if\n needed\n18) trash B\n19) create new nodes D, E... 
as physical replica from C if needed, possibly\nusing cheaper approach like pg_start_backup() / rsync / pg_stop_backup if\nneeded\n20) switchover to C and trash A (or convert it to another replica if you want)\n21) trash the publications on C on all databases\n\nAs noted the step 15 is currently problematic, and is also problematic in any\nvariation of that scenario that doesn't require you to entirely recreate the\nnode C from scratch using logical replication, which is what I want to avoid.\n\nThis isn't terribly complicated but requires to be really careful if you don't\nwant to end up with an incorrect node C. This approach is also currently not\nentirely ideal, but hopefully logical replication of sequences and DDL will\nremove the main sources of downtime when upgrading using logical replication.\n\nMy ultimate goal is to provide some tooling to do that in a much simpler way.\nMaybe a new \"promote to logical\" action that would take care of steps 2 to 9.\nUsers would therefore only have to do this \"promotion to logical\", and then run\npg_upgrade and create a new physical replication cluster if they want.\n\n\n",
"msg_date": "Wed, 1 Mar 2023 14:55:27 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 4:43 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Feb 28, 2023 at 08:02:13AM -0800, Nikolay Samokhvalov wrote:\n> > 0. Temporarily, forbid running any DDL on the source cluster.\n>\n> This is (at least for me) a non starter, as I want an approach that doesn't\n> impact the primary node, at least not too much.\n...\n> Also, how exactly would you ensure that indeed DDL were forbidden since a long\n> enough point in time rather than just \"currently\" forbidden at the time you do\n> some check?\n\nThanks for your response. I didn't expect that DDL part would attract\nattention, my message was not about DDL... – the DDL part was there\njust to show that the recipe I described is possible for any PG\nversion that supports logical replication.\n\nUsually, people perform upgrades involving logical replication using full\ninitialization at the logical level – at least all posts and articles I\ncould find talk about that. Meanwhile, on one hand, for large DBs, logical\ncopying is hard (slow, holding xmin horizon, etc.), and on the other\nhand, physical replica can be transformed to logical (using the trick\nwith recovery_target_lsn, syncing the state with the slot's LSN) and\ninitialization at physical level works much better for large\ndatabases. But there is a problem with logical replication when we run\npg_upgrade – as discussed in this thread. So I just wanted to mention\nthat if we change the order of actions and first run pg_upgrade, and\nonly then create publication, there should not be a problem anymore.\n\n\n",
"msg_date": "Wed, 1 Mar 2023 07:56:47 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Mar 01, 2023 at 07:56:47AM -0800, Nikolay Samokhvalov wrote:\n> On Tue, Feb 28, 2023 at 4:43 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Tue, Feb 28, 2023 at 08:02:13AM -0800, Nikolay Samokhvalov wrote:\n> > > 0. Temporarily, forbid running any DDL on the source cluster.\n> >\n> > This is (at least for me) a non starter, as I want an approach that doesn't\n> > impact the primary node, at least not too much.\n> ...\n> > Also, how exactly would you ensure that indeed DDL were forbidden since a long\n> > enough point in time rather than just \"currently\" forbidden at the time you do\n> > some check?\n>\n> Thanks for your response. I didn't expect that DDL part would attract\n> attention, my message was not about DDL... – the DDL part was there\n> just to show that the recipe I described is possible for any PG\n> version that supports logical replication.\n\nWell, yes but I already mentioned that in my original email as \"dropping all\nsubscriptions and recreating them\" is obviously the same as simply creating\nthem later. I don't even think that preventing DDL is necessary.\n\nOne really important detail you forgot though is that you need to create the\nsubscription using \"copy_data = false\". Not hard to do, but that's not the\ndefault so it's yet another trap users can fall into when trying to do a major\nversion upgrade that can lead to a corrupted logical replica.\n\n\n",
"msg_date": "Thu, 2 Mar 2023 09:23:50 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 12:25 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Mar 01, 2023 at 11:51:49AM +0530, Amit Kapila wrote:\n> > On Tue, Feb 28, 2023 at 10:18 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> >\n> > Okay, but it would be better if you list out your detailed steps. It\n> > would be useful to support the new mechanism in this area if others\n> > also find your steps to upgrade useful.\n>\n> Sure. Here are the overly detailed steps:\n>\n> 1) setup a normal physical replication cluster (pg_basebackup, restoring PITR,\n> whatever), let's call the primary node \"A\" and replica node \"B\"\n> 2) ensure WAL level is \"logical\" on the primary node A\n> 3) create a logical replication slot on every (connectable) database (or just\n> the one you're interested in if you don't want to preserve everything) on A\n> 4) create a FOR ALL TABLE publication (again for every databases or just the\n> one you're interested in)\n> 5) wait for replication to be reasonably if not entirely up to date\n> 6) promote the standby node B\n> 7) retrieve the promotion LSN (from the XXXXXXXX.history file,\n> pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn()...)\n> 8) call pg_replication_slot_advance() with that LSN for all previously created\n> logical replication slots on A\n>\n\nHow are these slots used? Do subscriptions use these slots?\n\n> 9) create a normal subscription on all wanted databases on the promoted node\n> 10) wait for it to catchup if needed on B\n> 12) stop the node B\n> 13) run pg_upgrade on B, creating the new node C\n> 14) start C, run the global ANALYZE and any sanity check needed (hopefully you\n> would have validated that your application is compatible with that new\n> version before this point)\n> 15) re-enable the subscription on C. 
This is currently not possible without\n> losing data, the patch fixes that\n> 16) wait for it to catchup if needed\n> 17) create any missing relation and do the ALTER SUBSCRIPTION ... REFRESH if\n> needed\n> 18) trash B\n> 19) create new nodes D, E... as physical replica from C if needed, possibly\n> using cheaper approach like pg_start_backup() / rsync / pg_stop_backup if\n> needed\n> 20) switchover to C and trash A (or convert it to another replica if you want)\n> 21) trash the publications on C on all databases\n>\n> As noted the step 15 is currently problematic, and is also problematic in any\n> variation of that scenario that doesn't require you to entirely recreate the\n> node C from scratch using logical replication, which is what I want to avoid.\n>\n> This isn't terribly complicated but requires to be really careful if you don't\n> want to end up with an incorrect node C. This approach is also currently not\n> entirely ideal, but hopefully logical replication of sequences and DDL will\n> remove the main sources of downtime when upgrading using logical replication.\n>\n\nI think there are good chances that one can make mistakes following\nall the above steps unless she is an expert.\n\n> My ultimate goal is to provide some tooling to do that in a much simpler way.\n> Maybe a new \"promote to logical\" action that would take care of steps 2 to 9.\n> Users would therefore only have to do this \"promotion to logical\", and then run\n> pg_upgrade and create a new physical replication cluster if they want.\n>\n\nWhy don't we try to support the direct upgrade of logical replication\nnodes? Have you tried to analyze what are the obstacles and whether we\ncan have solutions for those? For example, one of the challenges is to\nsupport the upgrade of slots, can we copy (from the old cluster) and\nrecreate them in the new cluster by resetting LSNs? 
We can also reset\norigins during the upgrade of subscribers and recommend upgrading the\nsubscriber node first.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Mar 2023 15:47:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Mar 02, 2023 at 03:47:53PM +0530, Amit Kapila wrote:\n> On Wed, Mar 1, 2023 at 12:25 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > 1) setup a normal physical replication cluster (pg_basebackup, restoring PITR,\n> > whatever), let's call the primary node \"A\" and replica node \"B\"\n> > 2) ensure WAL level is \"logical\" on the primary node A\n> > 3) create a logical replication slot on every (connectable) database (or just\n> > the one you're interested in if you don't want to preserve everything) on A\n> > 4) create a FOR ALL TABLE publication (again for every databases or just the\n> > one you're interested in)\n> > 5) wait for replication to be reasonably if not entirely up to date\n> > 6) promote the standby node B\n> > 7) retrieve the promotion LSN (from the XXXXXXXX.history file,\n> > pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn()...)\n> > 8) call pg_replication_slot_advance() with that LSN for all previously created\n> > logical replication slots on A\n> >\n>\n> How are these slots used? Do subscriptions use these slots?\n\nYes, as this is the only way to make sure that you replicate everything since\nthe promotion, and only once. To be more precise, something like that:\n\nCREATE SUBSCRIPTION db_xxx_subscription\n CONNECTION 'dbname=db_xxx user=...'\n PUBLICATION sub_for_db_xxx\n WITH (create_slot = false,\n slot_name = 'slot_for_db_xxx',\n copy_data = false);\n\n> > 9) create a normal subscription on all wanted databases on the promoted node\n> > 10) wait for it to catchup if needed on B\n> > 12) stop the node B\n> > 13) run pg_upgrade on B, creating the new node C\n> > 14) start C, run the global ANALYZE and any sanity check needed (hopefully you\n> > would have validated that your application is compatible with that new\n> > version before this point)\n> > 15) re-enable the subscription on C. 
This is currently not possible without\n> > losing data, the patch fixes that\n> > 16) wait for it to catchup if needed\n> > 17) create any missing relation and do the ALTER SUBSCRIPTION ... REFRESH if\n> > needed\n> > 18) trash B\n> > 19) create new nodes D, E... as physical replica from C if needed, possibly\n> > using cheaper approach like pg_start_backup() / rsync / pg_stop_backup if\n> > needed\n> > 20) switchover to C and trash A (or convert it to another replica if you want)\n> > 21) trash the publications on C on all databases\n> >\n> > As noted the step 15 is currently problematic, and is also problematic in any\n> > variation of that scenario that doesn't require you to entirely recreate the\n> > node C from scratch using logical replication, which is what I want to avoid.\n> >\n> > This isn't terribly complicated but requires to be really careful if you don't\n> > want to end up with an incorrect node C. This approach is also currently not\n> > entirely ideal, but hopefully logical replication of sequences and DDL will\n> > remove the main sources of downtime when upgrading using logical replication.\n> >\n>\n> I think there are good chances that one can make mistakes following\n> all the above steps unless she is an expert.\n\nAssuming we do fix pg_upgrade behavior with subscriptions, there isn't much\nroom for error compared to other scenarios:\n\n- pg_upgrade has been there for ages and contains a lot of sanity checks.\n People already use it and AFAIK it's not a major pain point, apart from the\n cases where it can be slow\n- ALTER SUBSCRIPTION ... REFRESH will complain if tables are missing locally\n- similarly, the logical replica will complain if you're missing some other DDL\n locally\n- you only create replicas if you had some in the first place, so it's something\n you should already know how to do. 
If not, you didn't have any before the\n upgrade and you still won't have any after\n\n> > My ultimate goal is to provide some tooling to do that in a much simpler way.\n> > Maybe a new \"promote to logical\" action that would take care of steps 2 to 9.\n> > Users would therefore only have to do this \"promotion to logical\", and then run\n> > pg_upgrade and create a new physical replication cluster if they want.\n> >\n>\n> Why don't we try to support the direct upgrade of logical replication\n> nodes? Have you tried to analyze what are the obstacles and whether we\n> can have solutions for those? For example, one of the challenges is to\n> support the upgrade of slots, can we copy (from the old cluster) and\n> recreate them in the new cluster by resetting LSNs? We can also reset\n> origins during the upgrade of subscribers and recommend to first\n> upgrade the subscriber node.\n\nI'm not sure I get your question. This whole thread is about direct upgrade of\nlogical replication nodes, at least the subscribers, and what is currently\npreventing it.\n\nFor the publisher nodes, that may be something nice to support (I'm assuming it\ncould be useful for more complex replication setups) but I'm not interested in\nthat at the moment as my goal is to reduce downtime for major upgrades of\nphysical replicas, thus *not* doing pg_upgrade of the primary node, whether\nphysical or logical. I don't see why it couldn't be done later on, if/when\nsomeone has a use case for it.\n\n\n",
"msg_date": "Thu, 2 Mar 2023 18:50:56 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
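[Editor's note] The publisher-side half of the scenario discussed above (steps 3, 4 and 8) can be sketched as follows. This is an illustration only: the slot and publication names mirror the hypothetical `slot_for_db_xxx` / `sub_for_db_xxx` names from the CREATE SUBSCRIPTION example in the thread, and the LSN is a placeholder for the promotion LSN retrieved in step 7.

```sql
-- On primary node A, in each database you want to preserve (steps 3 and 4).
-- Names are the hypothetical ones used in the thread's example.
SELECT pg_create_logical_replication_slot('slot_for_db_xxx', 'pgoutput');
CREATE PUBLICATION sub_for_db_xxx FOR ALL TABLES;

-- After promoting B and retrieving its promotion LSN (steps 6 to 8),
-- fast-forward the slot on A so decoding starts at the promotion point:
SELECT pg_replication_slot_advance('slot_for_db_xxx', '0/3000148'::pg_lsn);
```

Running these against a live server requires wal_level = logical on A (step 2).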
{
"msg_contents": "On Thu, Mar 2, 2023 at 4:21 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Mar 02, 2023 at 03:47:53PM +0530, Amit Kapila wrote:\n> >\n> > Why don't we try to support the direct upgrade of logical replication\n> > nodes? Have you tried to analyze what are the obstacles and whether we\n> > can have solutions for those? For example, one of the challenges is to\n> > support the upgrade of slots, can we copy (from the old cluster) and\n> > recreate them in the new cluster by resetting LSNs? We can also reset\n> > origins during the upgrade of subscribers and recommend to first\n> > upgrade the subscriber node.\n>\n> I'm not sure I get your question. This whole thread is about direct upgrade of\n> logical replication nodes, at least the subscribers, and what is currently\n> preventing it.\n>\n\nIt is only about subscribers and nothing about publishers.\n\n> For the publisher nodes, that may be something nice to support (I'm assuming it\n> could be useful for more complex replication setups) but I'm not interested in\n> that at the moment as my goal is to reduce downtime for major upgrade of\n> physical replica, thus *not* doing pg_upgrade of the primary node, whether\n> physical or logical. I don't see why it couldn't be done later on, if/when\n> someone has a use case for it.\n>\n\nI thought there is value if we provide a way to upgrade both publisher\nand subscriber. Now, you came up with a use case linking it to a\nphysical replica where allowing an upgrade of only subscriber nodes is\nuseful. It is possible that users find your steps easy to perform and\ndidn't find them error-prone but it may be better to get some\nauthentication of the same. I haven't yet analyzed all the steps in\ndetail but let's see what others think.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 4 Mar 2023 11:43:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Sat, 4 Mar 2023, 14:13 Amit Kapila, <amit.kapila16@gmail.com> wrote:\n\n>\n> > For the publisher nodes, that may be something nice to support (I'm\n> assuming it\n> > could be useful for more complex replication setups) but I'm not\n> interested in\n> > that at the moment as my goal is to reduce downtime for major upgrade of\n> > physical replica, thus *not* doing pg_upgrade of the primary node,\n> whether\n> > physical or logical. I don't see why it couldn't be done later on,\n> if/when\n> > someone has a use case for it.\n> >\n>\n> I thought there is value if we provide a way to upgrade both publisher\n> and subscriber.\n\n\nit's still unclear to me whether it's actually achievable on the publisher\nside, as running pg_upgrade leaves a \"hole\" in the WAL stream and resets\nthe timeline, among other possible difficulties. Now I don't know much\nabout logical replication internals so I'm clearly not the best person to\nanswer those questions.\n\nNow, you came up with a use case linking it to a\n> physical replica where allowing an upgrade of only subscriber nodes is\n> useful. It is possible that users find your steps easy to perform and\n> didn't find them error-prone but it may be better to get some\n> authentication of the same. I haven't yet analyzed all the steps in\n> detail but let's see what others think.\n>\n\nIt's been quite some time since and no one seemed to chime in or object.\nIMO doing a major version upgrade with limited downtime (so something\nfaster than stopping postgres and running pg_upgrade) has always been\ndifficult and never prevented anyone from doing it, so I don't think that\nit should be a blocker for what I'm suggesting here, especially since the\ncurrent behavior of pg_upgrade on a subscriber node is IMHO broken.\n\nIs there something that can be done for pg16? 
I was thinking that having a\nfix for the normal and easy case could be acceptable: only allowing\npg_upgrade to optionally, and not by default, preserve the subscription\nrelations IFF all subscriptions only have tables in ready state. Different\nstates should be transient, and it's easy to check as a user beforehand and\nalso easy to check during pg_upgrade, so it seems like an acceptable\nlimitation (which I personally see as a good sanity check, but YMMV). It\ncould be lifted in later releases if wanted anyway.\n\nIt's unclear to me whether this limited scope would also require\npreserving the replication origins, but having looked at the code I don't\nthink it would be much of a problem as the local LSN doesn't have to be\npreserved. In both cases I would prefer a single option (e.g.\n--preserve-logical-subscription-state or something like that) to avoid too\nmany complications. Similarly, I still don't see any sensible use case for\nallowing such an option in a normal pg_dump so I'd rather not expose that.\n\n\n",
"msg_date": "Wed, 8 Mar 2023 14:56:14 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 12:26 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, 4 Mar 2023, 14:13 Amit Kapila, <amit.kapila16@gmail.com> wrote:\n>>\n>>\n>> > For the publisher nodes, that may be something nice to support (I'm assuming it\n>> > could be useful for more complex replication setups) but I'm not interested in\n>> > that at the moment as my goal is to reduce downtime for major upgrade of\n>> > physical replica, thus *not* doing pg_upgrade of the primary node, whether\n>> > physical or logical. I don't see why it couldn't be done later on, if/when\n>> > someone has a use case for it.\n>> >\n>>\n>> I thought there is value if we provide a way to upgrade both publisher\n>> and subscriber.\n>\n>\n> it's still unclear to me whether it's actually achievable on the publisher side, as running pg_upgrade leaves a \"hole\" in the WAL stream and resets the timeline, among other possible difficulties. Now I don't know much about logical replication internals so I'm clearly not the best person to answer those questions.\n>\n\nI think that is the part we need to analyze and see what are the\nchallenges there. One part of the challenge is that we need to\npreserve slots that have some WAL locations like restart_lsn,\nconfirmed_flush and we need WAL from those locations for decoding. I\nhaven't analyzed this but isn't it possible that, on a clean shutdown,\nwe confirm that all the WAL has been sent and confirmed by the logical\nsubscriber, in which case I think truncating WAL in pg_upgrade\nshouldn't be a problem?\n\n>> Now, you came up with a use case linking it to a\n>> physical replica where allowing an upgrade of only subscriber nodes is\n>> useful. It is possible that users find your steps easy to perform and\n>> didn't find them error-prone but it may be better to get some\n>> authentication of the same. 
I haven't yet analyzed all the steps in\n>> detail but let's see what others think.\n>\n>\n> It's been quite some time since and no one seemed to chime in or object. IMO doing a major version upgrade with limited downtime (so something faster than stopping postgres and running pg_upgrade) has always been difficult and never prevented anyone from doing it, so I don't think that it should be a blocker for what I'm suggesting here, especially since the current behavior of pg_upgrade on a subscriber node is IMHO broken.\n>\n> Is there something that can be done for pg16? I was thinking that having a fix for the normal and easy case could be acceptable: only allowing pg_upgrade to optionally, and not by default, preserve the subscription relations IFF all subscriptions only have tables in ready state. Different states should be transient, and it's easy to check as a user beforehand and also easy to check during pg_upgrade, so it seems like an acceptable limitations (which I personally see as a good sanity check, but YMMV). It could be lifted in later releases if wanted anyway.\n>\n> It's unclear to me whether this limited scope would also require to preserve the replication origins, but having looked at the code I don't think it would be much of a problem as the local LSN doesn't have to be preserved.\n>\n\nI think we need to preserve replication origins as they help us to\ndetermine the WAL location from where to start the streaming after the\nupgrade. If we don't preserve those then from which location will the\nsubscriber start streaming? We don't want to replicate the WAL which\nhas already been sent.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Mar 2023 12:05:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Hi,\n\nOn Thu, Mar 09, 2023 at 12:05:36PM +0530, Amit Kapila wrote:\n> On Wed, Mar 8, 2023 at 12:26 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Is there something that can be done for pg16? I was thinking that having a\n> > fix for the normal and easy case could be acceptable: only allowing\n> > pg_upgrade to optionally, and not by default, preserve the subscription\n> > relations IFF all subscriptions only have tables in ready state. Different\n> > states should be transient, and it's easy to check as a user beforehand and\n> > also easy to check during pg_upgrade, so it seems like an acceptable\n> > limitations (which I personally see as a good sanity check, but YMMV). It\n> > could be lifted in later releases if wanted anyway.\n> >\n> > It's unclear to me whether this limited scope would also require to\n> > preserve the replication origins, but having looked at the code I don't\n> > think it would be much of a problem as the local LSN doesn't have to be\n> > preserved.\n> >\n>\n> I think we need to preserve replication origins as they help us to\n> determine the WAL location from where to start the streaming after the\n> upgrade. If we don't preserve those then from which location will the\n> subscriber start streaming?\n\nIt would start from the slot's information on the publisher side, but I guess\nthere's no guarantee that this will be accurate in all cases.\n\n> We don't want to replicate the WAL which\n> has already been sent.\n\nYeah I agree. I added support to also preserve the subscription's replication\norigin information, a new --preserve-subscription-state (better naming welcome)\ndocumented option for pg_upgrade to optionally ask for this new mode, and a\nsimilar (but undocumented) option for pg_dump that only works with\n--binary-upgrade and added a check in pg_upgrade that all relations are in 'r'\n(ready) mode. Patch v2 attached.",
"msg_date": "Thu, 9 Mar 2023 16:34:56 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
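[Editor's note] The two pre-upgrade conditions discussed in this message (all subscription relations in 'r' state, and a valid remote_lsn for each subscription's replication origin) can be verified manually with catalog queries along these lines. This is a sketch against the standard catalogs, not the patch's actual SQL:

```sql
-- Subscription relations not yet in 'r' (ready) state;
-- this should return zero rows before attempting the upgrade.
SELECT s.subname, sr.srrelid::regclass AS relation, sr.srsubstate
FROM pg_subscription_rel sr
JOIN pg_subscription s ON s.oid = sr.srsubid
WHERE sr.srsubstate <> 'r';

-- Replication origins still lacking a valid remote LSN
-- (subscription origins are named pg_<subscription oid>).
SELECT external_id, remote_lsn
FROM pg_replication_origin_status
WHERE remote_lsn IS NULL OR remote_lsn = '0/0';
```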
{
"msg_contents": "On Wed, Mar 1, 2023 at 3:55 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Mar 01, 2023 at 11:51:49AM +0530, Amit Kapila wrote:\n> > On Tue, Feb 28, 2023 at 10:18 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > Well, as I mentioned I'm *not* interested in a logical-replication-only\n> > > scenario. Logical replication is nice but it will always be less efficient\n> > > than physical replication, and some workloads also don't really play well with\n> > > it. So while it can be a huge asset in some cases I'm for now looking at\n> > > leveraging logical replication for the purpose of major upgrade only for a\n> > > physical replication cluster, so the publications and subscriptions are only\n> > > temporary and trashed after use.\n> > >\n> > > That being said I was only saying that if I had to do a major upgrade of a\n> > > logical replication cluster this is probably how I would try to do it, to\n> > > minimize downtime, even if there are probably *a lot* difficulties to\n> > > overcome.\n> > >\n> >\n> > Okay, but it would be better if you list out your detailed steps. It\n> > would be useful to support the new mechanism in this area if others\n> > also find your steps to upgrade useful.\n>\n> Sure. 
Here are the overly detailed steps:\n>\n> 1) setup a normal physical replication cluster (pg_basebackup, restoring PITR,\n> whatever), let's call the primary node \"A\" and replica node \"B\"\n> 2) ensure WAL level is \"logical\" on the primary node A\n> 3) create a logical replication slot on every (connectable) database (or just\n> the one you're interested in if you don't want to preserve everything) on A\n> 4) create a FOR ALL TABLE publication (again for every databases or just the\n> one you're interested in)\n> 5) wait for replication to be reasonably if not entirely up to date\n> 6) promote the standby node B\n> 7) retrieve the promotion LSN (from the XXXXXXXX.history file,\n> pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn()...)\n> 8) call pg_replication_slot_advance() with that LSN for all previously created\n> logical replication slots on A\n> 9) create a normal subscription on all wanted databases on the promoted node\n> 10) wait for it to catchup if needed on B\n> 12) stop the node B\n> 13) run pg_upgrade on B, creating the new node C\n> 14) start C, run the global ANALYZE and any sanity check needed (hopefully you\n> would have validated that your application is compatible with that new\n> version before this point)\n\nI might be missing something but is there any reason why you created a\nsubscription before pg_upgrade?\n\nSteps like doing pg_upgrade, then creating missing tables, and then\ncreating a subscription (with copy_data = false) could be an\nalternative way to support upgrading the server from the physical\nstandby?\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 23 Mar 2023 16:27:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Hi,\n\nOn Thu, Mar 23, 2023 at 04:27:28PM +0900, Masahiko Sawada wrote:\n>\n> I might be missing something but is there any reason why you created a\n> subscription before pg_upgrade?\n>\n> Steps like doing pg_upgrade, then creating missing tables, and then\n> creating a subscription (with copy_data = false) could be an\n> alternative way to support upgrading the server from the physical\n> standby?\n\nAs I already answered to Nikolay, and explained in my very first email, yes\nit's possible to create the subscriptions after running pg_upgrade. I\npersonally prefer to do it first to make sure that the logical replication is\nactually functional, so I can still easily do a pg_rewind or something to fix\nthings without having to trash the newly built (and promoted) replica.\n\nBut that exact scenario is a corner case, as in any other scenario pg_upgrade\nleaves the subscription in an unrecoverable state, where you have to truncate\nall the underlying tables first and start from scratch doing an initial sync.\nThis kind of defeats the purpose of pg_upgrade.\n\n\n",
"msg_date": "Thu, 23 Mar 2023 15:41:54 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
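[Editor's note] Either ordering ends with the same resume sequence on the upgraded node: re-enable the subscription (step 15 of the scenario) and refresh it without an initial copy (step 17). A sketch, using the hypothetical names from earlier in the thread:

```sql
-- On upgraded node C, per database: resume logical replication.
ALTER SUBSCRIPTION db_xxx_subscription ENABLE;

-- If tables were added to the publication in the meantime, pick them up
-- without triggering a full table sync (the tables must already exist
-- locally with matching definitions):
ALTER SUBSCRIPTION db_xxx_subscription REFRESH PUBLICATION
  WITH (copy_data = false);
```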
{
"msg_contents": "Hi,\n\nOn Thu, Mar 09, 2023 at 04:34:56PM +0800, Julien Rouhaud wrote:\n> \n> Yeah I agree. I added support to also preserve the subscription's replication\n> origin information, a new --preserve-subscription-state (better naming welcome)\n> documented option for pg_upgrade to optionally ask for this new mode, and a\n> similar (but undocumented) option for pg_dump that only works with\n> --binary-upgrade and added a check in pg_upgrade that all relations are in 'r'\n> (ready) mode. Patch v2 attached.\n\nI'm attaching a v3 to fix a recent conflict with pg_dump due to a563c24c9574b7\n(Allow pg_dump to include/exclude child tables automatically). While at it I\nalso tried to improve the documentation, explaining how that option could be\nuseful and what is the drawback of not using it (linking to the pg_dump note\nabout the same) if you plan to reactivate subscription(s) after an upgrade.",
"msg_date": "Mon, 27 Mar 2023 16:49:55 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Julien,\n\n> I'm attaching a v3 to fix a recent conflict with pg_dump due to a563c24c9574b7\n> (Allow pg_dump to include/exclude child tables automatically).\n\nThank you for making the patch.\nFYI - it could not be applied due to recent commits. SUBOPT_* and attributes\nin SubscriptionInfo were added recently.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Thu, 6 Apr 2023 04:49:59 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "Hi,\n\nOn Thu, Apr 06, 2023 at 04:49:59AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> Dear Julien,\n>\n> > I'm attaching a v3 to fix a recent conflict with pg_dump due to a563c24c9574b7\n> > (Allow pg_dump to include/exclude child tables automatically).\n>\n> Thank you for making the patch.\n> FYI - it could not be applied due to recent commits. SUBOPT_* and attributes\n> in SubscriptionInfo were added recently.\n\nThanks a lot for warning me!\n\nWhile rebasing and testing the patch, I realized that I forgot to git-add a\nchunk, so I went ahead and added some minimal TAP tests to make sure that the\nfeature and various checks work as expected, also demonstrating that you can\nsafely resume after running pg_upgrade a logical replication setup where only\nsome of the tables are added to a publication, where new rows and new tables\nare added to the publication while pg_upgrade is running (for the new table you\nobviously need to make sure that the same relation exists on the subscriber side\nbut that's orthogonal to this patch).\n\nWhile doing so, I also realized that the subscription's underlying replication\norigin remote LSN is only set after some activity is seen *after* the initial\nsync, so I also added a new check in pg_upgrade to make sure that all remote\norigins tied to a subscription have a valid remote_lsn when the new option is\nused. Documentation is updated to cover that, same for the TAP tests.\n\nv4 attached.",
"msg_date": "Fri, 7 Apr 2023 10:28:02 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Julien,\n\nThank you for updating the patch. I checked yours.\nFollowings are general or non-minor questions:\n\n1.\nFeature freeze for PG16 has already come. So I think there is no reason to rush\nmaking the patch. Based on above, could you allow to upgrade while synchronizing\ndata? Personally it can be added as 0002 patch which extends the feature. Or\nhave you already found any problem?\n\n2.\nI have a questions about the SQL interface:\n\nALTER SUBSCRIPTION name ADD TABLE (relid = XYZ, state = 'x' [, lsn = 'X/Y'])\n\nHere the oid of the table is directly specified, but is it really kept between\nold and new node? Similar command ALTER PUBLICATION requires the name of table,\nnot the oid.\n\n3.\nCurrently getSubscriptionRels() is called from the getSubscriptions(), but I could\nnot find the reason why we must do like that. Other functions like\ngetPublicationTables() is directly called from getSchemaData(), so they should\nbe followed. Additionaly, I found two problems.\n\n* Only tables that to be dumped should be included. See getPublicationTables().\n* dropStmt for subscription relations seems not to be needed.\n* Maybe security label and comments should be also dumped.\n\nFollowings are minor comments.\n\n\n4. parse_subscription_options\n\n```\n+ opts->state = defGetString(defel)[0];\n```\n\n[0] is not needed.\n\n5. AlterSubscription\n\n```\n+ supported_opts = SUBOPT_RELID | SUBOPT_STATE | SUBOPT_LSN;\n+ parse_subscription_options(pstate, stmt->options,\n+ supported_opts, &opts);\n+\n+ /* relid and state should always be provided. */\n+ Assert(IsSet(opts.specified_opts, SUBOPT_RELID));\n+ Assert(IsSet(opts.specified_opts, SUBOPT_STATE));\n+\n```\n\nSUBOPT_LSN accepts \"none\" string, which means InvalidLSN. Isn't it better to\nreject it?\n\n6. 
dumpSubscription()\n\n```\n+ if (dopt->binary_upgrade && dopt->preserve_subscriptions &&\n+ subinfo->suboriginremotelsn)\n+ {\n+ appendPQExpBuffer(query, \", lsn = '%s'\", subinfo->suboriginremotelsn);\n+ }\n```\n\n{} is not needed.\n\n7. pg_dump.h\n\n```\n+/*\n+ * The SubRelInfo struct is used to represent subscription relation.\n+ */\n+typedef struct _SubRelInfo\n+{\n+ Oid srrelid;\n+ char srsubstate;\n+ char *srsublsn;\n+} SubRelInfo;\n```\n\nThis typedef must be added to typedefs.list.\n\n8. check_for_subscription_state\n\n```\n\t\t\tnb = atooid(PQgetvalue(res, 0, 0));\n\t\t\tif (nb != 0)\n\t\t\t{\n\t\t\t\tis_error = true;\n\t\t\t\tpg_log(PG_WARNING,\n\t\t\t\t\t \"\\nWARNING: %d subscription have invalid remote_lsn\",\n\t\t\t\t\t nb);\n\t\t\t}\n```\n\nI think there is no need to use atooid. Additionally, isn't it better to show the\nnames of subscriptions which have an invalid remote_lsn?\n\n```\n\t\tnb = atooid(PQgetvalue(res, 0, 0));\n\t\tif (nb != 0)\n\t\t{\n\t\t\tis_error = true;\n\t\t\tpg_log(PG_WARNING,\n\t\t\t\t \"\\nWARNING: database \\\"%s\\\" has %d subscription \"\n\t\t\t\t \"relations(s) in non-ready state\", active_db->db_name, nb);\n\t\t}\n```\n\nSame as above.\n\n9. parseCommandLine\n\n```\n+ user_opts.preserve_subscriptions = false;\n```\n\nI think this initialization is not needed because it is the default.\n\nAnd maybe you forgot to run pgindent.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Wed, 12 Apr 2023 09:48:15 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "Here are some review comments for patch v4-0001 (not the test code)\n\n(There are some overlaps here with what Kuroda-san already posted\nyesterday because we were looking at the same patch code. Also, a few\nof my comments might become moot points if refactoring will be done\naccording to Kuroda-san's \"general\" questions).\n\n======\nCommit message\n\n1.\nTo fix this problem, this patch teaches pg_dump in binary upgrade mode to emit\nadditional commands to be able to restore the content of pg_subscription_rel,\nand addition LSN parameter in the subscription creation to restore the\nunderlying replication origin remote LSN. The LSN parameter is only accepted\nin CREATE SUBSCRIPTION in binary upgrade mode.\n\n~\n\nSUGGESTION\nTo fix this problem, this patch teaches pg_dump in binary upgrade mode\nto emit additional ALTER SUBSCRIPTION commands to facilitate restoring\nthe content of pg_subscription_rel, and provides an additional LSN\nparameter for CREATE SUBSCRIPTION to restore the underlying\nreplication origin remote LSN. The new ALTER SUBSCRIPTION syntax and\nnew LSN parameter are not exposed to the user -- they are only\naccepted in binary upgrade mode.\n\n======\nsrc/sgml/ref/pgupgrade.sgml\n\n2.\n+ <varlistentry>\n+ <term><option>--preserve-subscription-state</option></term>\n+ <listitem>\n+ <para>\n+ Fully preserve the logical subscription state if any. 
That includes\n+ the underlying replication origin with their remote LSN and the list of\n+ relations in each subscription so that replication can be simply\n+ resumed if the subscriptions are reactived.\n+ If that option isn't used, it is up to the user to reactivate the\n+ subscriptions in a suitable way; see the subscription part in <xref\n+ linkend=\"pg-dump-notes\"/> for more information.\n+ If this option is used and any of the subscription on the old cluster\n+ has an unknown <varname>remote_lsn</varname> (0/0), or has any relation\n+ in a state different from <literal>r</literal> (ready), the\n+ <application>pg_upgrade</application> run will error.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\n~\n\n2a.\n\"If that option isn't used\" --> \"If this option isn't used\"\n\n~\n\n2b.\nThe link renders strangely. It just says:\n\nSee the subscription part in the [section called \"Notes\"] for more information.\n\nMaybe the link part can be rewritten so that it renders more nicely,\nand also makes mention of pg_dump.\n\n~\n\n2c.\nMaybe it is more readable to have the \"isn't used\" and \"is used\" parts\nas separate paragraphs?\n\n~\n\n2d.\nTypo /reactived/reactivated/ ??\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n3.\n+#define SUBOPT_RELID 0x00008000\n+#define SUBOPT_STATE 0x00010000\n\nMaybe 'SUBOPT_RELSTATE' is a better name for this per-relation state option?\n\n~~~\n\n4. SubOpts\n\n+ Oid relid;\n+ char state;\n } SubOpts;\n\n(similar to #3)\n\nMaybe 'relstate' is a better name for this per-relation state?\n\n~~~\n\n5. parse_subscription_options\n\n+ else if (IsSet(supported_opts, SUBOPT_STATE) &&\n+ strcmp(defel->defname, \"state\") == 0)\n+ {\n\n(similar to #3)\n\nMaybe call this option \"relstate\".\n\n~\n\n6.\n+ if (strlen(state_str) != 1)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"invalid relation state used\")));\n\nIIUC this syntax is not supposed to be reachable by user input. 
Maybe\nthere is some merit in making the errors similar looking to the normal\noptions, but OTOH it could also be misleading.\n\nThis might as well just be: Assert(strlen(state_str) == 1 &&\n*state_str == SUBREL_STATE_READY);\nor even simply: Assert(IsBinaryUpgrade);\n\n~~~\n\n7. CreateSubscription\n\n+ if(IsBinaryUpgrade)\n+ supported_opts |= SUBOPT_LSN;\n parse_subscription_options(pstate, stmt->options, supported_opts, &opts);\n\n7a.\nMissing whitespace after the \"if\".\n\n~\n\n7b.\nI wonder if this was deserving of a comment something like \"The LSN\noption is for internal use only\"...\n\n~~~\n\n8. CreateSubscription\n\n+ originid = replorigin_create(originname);\n+\n+ if (IsBinaryUpgrade && IsSet(opts.lsn, SUBOPT_LSN))\n+ replorigin_advance(originid, opts.lsn, InvalidXLogRecPtr,\n+ false /* backward */ ,\n+ false /* WAL log */ );\n\nI think the 'IsBinaryUpgrade' check is redundant here because\nSUBOPT_LSN is not possible to be set unless that is true anyhow.\n\n~~~\n\n9. AlterSubscription\n\n+ AddSubscriptionRelState(subid, opts.relid, opts.state,\n+ opts.lsn);\n\nThis line wrapping of AddSubscriptionRelState seems unnecessary.\n\n======\nsrc/bin/pg_dump/pg_backup.h\n\n10.\n+\n+ bool preserve_subscriptions;\n } DumpOptions;\n\n\nMaybe name this field \"preserve_subscription_state\" for consistency\nwith the option name.\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n11. 
dumpSubscription\n\n if (subinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)\n+ {\n+ for (i = 0; i < subinfo->nrels; i++)\n+ {\n+ appendPQExpBuffer(query, \"\\nALTER SUBSCRIPTION %s ADD TABLE \"\n+ \"(relid = %u, state = '%c'\",\n+ qsubname,\n+ subinfo->subrels[i].srrelid,\n+ subinfo->subrels[i].srsubstate);\n+\n+ if (subinfo->subrels[i].srsublsn[0] != '\\0')\n+ appendPQExpBuffer(query, \", LSN = '%s'\",\n+ subinfo->subrels[i].srsublsn);\n+\n+ appendPQExpBufferStr(query, \");\");\n+ }\n+\n\nMaybe I misunderstood something -- Shouldn't this new ALTER\nSUBSCRIPTION TABLE cmd only be happening when the option\ndopt->preserve_subscriptions is true?\n\n======\nsrc/bin/pg_dump/pg_dump.h\n\n12. SubRelInfo\n\n+/*\n+ * The SubRelInfo struct is used to represent subscription relation.\n+ */\n+typedef struct _SubRelInfo\n+{\n+ Oid srrelid;\n+ char srsubstate;\n+ char *srsublsn;\n+} SubRelInfo;\n+\n\n12a.\n\"represent subscription relation\" --> \"represent a subscription relation\"\n\n~\n\n12b.\nShould include the indent file typedefs.list in the patch, and add this\nnew typedef to it.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n13. check_for_subscription_state\n\n+/*\n+ * check_for_subscription_state()\n+ *\n+ * Verify that all subscriptions have a valid remote_lsn and doesn't contain\n+ * any table in a state different than ready.\n+ */\n+static void\n+check_for_subscription_state(ClusterInfo *cluster)\n\nSUGGESTION\nVerify that all subscriptions have a valid remote_lsn and do not\ncontain any tables with srsubstate other than READY ('r').\n\n~~~\n\n14.\n+ /* No subscription before pg10. */\n+ if (GET_MAJOR_VERSION(cluster->major_version < 1000))\n+ return;\n\n14a.\nThe existing checking code seems slightly different to this because\nthe other check_XXX calls are guarded by the GET_MAJOR_VERSION before\nbeing called.\n\n~\n\n14b.\nFurthermore, I was confused about the combination when the old cluster is\n< PG10 and user_opts.preserve_subscriptions is true. 
Since this is just a return\n(not an error) won't the subsequent pg_dump still attempt to use that\noption (--preserve-subscriptions) even though we already know it\ncannot work?\n\nWould it be better to give an ERROR saying -preserve-subscriptions is\nincompatible with the old PG version?\n\n~~~\n\n15.\n\n+ pg_log(PG_WARNING,\n+ \"\\nWARNING: %d subscription have invalid remote_lsn\",\n+ nb);\n\n15a.\n\"have invalid\" --> \"has invalid\"\n\n~\n\n15b.\nI guess it would be more useful if the message can include the names\nof the failing subscription and/or the relation that was in the wrong\nstate. Maybe that means moving all this checking logic into the\npg_dump code?\n\n======\nsrc/bin/pg_upgrade/option.c\n\n16. parseCommandLine\n\n user_opts.transfer_mode = TRANSFER_MODE_COPY;\n+ user_opts.preserve_subscriptions = false;\n\nThis initial assignment is not needed because user_opts is static.\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.h\n\n17.\n char *socketdir; /* directory to use for Unix sockets */\n+ bool preserve_subscriptions; /* fully transfer subscription state */\n } UserOpts;\n\nMaybe name this field 'preserve_subscription_state' to match the option.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 13 Apr 2023 12:42:05 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Hi,\n\nOn Wed, Apr 12, 2023 at 09:48:15AM +0000, Hayato Kuroda (Fujitsu) wrote:\n>\n> Thank you for updating the patch. I checked yours.\n> Followings are general or non-minor questions:\n\nThanks!\n\n> 1.\n> Feature freeze for PG16 has already come. So I think there is no reason to rush\n> making the patch. Based on above, could you allow to upgrade while synchronizing\n> data? Personally it can be added as 0002 patch which extends the feature. Or\n> have you already found any problem?\n\nI didn't really look into it, mostly because I don't think it's a sensible\nuse case. Logical sync of a relation is a heavy and time consuming operation\nthat requires to retain the xmin for quite some time. This can already lead to\nsome bad effect on the publisher, so adding a pg_upgrade in the middle of that\nwould just make things worse. Upgrading a subscriber is a rare event that has\nto be well planned (you need to test your application with the new version and\nso on), initial sync of relation shouldn't happen continually, so having to\nwait for the sync to be finished doesn't seem like a source of problem but\nmight instead avoid some for users who may not fully realize the implications.\n\nIf someone has a scenario where running pg_upgrade in the middle of a logical\nsync is mandatory I can try to look at it, but for now I just don't see a good\nreason to add even more complexity to this part of the code, especially since\nadding regression tests seems a bit troublesome.\n\n> 2.\n> I have a questions about the SQL interface:\n>\n> ALTER SUBSCRIPTION name ADD TABLE (relid = XYZ, state = 'x' [, lsn = 'X/Y'])\n>\n> Here the oid of the table is directly specified, but is it really kept between\n> old and new node?\n\nYes, pg_upgrade does need to preserve relation's oid.\n\n> Similar command ALTER PUBLICATION requires the name of table,\n> not the oid.\n\nYes, but those are user facing commands, while ALTER SUBSCRIPTION name ADD\nTABLE is only used internally 
for pg_upgrade. My goal is to make this command\na bit faster by avoiding an extra cache lookup each time, relying on pg_upgrade\nexisting requirements. If that's really a problem I can use the name instead\nbut I didn't hear any argument against it for now.\n\n> 3.\n> Currently getSubscriptionRels() is called from the getSubscriptions(), but I could\n> not find the reason why we must do like that. Other functions like\n> getPublicationTables() is directly called from getSchemaData(), so they should\n> be followed.\n\nI think you're right, doing a single getSubscriptionRels() rather than once\nper subscription should be more efficient.\n\n> Additionaly, I found two problems.\n>\n> * Only tables that to be dumped should be included. See getPublicationTables().\n\nThis is only done during pg_upgrade where all tables are dumped, so there\nshouldn't be any need to filter the list.\n\n> * dropStmt for subscription relations seems not to be needed.\n\nI'm not sure I understand this one. I agree that a dropStmt isn't needed, and\nthere's no such thing in the patch. Are you saying that you agree with it?\n\n> * Maybe security label and comments should be also dumped.\n\nSubscription's security labels and comments are already dumped (well should be\ndumped, AFAICS pg_dump was never taught to look at shared security label on\nobjects other than databases but still try to emit them, pg_dumpall instead\nhandles pg_authid and pg_tablespace), and we can't add security label or\ncomment on subscription's relations so I don't think this patch is missing\nsomething?\n\nSo unless I'm missing something it looks like shared security label handling is\npartly broken, but that's orthogonal to this patch.\n\n> Followings are minor comments.\n>\n>\n> 4. 
parse_subscription_options\n>\n> ```\n> + opts->state = defGetString(defel)[0];\n> ```\n>\n> [0] is not needed.\n\nIt still needs to be dereferenced, I personally find [0] a bit clearer in that\nsituation but I'm not opposed to a plain *.\n\n> 5. AlterSubscription\n>\n> ```\n> + supported_opts = SUBOPT_RELID | SUBOPT_STATE | SUBOPT_LSN;\n> + parse_subscription_options(pstate, stmt->options,\n> + supported_opts, &opts);\n> +\n> + /* relid and state should always be provided. */\n> + Assert(IsSet(opts.specified_opts, SUBOPT_RELID));\n> + Assert(IsSet(opts.specified_opts, SUBOPT_STATE));\n> +\n> ```\n>\n> SUBOPT_LSN accepts \"none\" string, which means InvalidLSN. Isn't it better to\n> reject it?\n\nIf you mean have an Assert for that I agree. It's not supposed to be used by\nusers so I don't think having non debug check is sensible, as any user provided\nvalue has no reason to be correct anyway.\n\n> 6. dumpSubscription()\n>\n> ```\n> + if (dopt->binary_upgrade && dopt->preserve_subscriptions &&\n> + subinfo->suboriginremotelsn)\n> + {\n> + appendPQExpBuffer(query, \", lsn = '%s'\", subinfo->suboriginremotelsn);\n> + }\n> ```\n>\n> {} is not needed.\n\nYes, but the condition being on two lines it makes it more readable. I think a\nlot of code uses curly braces in similar case already.\n\n> 7. pg_dump.h\n>\n> ```\n> +/*\n> + * The SubRelInfo struct is used to represent subscription relation.\n> + */\n> +typedef struct _SubRelInfo\n> +{\n> + Oid srrelid;\n> + char srsubstate;\n> + char *srsublsn;\n> +} SubRelInfo;\n> ```\n>\n> This typedef must be added to typedefs.list.\n\nRight!\n\n> 8. check_for_subscription_state\n>\n> ```\n> \t\t\tnb = atooid(PQgetvalue(res, 0, 0));\n> \t\t\tif (nb != 0)\n> \t\t\t{\n> \t\t\t\tis_error = true;\n> \t\t\t\tpg_log(PG_WARNING,\n> \t\t\t\t\t \"\\nWARNING: %d subscription have invalid remote_lsn\",\n> \t\t\t\t\t nb);\n> \t\t\t}\n> ```\n>\n> I think no need to use atooid. 
Additionaly, isn't it better to show the name of\n> subscriptions which have invalid remote_lsn?\n\nAgreed.\n\n> ```\n> \t\tnb = atooid(PQgetvalue(res, 0, 0));\n> \t\tif (nb != 0)\n> \t\t{\n> \t\t\tis_error = true;\n> \t\t\tpg_log(PG_WARNING,\n> \t\t\t\t \"\\nWARNING: database \\\"%s\\\" has %d subscription \"\n> \t\t\t\t \"relations(s) in non-ready state\", active_db->db_name, nb);\n> \t\t}\n> ```\n>\n> Same as above.\n\nAgreed.\n\n> 9. parseCommandLine\n>\n> ```\n> + user_opts.preserve_subscriptions = false;\n> ```\n>\n> I think this initialization is not needed because it is default.\n\nIt's not strictly needed because of C rules but I think it doesn't really hurt\nto make it explicit and not have to remember what the standard says.\n\n> And maybe you missed to run pgindent.\n\nI indeed haven't. There will probably be a global pgindent done soon so I will\ndo one for this patch afterwards.\n\n\n",
"msg_date": "Thu, 13 Apr 2023 10:51:10 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Here are some review comments for the v4-0001 test code only.\n\n======\n\n1.\nAll the comments look alike, so it is hard to know what is going on.\nIf each of the main test parts could be highlighted then the test code\nwould be easier to read IMO.\n\nSomething like below:\n\n# ==========\n# TEST CASE: Check that pg_upgrade refuses to upgrade a subscription\nwhen the replication origin is not set.\n#\n# replication origin's remote_lsn isn't set if data was not replicated after the\n# initial sync.\n\n...\n\n# ==========\n# TEST CASE: Check that pg_upgrade refuses to upgrade a subscription\nwith non-ready tables.\n\n...\n\n# ==========\n# TEST CASE: Check that pg_upgrade works when all subscription tables are ready.\n\n...\n\n# ==========\n# TEST CASE: Change the publication while the old subscriber is offline.\n#\n# Stop the old subscriber, insert a row in each table while it's down, and add\n# t2 to the publication.\n\n...\n\n# ==========\n# TEST CASE: Enable the subscription.\n\n...\n\n# ==========\n# TEST CASE: Refresh the subscription to get the newly published table t2.\n#\n# Only the missing row on t2 show be replicated.\n\n~~~\n\n2.\n+# replication origin's remote_lsn isn't set if not data is replicated after the\n+# initial sync\n\nwording:\n/if not data is replicated/if data is not replicated/\n\n~~~\n\n3.\n# Make sure the replication origin is set\n\nI was not sure if all of the SELECT COUNT(*) checking is needed\nbecause it just seems normal pub/sub functionality. There is no\npg_upgrade happening, so really it seemed the purpose of this part was\nmainly to set the origin so that it will not be a blocker for\nready-state tests that follow this code. 
Maybe this can just be\nincorporated into the following test part.\n\n~~~\n\n4.\n# There should be no new replicated rows before enabling the subscription\n$result = $new_sub->safe_psql('postgres',\n \"SELECT count(*) FROM t1\");\nis ($result, qq(2), \"Table t1 should still have 2 rows on the new subscriber\");\n\n4a.\nTBH, I felt it might be easier to follow if the SQL was checking for\nWHERE (text = \"while old_sub is down\") etc, rather than just using\nSELECT COUNT(*), and then trusting the comments to describe what the\ndifferent counts mean.\n\n~\n\n4b.\nAll these messages like \"Table t1 should still have 2 rows on the new\nsubscriber\" don't seem very helpful. e.g. They are not saying anything\nabout WHAT this is testing or WHY it should still have 2 rows.\n\n~~~\n\n5.\n# Refresh the subscription, only the missing row on t2 show be replicated\n\n/show/should/\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Thu, 13 Apr 2023 15:26:56 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Apr 13, 2023 at 10:51:10AM +0800, Julien Rouhaud wrote:\n>\n> On Wed, Apr 12, 2023 at 09:48:15AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> >\n> > 5. AlterSubscription\n> >\n> > ```\n> > + supported_opts = SUBOPT_RELID | SUBOPT_STATE | SUBOPT_LSN;\n> > + parse_subscription_options(pstate, stmt->options,\n> > + supported_opts, &opts);\n> > +\n> > + /* relid and state should always be provided. */\n> > + Assert(IsSet(opts.specified_opts, SUBOPT_RELID));\n> > + Assert(IsSet(opts.specified_opts, SUBOPT_STATE));\n> > +\n> > ```\n> >\n> > SUBOPT_LSN accepts \"none\" string, which means InvalidLSN. Isn't it better to\n> > reject it?\n>\n> If you mean have an Assert for that I agree. It's not supposed to be used by\n> users so I don't think having non debug check is sensible, as any user provided\n> value has no reason to be correct anyway.\n\nAfter looking at the code I remember that I kept the lsn optional in ALTER\nSUBSCRIPTION name ADD TABLE command processing. For now pg_upgrade checks that\nall subscriptions have a valid remote_lsn so there should indeed always be a\nvalue different from InvalidLSN/none specified, but it's still unclear to me\nwhether this check will eventually be weakened or not, so for now I think it's\nbetter to keep AlterSubscription accept this case, here and in all other code\npaths.\n\nIf there's a hard objection I will just make the lsn mandatory.\n\n> > 9. parseCommandLine\n> >\n> > ```\n> > + user_opts.preserve_subscriptions = false;\n> > ```\n> >\n> > I think this initialization is not needed because it is default.\n>\n> It's not strictly needed because of C rules but I think it doesn't really hurt\n> to make it explicit and not have to remember what the standard says.\n\nSo I looked at nearby code and other option do rely on zero-initialized global\nvariables, so I agree that this initialization should be removed.\n\n\n",
"msg_date": "Thu, 13 Apr 2023 16:45:57 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Hi,\n\nOn Thu, Apr 13, 2023 at 12:42:05PM +1000, Peter Smith wrote:\n> Here are some review comments for patch v4-0001 (not the test code)\n\nThanks!\n\n>\n> (There are some overlaps here with what Kuroda-san already posted\n> yesterday because we were looking at the same patch code. Also, a few\n> of my comments might become moot points if refactoring will be done\n> according to Kuroda-san's \"general\" questions).\n\nOk, for the record, the parts I don't reply to are things I fully agree with\nand already changed locally.\n\n> ======\n> Commit message\n>\n> 1.\n> To fix this problem, this patch teaches pg_dump in binary upgrade mode to emit\n> additional commands to be able to restore the content of pg_subscription_rel,\n> and addition LSN parameter in the subscription creation to restore the\n> underlying replication origin remote LSN. The LSN parameter is only accepted\n> in CREATE SUBSCRIPTION in binary upgrade mode.\n>\n> ~\n>\n> SUGGESTION\n> To fix this problem, this patch teaches pg_dump in binary upgrade mode\n> to emit additional ALTER SUBSCRIPTION commands to facilitate restoring\n> the content of pg_subscription_rel, and provides an additional LSN\n> parameter for CREATE SUBSCRIPTION to restore the underlying\n> replication origin remote LSN. The new ALTER SUBSCRIPTION syntax and\n> new LSN parameter are not exposed to the user -- they are only\n> accepted in binary upgrade mode.\n\nThanks, I eventually adapted a bit more the suggested wording:\n\nTo fix this problem, this patch teaches pg_dump in binary upgrade mode to emit\nadditional ALTER SUBSCRIPTION subcommands that will restore the content of\npg_subscription_rel, and also provides an additional LSN parameter for CREATE\nSUBSCRIPTION to restore the underlying replication origin remote LSN. 
The new\nALTER SUBSCRIPTION subcommand and the new LSN parameter are not exposed to\nusers and only accepted in binary upgrade mode.\n\nThe new ALTER SUBSCRIPTION subcommand has the following syntax:\n\n> 2b.\n> The link renders strangely. It just says:\n>\n> See the subscription part in the [section called \"Notes\"] for more information.\n>\n> Maybe the link part can be rewritten so that it renders more nicely,\n> and also makes mention of pg_dump.\n\nYes I saw that. I didn't try to look at it yet but that's indeed what I wanted\nto do eventually.\n\n> ======\n> src/backend/commands/subscriptioncmds.c\n>\n> 3.\n> +#define SUBOPT_RELID 0x00008000\n> +#define SUBOPT_STATE 0x00010000\n>\n> Maybe 'SUBOPT_RELSTATE' is a better name for this per-relation state option?\n\nI looked at it but part of the existing code is already using state as a\nvariable name, to be consistent with pg_subscription_rel.srsubstate. I think\nit's better to use the same pattern in this patch.\n\n> 6.\n> + if (strlen(state_str) != 1)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"invalid relation state used\")));\n>\n> IIUC this is syntax not supposed to be reachable by user input. Maybe\n> there is some merit in making the errors similar looking to the normal\n> options, but OTOH it could also be misleading.\n\nIt doesn't cost much and may be helpful for debugging so I will use error\nmessages similar to the user-facing ones.\n\n> This might as well just be: Assert(strlen(state_str) == 1 &&\n> *state_str == SUBREL_STATE_READY);\n> or even simply: Assert(IsBinaryUpgrade);\n\nAs I mentioned in a previous email, it's still unclear to me whether the\nrestriction on the srsubstate will be weakened or not, so I prefer to keep that\npart of the code generic and have the restriction centralized in the pg_upgrade\ncheck.\n\nI added some Assert(IsBinaryUpgrade) in those code paths as it may not be\nevident in this place that it's a requirement.\n\n\n> 7. 
CreateSubscription\n>\n> + if(IsBinaryUpgrade)\n> + supported_opts |= SUBOPT_LSN;\n> parse_subscription_options(pstate, stmt->options, supported_opts, &opts);\n> 7b.\n> I wonder if this was deserving of a comment something like \"The LSN\n> option is for internal use only\"...\n\nI was thinking that being valid only for IsBinaryUpgrade would be enough?\n\n> 8. CreateSubscription\n>\n> + originid = replorigin_create(originname);\n> +\n> + if (IsBinaryUpgrade && IsSet(opts.lsn, SUBOPT_LSN))\n> + replorigin_advance(originid, opts.lsn, InvalidXLogRecPtr,\n> + false /* backward */ ,\n> + false /* WAL log */ );\n>\n> I think the 'IsBinaryUpgrade' check is redundant here because\n> SUBOPT_LSN is not possible to be set unless that is true anyhow.\n\nIt's indeed redundant for now, but it's also used as a safeguard if some code\nis changed. Maybe just having an assert(IsBinaryUpgrade) would be better\nthough.\n\nWhile looking at it I noticed that this code was never reached, as I should\nhave checked IsSet(opts.specified_opts, ...). I fixed that and added a TAP\ntest to make sure that the restored remote_lsn is the same as on the old\nsubscription node.\n\n> 9. AlterSubscription\n>\n> + AddSubscriptionRelState(subid, opts.relid, opts.state,\n> + opts.lsn);\n>\n> This line wrapping of AddSubscriptionRelState seems unnecessary.\n\nWithout it the line reaches 81 characters :(\n\n> ======\n> src/bin/pg_dump/pg_backup.h\n>\n> 10.\n> +\n> + bool preserve_subscriptions;\n> } DumpOptions;\n>\n>\n> Maybe name this field \"preserve_subscription_state\" for consistency\n> with the option name.\n\nThat's what I thought when I first wrote that code but I quickly had to use a\nshorter name to avoid bloating the line length everywhere.\n\n> ======\n> src/bin/pg_dump/pg_dump.c\n>\n> 11. 
dumpSubscription\n>\n> if (subinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)\n> + {\n> + for (i = 0; i < subinfo->nrels; i++)\n> + {\n> + appendPQExpBuffer(query, \"\\nALTER SUBSCRIPTION %s ADD TABLE \"\n> + \"(relid = %u, state = '%c'\",\n> + qsubname,\n> + subinfo->subrels[i].srrelid,\n> + subinfo->subrels[i].srsubstate);\n> +\n> + if (subinfo->subrels[i].srsublsn[0] != '\\0')\n> + appendPQExpBuffer(query, \", LSN = '%s'\",\n> + subinfo->subrels[i].srsublsn);\n> +\n> + appendPQExpBufferStr(query, \");\");\n> + }\n> +\n>\n> Maybe I misunderstood something -- Shouldn't this new ALTER\n> SUBSCRIPTION TABLE cmd only be happening when the option\n> dopt->preserve_subscriptions is true?\n\nIt indirectly is, as in that case subinfo->nrels is guaranteed to be 0. I just\ntried to keep the code simpler and avoid too many nested conditions.\n\n> 12b.\n> Should include the indent file typdefs.list in the patch, and add this\n> new typedef to it.\n\nFTR I checked and there wasn't too much noise when running pgindent on the\ntouched files, so I already locally added the new typedef and ran pgindent.\n\n> 14.\n> + /* No subscription before pg10. */\n> + if (GET_MAJOR_VERSION(cluster->major_version < 1000))\n> + return;\n>\n> 14a.\n> The existing checking code seems slightly different to this because\n> the other check_XXX calls are guarded by the GET_MAJOR_VERSION before\n> being called.\n\nNo opinion on that, so I moved all the checks to the caller side.\n\n\n> 14b.\n> Furthermore, I was confused about the combination when the < PG10 and\n> user_opts.preserve_subscriptions is true. Since this is just a return\n> (not an error) won't the subsequent pg_dump still attempt to use that\n> option (--preserve-subscriptions) even though we already know it\n> cannot work?\n\nWill it error out though? 
I haven't tried but I think it will just silently do\nnothing, which maybe isn't ideal, but may be somewhat expected if you try to\npreserve something that doesn't exist.\n\n> Would it be better to give an ERROR saying -preserve-subscriptions is\n> incompatible with the old PG version?\n\nI'm not opposed to adding some error, but I don't really know where it would\nbe suitable. Maybe we should explicitly error out in the same code path if the\npreserve subscription option is used with a pg10- source server?\n\n> 15b.\n> I guess it would be more useful if the message can include the names\n> of the failing subscription and/or the relation that was in the wrong\n> state. Maybe that means moving all this checking logic into the\n> pg_dump code?\n\nI think it's better to have the checks only once, so in pg_upgrade, but I'm not\nstrongly opposed to duplicating those tests if there's any complaint. In the\nmeantime I rephrased the warning to give the name of the problematic\nsubscription (but not the list of relations, as it's more likely to be a long\nlist and it's easy to check manually afterwards and/or wait for all syncs to\nfinish).\n\n\n",
"msg_date": "Thu, 13 Apr 2023 18:04:19 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Julien,\n\n> I didn't really look into it, mostly because I don't think it's a sensible\n> use case. Logical sync of a relation is a heavy and time consuming operation\n> that requires to retain the xmin for quite some time. This can already lead to\n> some bad effect on the publisher, so adding a pg_upgrade in the middle of that\n> would just make things worse. Upgrading a subscriber is a rare event that has\n> to be well planned (you need to test your application with the new version and\n> so on), initial sync of relation shouldn't happen continually, so having to\n> wait for the sync to be finished doesn't seem like a source of problem but\n> might instead avoid some for users who may not fully realize the implications.\n> \n> If someone has a scenario where running pg_upgrade in the middle of a logical\n> sync is mandatory I can try to look at it, but for now I just don't see a good\n> reason to add even more complexity to this part of the code, especially since\n> adding regression tests seems a bit troublesome.\n\nI do not have any scenarios that run pg_upgrade while synchronization is in\nprogress, because I agree that upgrading can be well planned. So it may be OK\nnot to add it in order to keep the patch simpler.\n\n> > Here the oid of the table is directly specified, but is it really kept between\n> > old and new node?\n> \n> Yes, pg_upgrade does need to preserve relation's oid.\n\nI confirmed and agreed. dumpTableSchema() dumps an additional function\npg_catalog.binary_upgrade_set_next_heap_pg_class_oid() before each CREATE TABLE\nstatement. The function forces the table to have the specified OID.\n\n> > Similar command ALTER PUBLICATION requires the name of table,\n> > not the oid.\n> \n> Yes, but those are user facing commands, while ALTER SUBSCRIPTION name\n> ADD\n> TABLE is only used internally for pg_upgrade. 
My goal is to make this command\n> a bit faster by avoiding an extra cache lookup each time, relying on pg_upgrade\n> existing requirements. If that's really a problem I can use the name instead\n> but I didn't hear any argument against it for now.\n\nOK, make sense.\n\n> \n> > 3.\n> > Currently getSubscriptionRels() is called from the getSubscriptions(), but I\n> could\n> > not find the reason why we must do like that. Other functions like\n> > getPublicationTables() is directly called from getSchemaData(), so they should\n> > be followed.\n> \n> I think you're right, doing a single getSubscriptionRels() rather than once\n> per subscription should be more efficient.\n\nYes, we do not have to divide reading pg_subscription_rel per subscriptions.\n\n> > Additionaly, I found two problems.\n> >\n> > * Only tables that to be dumped should be included. See getPublicationTables().\n> \n> This is only done during pg_upgrade where all tables are dumped, so there\n> shouldn't be any need to filter the list.\n> \n> > * dropStmt for subscription relations seems not to be needed.\n> \n> I'm not sure I understand this one. I agree that a dropStmt isn't needed, and\n> there's no such thing in the patch. Are you saying that you agree with it?\n\nSorry for unclear suggestion. I meant to say that we could keep current style even\nif getSubscriptionRels() is called separately. 
Your understanding which it is not\nneeded is right.\n\n> > * Maybe security label and comments should be also dumped.\n> \n> Subscription's security labels and comments are already dumped (well should be\n> dumped, AFAICS pg_dump was never taught to look at shared security label on\n> objects other than databases but still try to emit them, pg_dumpall instead\n> handles pg_authid and pg_tablespace), and we can't add security label or\n> comment on subscription's relations so I don't think this patch is missing\n> something?\n> \n> So unless I'm missing something it looks like shared security label handling is\n> partly broken, but that's orthogonal to this patch.\n> \n> > Followings are minor comments.\n> >\n> >\n> > 4. parse_subscription_options\n> >\n> > ```\n> > + opts->state = defGetString(defel)[0];\n> > ```\n> >\n> > [0] is not needed.\n> \n> It still needs to be dereferenced, I personally find [0] a bit clearer in that\n> situation but I'm not opposed to a plain *.\n\nSorry, I was confused. You are right.\n\n> > 5. AlterSubscription\n> >\n> > ```\n> > + supported_opts = SUBOPT_RELID |\n> SUBOPT_STATE | SUBOPT_LSN;\n> > + parse_subscription_options(pstate,\n> stmt->options,\n> > +\n> supported_opts, &opts);\n> > +\n> > + /* relid and state should always be\n> provided. */\n> > + Assert(IsSet(opts.specified_opts,\n> SUBOPT_RELID));\n> > + Assert(IsSet(opts.specified_opts,\n> SUBOPT_STATE));\n> > +\n> > ```\n> >\n> > SUBOPT_LSN accepts \"none\" string, which means InvalidLSN. Isn't it better to\n> > reject it?\n> \n> If you mean have an Assert for that I agree. It's not supposed to be used by\n> users so I don't think having non debug check is sensible, as any user provided\n> value has no reason to be correct anyway.\n\nYes, I meant to request to add an Assert. 
Maybe you can add:\nAssert(IsSet(opts.specified_opts, SUBOPT_LSN) && !XLogRecPtrIsInvalid(opts.lsn));\n\n>\nAfter looking at the code I remember that I kept the lsn optional in ALTER\nSUBSCRIPTION name ADD TABLE command processing. For now pg_upgrade checks that\nall subscriptions have a valid remote_lsn so there should indeed always be a\nvalue different from InvalidLSN/none specified, but it's still unclear to me\nwhether this check will eventually be weakened or not, so for now I think it's\nbetter to keep AlterSubscription accept this case, here and in all other code\npaths.\n\nIf there's a hard objection I will just make the lsn mandatory.\n>\n\nI have tested, but srsublsn became NULL if copy_data was specified as off.\nThis is because when copy_data is false, all tuples in pg_subscription_rel are filled\nwith state = 'r' and srsublsn = NULL, and tablesync workers will never start.\nSee CreateSubscription().\nDoesn't it mean that there is a possibility that the LSN option is not specified in\nALTER SUBSCRIPTION ADD TABLE?\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n",
"msg_date": "Fri, 14 Apr 2023 04:19:35 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Julien,\n\nI found a cfbot failure on macOS [1]. According to the log,\n\"SELECT count(*) FROM t2\" was executed before synchronization was done.\n\n```\n[09:24:21.018](0.132s) not ok 18 - Table t2 should now have 3 rows on the new subscriber\n```\n\nWith the patch present, wait_for_catchup() is executed after REFRESH, but\nit may not be sufficient because it does not check pg_subscription_rel.\nwait_for_subscription_sync() seems better for the purpose.\n\n\n[1]: https://cirrus-ci.com/task/6563827802701824\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Tue, 18 Apr 2023 01:40:51 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "Hi,\n\nOn Thu, Apr 13, 2023 at 03:26:56PM +1000, Peter Smith wrote:\n>\n> 1.\n> All the comments look alike, so it is hard to know what is going on.\n> If each of the main test parts could be highlighted then the test code\n> would be easier to read IMO.\n>\n> Something like below:\n> [...]\n\nI added a bit more comments about what is being tested. I'm not sure that a\nbig TEST CASE prefix is necessary, as it's not really multiple separated test\ncases and other stuff can be tested in between. Also AFAICT no other TAP test\ncurrently needs this kind of banner, even if they're testing more complex\nscenarios.\n\n> 2.\n> +# replication origin's remote_lsn isn't set if not data is replicated after the\n> +# initial sync\n>\n> wording:\n> /if not data is replicated/if data is not replicated/\n\nI actually mean \"if no data\", which is a bit different than what you suggest.\nFixed.\n\n> 3.\n> # Make sure the replication origin is set\n>\n> I was not sure if all of the SELECT COUNT(*) checking is needed\n> because it just seems normal pub/sub functionality. There is no\n> pg_upgrade happening, so really it seemed the purpose of this part was\n> mainly to set the origin so that it will not be a blocker for\n> ready-state tests that follow this code. 
Maybe this can just be\n> incorporated into the following test part.\n\nSince this patch is transferring internal details about subscriptions I prefer\nto be thorough about what is tested, when data is actually being replicated and\nso on, so that if something is broken (relation added to the wrong subscription,\nwrong oid or something) it should immediately show what's happening.\n\n> 4a.\n> TBH, I felt it might be easier to follow if the SQL was checking for\n> WHERE (text = \"while old_sub is down\") etc, rather than just using\n> SELECT COUNT(*), and then trusting the comments to describe what the\n> different counts mean.\n\nI prefer the plain count as it's a simple way to make sure that the state is\nexactly what's wanted. If for some reason the patch leads to a previous row\nbeing replicated again, such a test wouldn't reveal it. Sure, it could be\nbroken enough so that one old row is replicated twice and the new row isn't\nreplicated, but it seems so unlikely that I don't think that testing the whole\ntable content is necessary.\n\n> 4b.\n> All these messages like \"Table t1 should still have 2 rows on the new\n> subscriber\" don't seem very helpful. e.g. They are not saying anything\n> about WHAT this is testing or WHY it should still have 2 rows.\n\nI don't think that those messages are supposed to say what or why something is\ntested, just give a quick context / reference on the test in case it's broken.\nThe comments are there to explain in more detail what is tested and/or why.\n\n> 5.\n> # Refresh the subscription, only the missing row on t2 show be replicated\n>\n> /show/should/\n\nFixed.\n\n\n",
"msg_date": "Mon, 24 Apr 2023 14:19:15 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Hi,\n\nOn Fri, Apr 14, 2023 at 04:19:35AM +0000, Hayato Kuroda (Fujitsu) wrote:\n>\n> I have tested, but srsublsn became NULL if copy_data was specified as off.\n> This is because when copy_data is false, all tuples in pg_subscription_rels are filled\n> as state = 'r' and srsublsn = NULL, and tablesync workers will never boot.\n> See CreateSubscription().\n> Doesn't it mean that there is a possibility that LSN option is not specified while\n> ALTER SUBSCRIPTION ADD TABLE?\n\nIt shouldn't be the case for now, as pg_upgrade will check first if there's an\ninvalid remote_lsn and refuse to proceed if that's the case. Also, the\nremote_lsn should be set as soon as some data is replicated, so unless you add\na table that's never modified to a publication you should be able to run\npg_upgrade at some point, once there's replicated DML on such a table.\n\nI'm personally fine with the current restrictions, but I don't really use\nlogical replication in any project so maybe I'm not objective enough. For now\nI'd rather keep things as-is, and later improve on it if some people want to\nlift such restrictions (and such restrictions can actually be lifted).\n\n\n",
"msg_date": "Mon, 24 Apr 2023 14:50:32 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Hi,\n\nOn Tue, Apr 18, 2023 at 01:40:51AM +0000, Hayato Kuroda (Fujitsu) wrote:\n>\n> I found a cfbot failure on macOS [1]. According to the log,\n> \"SELECT count(*) FROM t2\" was executed before synchronization was done.\n>\n> ```\n> [09:24:21.018](0.132s) not ok 18 - Table t2 should now have 3 rows on the new subscriber\n> ```\n>\n> With the patch present, wait_for_catchup() is executed after REFRESH, but\n> it may not be sufficient because it does not check pg_subscription_rel.\n> wait_for_subscription_sync() seems better for the purpose.\n\nFixed, thanks!\n\nv5 attached with all previously mentioned fixes.",
"msg_date": "Mon, 24 Apr 2023 15:22:24 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Julien,\n\nThank you for updating the patch! Here are my comments.\n\n01. documentation\n\nIn this page the steps to upgrade a server with pg_upgrade are listed. Should we\nalso write about the subscriber? IIUC, it is sufficient to just add to \"Run pg_upgrade\",\nlike \"Apart from a streaming replication standby, a subscriber node can be upgraded\nvia pg_upgrade. In that case we strongly recommend using --preserve-subscription-state\".\n\n02. AlterSubscription\n\nI agreed that the oid must be preserved between nodes, but I'm still afraid that\nthe given oid is unconditionally trusted and added to pg_subscription_rel.\nI think we can check the existence of the relation via SearchSysCache1(RELOID,\nObjectIdGetDatum(relid)). Of course the check is optional, so it should be\nexecuted only when USE_ASSERT_CHECKING is on. Thoughts?\n\n03. main\n\nCurrently --preserve-subscription-state and --no-subscriptions can be used\ntogether, but the situation is quite unnatural. Shouldn't we make them mutually\nexclusive?\n\n04. getSubscriptionTables\n\n\n```\n+ SubRelInfo *rels = NULL;\n```\n\nThe variable is used only inside the loop, so the definition should also be moved.\n\n05. getSubscriptionTables\n\n```\n+ nrels = atooid(PQgetvalue(res, i, i_nrels));\n```\n\natoi() should be used instead of atooid().\n\n06. getSubscriptionTables\n\n```\n+ subinfo = findSubscriptionByOid(cur_srsubid);\n+\n+ nrels = atooid(PQgetvalue(res, i, i_nrels));\n+ rels = pg_malloc(nrels * sizeof(SubRelInfo));\n+\n+ subinfo->subrels = rels;\n+ subinfo->nrels = nrels;\n```\n\nMaybe it never occurs, but findSubscriptionByOid() can return NULL. In that case\naccessing its attributes will lead to a segfault. Some handling is needed.\n\n07. dumpSubscription\n\nHmm, SubRelInfos are still dumped in dumpSubscription(). I think this style\nbreaks the conventions of pg_dump. I think another dump function is needed. Please\nsee dumpPublicationTable() and dumpPublicationNamespace(). 
If you have a reason\nto use this style, a comment describing it is needed.\n\n08. _SubRelInfo\n\nIf you address the above comment, DumpableObject must be added as a new attribute.\n\n09. check_for_subscription_state\n\n```\n+ for (int i = 0; i < ntup; i++)\n+ {\n+ is_error = true;\n+ pg_log(PG_WARNING,\n+ \"\\nWARNING: subscription \\\"%s\\\" has an invalid remote_lsn\",\n+ PQgetvalue(res, 0, 0));\n+ }\n```\n\nThe second argument should be i, so that the correct subscription name is reported\nwhen there are two or more.\n\n10. 003_subscription.pl\n\n```\n$old_sub->wait_for_subscription_sync($publisher, 'sub');\n\nmy $result = $old_sub->safe_psql('postgres',\n \"SELECT COUNT(*) FROM pg_subscription_rel WHERE srsubstate != 'r'\");\nis ($result, qq(0), \"All tables in pg_subscription_rel should be in ready state\");\n```\n\nI think this may cause a timing issue, because the SELECT may\nbe executed before srsubstate is changed from 's' to 'r'. Maybe poll_query_until()\ncan be used instead.\n\n11. 003_subscription.pl\n\n```\ncommand_ok(\n\t[\n\t\t'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n\t\t'-D', $new_sub->data_dir, '-b', $bindir,\n\t\t'-B', $bindir, '-s', $new_sub->host,\n\t\t'-p', $old_sub->port, '-P', $new_sub->port,\n\t\t$mode,\n\t\t'--preserve-subscription-state',\n\t\t'--check',\n\t],\n\t'run of pg_upgrade --check for old instance with correct sub rel');\n```\n\nMissing check of pg_upgrade_output.d?\n\nAnd maybe you forgot to run pgperltidy.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Thu, 27 Apr 2023 07:48:02 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "Here are some review comments for the v5-0001 patch code.\n\n======\nGeneral\n\n1. ALTER SUBSCRIPTION name ADD TABLE (relid = XYZ, state = 'x' [, lsn = 'X/Y'])\n\nI was a bit confused by this relation 'state' mentioned in multiple\nplaces. IIUC the pg_upgrade logic is going to reject anything with a\nnon-READY (not 'r') state anyhow, so what is the point of having all\nthe extra grammar/parse_subscription_options etc to handle setting the\nstate when the only possible value must be 'r'?\n\n~~~\n\n2. state V relstate\n\nI still feel code readability suffers a bit by calling some fields/vars\na generic 'state' instead of the more descriptive 'relstate'. Maybe\nit's just me.\n\nPreviously commented same (see [1]#3, #4, #5)\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n3.\n+ <para>\n+ Fully preserve the logical subscription state if any. That includes\n+ the underlying replication origin with their remote LSN and the list of\n+ relations in each subscription so that replication can be simply\n+ resumed if the subscriptions are reactivated.\n+ </para>\n\nI think the \"if any\" part is not necessary. If you remove those words,\nthen the rest of the sentence can be simplified.\n\nSUGGESTION\nFully preserve the logical subscription state, which includes the\nunderlying replication origin's remote LSN, and the list of relations\nin each subscription. 
This allows replication to simply resume when\nthe subscriptions are reactivated.\n\n~~~\n\n4.\n+ <para>\n+ If this option isn't used, it is up to the user to reactivate the\n+ subscriptions in a suitable way; see the subscription part in <xref\n+ linkend=\"pg-dump-notes\"/> for more information.\n+ </para>\n\nThe link still renders strangely as previously reported (see [1]#2b).\n\n~~~\n\n5.\n+ <para>\n+ If this option is used and any of the subscription on the old cluster\n+ has an unknown <varname>remote_lsn</varname> (0/0), or has any relation\n+ in a state different from <literal>r</literal> (ready), the\n+ <application>pg_upgrade</application> run will error.\n+ </para>\n\n5a.\n/subscription/subscriptions/\n\n~\n\n5b\n\"has any relation in a state different from r\" --> \"has any relation\nwith state other than r\"\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n6.\n+ if (strlen(state_str) != 1)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"invalid relation state: %s\", state_str)));\n\nIs this relation state validation overly simplistic, by only checking\nfor length 1? Shouldn't this just be asserting the relstate must be\n'r'?\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n7. getSubscriptionTables\n\n+/*\n+ * getSubscriptionTables\n+ * get information about the given subscription's relations\n+ */\n+void\n+getSubscriptionTables(Archive *fout)\n+{\n+ SubscriptionInfo *subinfo;\n+ SubRelInfo *rels = NULL;\n+ PQExpBuffer query;\n+ PGresult *res;\n+ int i_srsubid;\n+ int i_srrelid;\n+ int i_srsubstate;\n+ int i_srsublsn;\n+ int i_nrels;\n+ int i,\n+ cur_rel = 0,\n+ ntups,\n+ last_srsubid = InvalidOid;\n\nWhy some above are single int declarations and some are compound int\ndeclarations? 
Why not make them all consistent?\n\n~\n\n8.\n+ appendPQExpBuffer(query, \"SELECT srsubid, srrelid, srsubstate, srsublsn,\"\n+ \" count(*) OVER (PARTITION BY srsubid) AS nrels\"\n+ \" FROM pg_subscription_rel\"\n+ \" ORDER BY srsubid\");\n\nShould this SQL be schema-qualified like pg_catalog.pg_subscription_rel?\n\n~\n\n9.\n+ for (i = 0; i < ntups; i++)\n+ {\n+ int cur_srsubid = atooid(PQgetvalue(res, i, i_srsubid));\n\nShould 'cur_srsubid' be declared Oid to match the atooid?\n\n~~~\n\n10. getSubscriptions\n\n+ if (PQgetisnull(res, i, i_suboriginremotelsn))\n+ subinfo[i].suboriginremotelsn = NULL;\n+ else\n+ subinfo[i].suboriginremotelsn =\n+ pg_strdup(PQgetvalue(res, i, i_suboriginremotelsn));\n+\n+ /*\n+ * For now assume there's no relation associated with the\n+ * subscription. Later code might update this field and allocate\n+ * subrels as needed.\n+ */\n+ subinfo[i].nrels = 0;\n\nThe wording \"For now assume there's no\" kind of gives an ambiguous\ninterpretation for this comment. IMO it sounds like this is the\n\"current\" logic but some future PG version may behave differently - I\ndon't think that is the intended meaning at all.\n\nSUGGESTION.\nHere we just initialize nrels to say there are 0 relations associated\nwith the subscription. If necessary, subsequent logic will update this\nfield and allocate the subrels.\n\n~~~\n\n11. 
dumpSubscription\n\n+ for (i = 0; i < subinfo->nrels; i++)\n+ {\n+ appendPQExpBuffer(query, \"\\nALTER SUBSCRIPTION %s ADD TABLE \"\n+ \"(relid = %u, state = '%c'\",\n+ qsubname,\n+ subinfo->subrels[i].srrelid,\n+ subinfo->subrels[i].srsubstate);\n+\n+ if (subinfo->subrels[i].srsublsn[0] != '\\0')\n+ appendPQExpBuffer(query, \", LSN = '%s'\",\n+ subinfo->subrels[i].srsublsn);\n+\n+ appendPQExpBufferStr(query, \");\");\n+ }\n\nI previously asked ([1]#11) about how can this ALTER SUBSCRIPTION\nTABLE code happen unless 'preserve_subscriptions' is true, and you\nconfirmed \"It indirectly is, as in that case subinfo->nrels is\nguaranteed to be 0. I just tried to keep the code simpler and avoid\ntoo many nested conditions.\"\n\n~\n\nIf you are worried about too many nested conditions then a simple\nAssert(dopt->preserve_subscriptions); might be good to have here.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n12. check_and_dump_old_cluster\n\n+ /* PG 10 introduced subscriptions. */\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1000 &&\n+ user_opts.preserve_subscriptions)\n+ {\n+ check_for_subscription_state(&old_cluster);\n+ }\n\n12a.\nAll the other checks in this function seem to be in decreasing order\nof PG version so maybe this check should be moved to follow that same\npattern.\n\n~\n\n12b.\nAlso won't it be better to give some error or notice of some kind if\nthe option/version are incompatible? I think this was mentioned in a\nprevious review.\n\ne.g.\n\nif (user_opts.preserve_subscriptions)\n{\n if (GET_MAJOR_VERSION(old_cluster.major_version) < 1000)\n <pg_log or pg_fatal goes here...>;\n check_for_subscription_state(&old_cluster);\n}\n\n~~~\n\n13. 
check_for_subscription_state\n\n+ for (int i = 0; i < ntup; i++)\n+ {\n+ is_error = true;\n+ pg_log(PG_WARNING,\n+ \"\\nWARNING: subscription \\\"%s\\\" has an invalid remote_lsn\",\n+ PQgetvalue(res, 0, 0));\n+ }\n\n13a.\nThis WARNING does not mention the database, but a similar warning\nlater about the non-ready state does mention the database. Probably\nthey should be consistent.\n\n~\n\n13b.\nSomething seems amiss. Here the is_error is assigned true; But later\nwhen you test is_error that is for logging the ready-state problem.\nIsn't there another missing pg_fatal for this invalid remote_lsn case?\n\n======\nsrc/bin/pg_upgrade/option.c\n\n14. usage\n\n+ printf(_(\" --preserve-subscription-state preserve the subscription\nstate fully\\n\"));\n\nWhy say \"fully\"? How is \"preserve the subscription state fully\"\ndifferent to \"preserve the subscription state\" from the user's POV?\n\n------\n[1] My previous v4 code review -\nhttps://www.postgresql.org/message-id/CAHut%2BPuThBY%3DMSYHRgUa6iv6tyCmnqU78itZ%2Bf4rMM2b124vqQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 10 May 2023 17:59:24 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 4:19 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Thu, Apr 13, 2023 at 03:26:56PM +1000, Peter Smith wrote:\n> >\n> > 1.\n> > All the comments look alike, so it is hard to know what is going on.\n> > If each of the main test parts could be highlighted then the test code\n> > would be easier to read IMO.\n> >\n> > Something like below:\n> > [...]\n>\n> I added a bit more comments about what's is being tested. I'm not sure that a\n> big TEST CASE prefix is necessary, as it's not really multiple separated test\n> cases and other stuff can be tested in between. Also AFAICT no other TAP test\n> current needs this kind of banner, even if they're testing more complex\n> scenario.\n\nHmm, I think there are plenty of examples of subscription TAP tests\nhaving some kind of highlighted comments as suggested, for better\nreadability.\n\ne.g. See src/test/subscription\nt/014_binary.pl\nt/015_stream.pl\nt/016_stream_subxact.pl\nt/018_stream_subxact_abort.pl\nt/021_twophase.pl\nt/022_twophase_cascade.pl\nt/023_twophase_stream.pl\nt/028_row_filter.pl\nt/030_origin.pl\nt/031_column_list.pl\nt/032_subscribe_use_index.pl\n\nA simple #################### to separate the main test parts is all\nthat is needed.\n\n\n> > 4b.\n> > All these messages like \"Table t1 should still have 2 rows on the new\n> > subscriber\" don't seem very helpful. e.g. They are not saying anything\n> > about WHAT this is testing or WHY it should still have 2 rows.\n>\n> I don't think that those messages are supposed to say what or why something is\n> tested, just give a quick context / reference on the test in case it's broken.\n> The comments are there to explain in more details what is tested and/or why.\n>\n\nBut, why can’t they do both? They can be a quick reference *and* at\nthe same time give some more meaning to the error log. 
Otherwise,\nthese messages might as well just say ‘ref1’, ‘ref2’, ‘ref3’...\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 10 May 2023 18:08:32 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 24 Apr 2023 at 12:52, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Tue, Apr 18, 2023 at 01:40:51AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> >\n> > I found a cfbot failure on macOS [1]. According to the log,\n> > \"SELECT count(*) FROM t2\" was executed before synchronization was done.\n> >\n> > ```\n> > [09:24:21.018](0.132s) not ok 18 - Table t2 should now have 3 rows on the new subscriber\n> > ```\n> >\n> > With the patch present, wait_for_catchup() is executed after REFRESH, but\n> > it may not be sufficient because it does not check pg_subscription_rel.\n> > wait_for_subscription_sync() seems better for the purpose.\n>\n> Fixed, thanks!\n\nI had a high level look at the patch, few comments:\n1) New ereport style can be used by removing the brackets around errcode:\n1.a)\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"invalid\nrelation identifier used: %s\", rel_str)));\n+ }\n\n1.b)\n+ if (strlen(state_str) != 1)\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"invalid\nrelation state: %s\", state_str)));\n\n1.c)\n+ case ALTER_SUBSCRIPTION_ADD_TABLE:\n+ {\n+ if (!IsBinaryUpgrade)\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_SYNTAX_ERROR)),\n+ errmsg(\"ALTER\nSUBSCRIPTION ... 
ADD TABLE is not supported\"));\n\n\n2) Since this is a single statement, the braces are not required in this case:\n2.a)\n+ if (!fout->dopt->binary_upgrade ||\n!fout->dopt->preserve_subscriptions ||\n+ fout->remoteVersion < 100000)\n+ {\n+ return;\n+ }\n\n2.b) Similarly here too\n+ if (dopt->binary_upgrade && dopt->preserve_subscriptions &&\n+ subinfo->suboriginremotelsn)\n+ {\n+ appendPQExpBuffer(query, \", lsn = '%s'\",\nsubinfo->suboriginremotelsn);\n+ }\n\n3) Since this comment is a very short comment, this can be changed\ninto a single line comment:\n+ /*\n+ * Get subscription relation fields.\n+ */\n\n4) Since cur_rel will be initialized in \"if (cur_srsubid !=\nlast_srsubid)\", it need not be initialized here:\n+ int i,\n+ cur_rel = 0,\n+ ntups,\n\n5) SubRelInfo should be placed above SubRemoveRels:\n+++ b/src/tools/pgindent/typedefs.list\n@@ -2647,6 +2647,7 @@ SubqueryScan\n SubqueryScanPath\n SubqueryScanState\n SubqueryScanStatus\n+SubRelInfo\n SubscriptExecSetup\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 28 Jun 2023 08:46:48 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 24 Apr 2023 at 12:52, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Tue, Apr 18, 2023 at 01:40:51AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> >\n> > I found a cfbot failure on macOS [1]. According to the log,\n> > \"SELECT count(*) FROM t2\" was executed before synchronization was done.\n> >\n> > ```\n> > [09:24:21.018](0.132s) not ok 18 - Table t2 should now have 3 rows on the new subscriber\n> > ```\n> >\n> > With the patch present, wait_for_catchup() is executed after REFRESH, but\n> > it may not be sufficient because it does not check pg_subscription_rel.\n> > wait_for_subscription_sync() seems better for the purpose.\n>\n> Fixed, thanks!\n>\n> v5 attached with all previously mentioned fixes.\n\nFew comments:\n1) Should we document this command:\n+ case ALTER_SUBSCRIPTION_ADD_TABLE:\n+ {\n+ if (!IsBinaryUpgrade)\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_SYNTAX_ERROR)),\n+ errmsg(\"ALTER\nSUBSCRIPTION ... ADD TABLE is not supported\"));\n+\n+ supported_opts = SUBOPT_RELID |\nSUBOPT_STATE | SUBOPT_LSN;\n+ parse_subscription_options(pstate,\nstmt->options,\n+\n supported_opts, &opts);\n+\n+ /* relid and state should always be provided. */\n+ Assert(IsSet(opts.specified_opts,\nSUBOPT_RELID));\n+ Assert(IsSet(opts.specified_opts,\nSUBOPT_STATE));\n+\n+ AddSubscriptionRelState(subid,\nopts.relid, opts.state,\n+\n opts.lsn);\n+\n\nShould we document something like:\nThis command is for use by in-place upgrade utilities. Its use for\nother purposes is not recommended or supported. 
The behavior of the\noption may change in future releases without notice.\n\n2) Similarly in pg_dump too:\n@@ -431,6 +431,7 @@ main(int argc, char **argv)\n {\"table-and-children\", required_argument, NULL, 12},\n {\"exclude-table-and-children\", required_argument, NULL, 13},\n {\"exclude-table-data-and-children\", required_argument,\nNULL, 14},\n+ {\"preserve-subscription-state\", no_argument,\n&dopt.preserve_subscriptions, 1},\n\n\nShould we document something like:\nThis command is for use by in-place upgrade utilities. Its use for\nother purposes is not recommended or supported. The behavior of the\noption may change in future releases without notice.\n\n3) This same error is possible for a ready-state table with an invalid\nremote_lsn; should we include this too in the error message:\n+ if (is_error)\n+ pg_fatal(\"--preserve-subscription-state is incompatible with \"\n+ \"subscription relations in non-ready state\");\n+\n+ check_ok();\n+}\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 1 Jul 2023 10:39:00 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, May 10, 2023 at 05:59:24PM +1000, Peter Smith wrote:\n> 1. ALTER SUBSCRIPTION name ADD TABLE (relid = XYZ, state = 'x' [, lsn = 'X/Y'])\n> \n> I was a bit confused by this relation 'state' mentioned in multiple\n> places. IIUC the pg_upgrade logic is going to reject anything with a\n> non-READY (not 'r') state anyhow, so what is the point of having all\n> the extra grammar/parse_subscription_options etc to handle setting the\n> state when only possible value must be 'r'?\n\nWe are just talking about the handling of an extra DefElem in an\nextensible grammar pattern, so adding the state field does not\nrepresent much maintenance work. I'm OK with the addition of this\nfield in the data set dumped, FWIW, on the ground that it can be\nuseful for debugging purposes when looking at --binary-upgrade dumps,\nand because we aim at copying catalog contents from one cluster to\nanother.\n\nAnyway, I am not convinced that we have any need for a parse-able\ngrammar at all, because anything that's presented on this thread is\naimed at being used only for the internal purpose of an upgrade in a\n--binary-upgrade dump with a direct catalog copy in mind, and having a\ngrammar would encourage abuses of it outside of this context. I think\nthat we should aim for something simpler than what's proposed by the patch,\nactually, with either a single SQL function à-la-binary_upgrade() that\nadds the contents of a relation. Or we can be crazier and just create\nINSERT queries for pg_subscription_rel to provide an exact copy of the\ncatalog contents. A SQL function would be more consistent with other\nobject types that use similar tricks, see\nbinary_upgrade_create_empty_extension() that does something similar\nfor some pg_extension records. So, this function would take 4 input\narguments:\n- The subscription name or OID.\n- The relation OID.\n- Its LSN.\n- Its sync state.\n\n> 2. 
state V relstate\n> \n> I still feel code readability suffers a bit by calling some fields/vars\n> a generic 'state' instead of the more descriptive 'relstate'. Maybe\n> it's just me.\n> \n> Previously commented same (see [1]#3, #4, #5)\n\nAgreed to be more careful with the naming here.\n--\nMichael",
"msg_date": "Wed, 19 Jul 2023 16:17:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 12:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 10, 2023 at 05:59:24PM +1000, Peter Smith wrote:\n> > 1. ALTER SUBSCRIPTION name ADD TABLE (relid = XYZ, state = 'x' [, lsn = 'X/Y'])\n> >\n> > I was a bit confused by this relation 'state' mentioned in multiple\n> > places. IIUC the pg_upgrade logic is going to reject anything with a\n> > non-READY (not 'r') state anyhow, so what is the point of having all\n> > the extra grammar/parse_subscription_options etc to handle setting the\n> > state when only possible value must be 'r'?\n>\n> We are just talking about the handling of an extra DefElem in an\n> extensible grammar pattern, so adding the state field does not\n> represent much maintenance work. I'm OK with the addition of this\n> field in the data set dumped, FWIW, on the ground that it can be\n> useful for debugging purposes when looking at --binary-upgrade dumps,\n> and because we aim at copying catalog contents from one cluster to\n> another.\n>\n> Anyway, I am not convinced that we have any need for a parse-able\n> grammar at all, because anything that's presented on this thread is\n> aimed at being used only for the internal purpose of an upgrade in a\n> --binary-upgrade dump with a direct catalog copy in mind, and having a\n> grammar would encourage abuses of it outside of this context. I think\n> that we should aim for simpler than what's proposed by the patch,\n> actually, with either a single SQL function à-la-binary_upgrade() that\n> adds the contents of a relation. Or we can be crazier and just create\n> INSERT queries for pg_subscription_rel to provide an exact copy of the\n> catalog contents. A SQL function would be more consistent with other\n> objects types that use similar tricks, see\n> binary_upgrade_create_empty_extension() that does something similar\n> for some pg_extension records. 
So, this function would require in\n> input 4 arguments:\n> - The subscription name or OID.\n> - The relation OID.\n> - Its LSN.\n> - Its sync state.\n>\n\n+1 for doing it via function (something like\nbinary_upgrade_create_sub_rel_state). We already have the internal\nfunction AddSubscriptionRelState() that can do the core work.\n\nLike the publisher-side upgrade patch [1], I think we should allow\nupgrading subscriptions by default instead of behind some flag like\n--preserve-subscription-state. If required, we can introduce an --exclude\noption for upgrade. Having it just for pg_dump sounds reasonable to\nme.\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB58664C81887B3AF2EB6B16E3F5939%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Sep 2023 11:51:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Apr 27, 2023 at 1:18 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> 03. main\n>\n> Currently --preserve-subscription-state and --no-subscriptions can be used\n> together, but the situation is quite unnatural. Shouldn't we exclude them?\n>\n\nRight, that makes sense to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Sep 2023 11:56:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Sep 04, 2023 at 11:51:14AM +0530, Amit Kapila wrote:\n> +1 for doing it via function (something like\n> binary_upgrade_create_sub_rel_state). We already have the internal\n> function AddSubscriptionRelState() that can do the core work.\n\nIt is one of these patches that I have let aside for too long, and it\nsolves a use-case of its own. I think that I could hack that pretty\nquickly given that Julien has done a bunch of the ground work. Would\nyou agree with that?\n\n> Like the publisher-side upgrade patch [1], I think we should allow\n> upgrading subscriptions by default instead with some flag like\n> --preserve-subscription-state. If required, we can introduce --exclude\n> option for upgrade. Having it just for pg_dump sounds reasonable to\n> me.\n> \n> [1] - https://www.postgresql.org/message-id/TYAPR01MB58664C81887B3AF2EB6B16E3F5939%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nIn the interface of the publisher for pg_upgrade agreed on and set in\nstone? I certainly agree to have a consistent upgrade experience for\nthe two sides of logical replication, publications and subscriptions.\nAlso, I'd rather have a filtering option at the same time as the\nupgrade option to give more control to users from the start.\n--\nMichael",
"msg_date": "Mon, 4 Sep 2023 15:44:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Sep 4, 2023 at 11:51 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 19, 2023 at 12:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, May 10, 2023 at 05:59:24PM +1000, Peter Smith wrote:\n> > > 1. ALTER SUBSCRIPTION name ADD TABLE (relid = XYZ, state = 'x' [, lsn = 'X/Y'])\n> > >\n> > > I was a bit confused by this relation 'state' mentioned in multiple\n> > > places. IIUC the pg_upgrade logic is going to reject anything with a\n> > > non-READY (not 'r') state anyhow, so what is the point of having all\n> > > the extra grammar/parse_subscription_options etc to handle setting the\n> > > state when only possible value must be 'r'?\n> >\n> > We are just talking about the handling of an extra DefElem in an\n> > extensible grammar pattern, so adding the state field does not\n> > represent much maintenance work. I'm OK with the addition of this\n> > field in the data set dumped, FWIW, on the ground that it can be\n> > useful for debugging purposes when looking at --binary-upgrade dumps,\n> > and because we aim at copying catalog contents from one cluster to\n> > another.\n> >\n> > Anyway, I am not convinced that we have any need for a parse-able\n> > grammar at all, because anything that's presented on this thread is\n> > aimed at being used only for the internal purpose of an upgrade in a\n> > --binary-upgrade dump with a direct catalog copy in mind, and having a\n> > grammar would encourage abuses of it outside of this context. I think\n> > that we should aim for simpler than what's proposed by the patch,\n> > actually, with either a single SQL function à-la-binary_upgrade() that\n> > adds the contents of a relation. Or we can be crazier and just create\n> > INSERT queries for pg_subscription_rel to provide an exact copy of the\n> > catalog contents. 
A SQL function would be more consistent with other\n> > objects types that use similar tricks, see\n> > binary_upgrade_create_empty_extension() that does something similar\n> > for some pg_extension records. So, this function would require in\n> > input 4 arguments:\n> > - The subscription name or OID.\n> > - The relation OID.\n> > - Its LSN.\n> > - Its sync state.\n> >\n>\n> +1 for doing it via function (something like\n> binary_upgrade_create_sub_rel_state). We already have the internal\n> function AddSubscriptionRelState() that can do the core work.\n>\n\nOne more related point:\n@@ -4814,9 +4923,31 @@ dumpSubscription(Archive *fout, const\nSubscriptionInfo *subinfo)\n if (strcmp(subinfo->subpasswordrequired, \"t\") != 0)\n appendPQExpBuffer(query, \", password_required = false\");\n\n+ if (dopt->binary_upgrade && dopt->preserve_subscriptions &&\n+ subinfo->suboriginremotelsn)\n+ {\n+ appendPQExpBuffer(query, \", lsn = '%s'\", subinfo->suboriginremotelsn);\n+ }\n\nEven during Create Subscription, we can use an existing function\n(pg_replication_origin_advance()) or a set of functions to advance the\norigin instead of introducing a new option.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Sep 2023 12:19:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Sep 4, 2023 at 12:15 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Sep 04, 2023 at 11:51:14AM +0530, Amit Kapila wrote:\n> > +1 for doing it via function (something like\n> > binary_upgrade_create_sub_rel_state). We already have the internal\n> > function AddSubscriptionRelState() that can do the core work.\n>\n> It is one of these patches that I have let aside for too long, and it\n> solves a use-case of its own. I think that I could hack that pretty\n> quickly given that Julien has done a bunch of the ground work. Would\n> you agree with that?\n>\n\nYeah, I agree that could be hacked quickly but note I haven't reviewed\nin detail if there are other design issues in this patch. Note that we\nthought first to support the upgrade of the publisher node, otherwise,\nimmediately after upgrading the subscriber and publisher, the\nsubscriptions won't work and start giving errors as they are dependent\non slots in the publisher. One other point that needs some thought is\nthat the LSN positions we are going to copy in the catalog may no\nlonger be valid after the upgrade (of the publisher) because we reset\nWAL. Does that need some special consideration or are we okay with\nthat in all cases? As of now, things are quite safe as documented in\npg_dump doc page that it will be the user's responsibility to set up\nreplication after dump/restore. I think it would be really helpful if\nyou could share your thoughts on the publisher-side matter as we are\nfacing a few tricky questions to be answered. For example, see a new\nthread [1].\n\n> > Like the publisher-side upgrade patch [1], I think we should allow\n> > upgrading subscriptions by default instead with some flag like\n> > --preserve-subscription-state. If required, we can introduce --exclude\n> > option for upgrade. 
Having it just for pg_dump sounds reasonable to\n> > me.\n> >\n> > [1] - https://www.postgresql.org/message-id/TYAPR01MB58664C81887B3AF2EB6B16E3F5939%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n>\n> In the interface of the publisher for pg_upgrade agreed on and set in\n> stone? I certainly agree to have a consistent upgrade experience for\n> the two sides of logical replication, publications and subscriptions.\n> Also, I'd rather have a filtering option at the same time as the\n> upgrade option to give more control to users from the start.\n>\n\nThe point raised by Jonathan for not having an option for pg_upgrade\nis that it will be easier for users; otherwise, users would always need to\nenable this option. Consider a replication setup: wouldn't users want\nit to be upgraded by default? Asking them to do that via an option\nwould be an inconvenience. So, that was the reason we wanted to have\nan --exclude option and by default allow slots to be upgraded. I think\nthe same theory applies here.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LV3%2B76CSOAk0h8Kv0AKb-OETsJHe6Sq6172-7DZXf0Qg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Sep 2023 14:12:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, 19 Jul 2023 at 12:47, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 10, 2023 at 05:59:24PM +1000, Peter Smith wrote:\n> > 1. ALTER SUBSCRIPTION name ADD TABLE (relid = XYZ, state = 'x' [, lsn = 'X/Y'])\n> >\n> > I was a bit confused by this relation 'state' mentioned in multiple\n> > places. IIUC the pg_upgrade logic is going to reject anything with a\n> > non-READY (not 'r') state anyhow, so what is the point of having all\n> > the extra grammar/parse_subscription_options etc to handle setting the\n> > state when only possible value must be 'r'?\n>\n> We are just talking about the handling of an extra DefElem in an\n> extensible grammar pattern, so adding the state field does not\n> represent much maintenance work. I'm OK with the addition of this\n> field in the data set dumped, FWIW, on the ground that it can be\n> useful for debugging purposes when looking at --binary-upgrade dumps,\n> and because we aim at copying catalog contents from one cluster to\n> another.\n>\n> Anyway, I am not convinced that we have any need for a parse-able\n> grammar at all, because anything that's presented on this thread is\n> aimed at being used only for the internal purpose of an upgrade in a\n> --binary-upgrade dump with a direct catalog copy in mind, and having a\n> grammar would encourage abuses of it outside of this context. I think\n> that we should aim for simpler than what's proposed by the patch,\n> actually, with either a single SQL function à-la-binary_upgrade() that\n> adds the contents of a relation. Or we can be crazier and just create\n> INSERT queries for pg_subscription_rel to provide an exact copy of the\n> catalog contents. A SQL function would be more consistent with other\n> objects types that use similar tricks, see\n> binary_upgrade_create_empty_extension() that does something similar\n> for some pg_extension records. 
So, this function would require in\n> input 4 arguments:\n> - The subscription name or OID.\n> - The relation OID.\n> - Its LSN.\n> - Its sync state.\n\nAdded a SQL function to handle the insertion and removed the \"ALTER\nSUBSCRIPTION ... ADD TABLE\" command that was added.\nAttached patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Wed, 6 Sep 2023 16:28:39 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Sep 04, 2023 at 02:12:58PM +0530, Amit Kapila wrote:\n> Yeah, I agree that could be hacked quickly but note I haven't reviewed\n> in detail if there are other design issues in this patch. Note that we\n> thought first to support the upgrade of the publisher node, otherwise,\n> immediately after upgrading the subscriber and publisher, the\n> subscriptions won't work and start giving errors as they are dependent\n> on slots in the publisher. One other point that needs some thought is\n> that the LSN positions we are going to copy in the catalog may no\n> longer be valid after the upgrade (of the publisher) because we reset\n> WAL. Does that need some special consideration or are we okay with\n> that in all cases?\n\nIn pg_upgrade, copy_xact_xlog_xid() puts the new node ahead of the old\ncluster by 8 segments on TLI 1, so how would be it a problem if the\nsubscribers keep a remote confirmed LSN lower than that in their\ncatalogs? (You've mentioned that to me offline, but I forgot the\ndetails in the code.)\n\n> As of now, things are quite safe as documented in\n> pg_dump doc page that it will be the user's responsibility to set up\n> replication after dump/restore. I think it would be really helpful if\n> you could share your thoughts on the publisher-side matter as we are\n> facing a few tricky questions to be answered. For example, see a new\n> thread [1].\n\nIn my experience, users are quite used to upgrade standbys *first*,\neven in simple scenarios like minor upgrades, because that's the only\nway to do things safely. 
For example, updating and/or upgrading\nprimaries before the standbys could be a problem if an update\nintroduces a slight change in the WAL record format that could be\ngenerated by the primary but not be processed by a standby, and we've\ndone such tweaks in some records in the past for some bug fixes that\nhad to be backpatched to stable branches.\n\nIMO, the upgrade of subscriber nodes and the upgrade of publisher\nnodes need to be treated as two independent processing problems, dealt\nwith separately.\n\nAs you mentioned to me earlier offline, these two have, from what I\nunderstand, one dependency: during a publisher upgrade we need to make\nsure that there are no invalid slots when beginning to run pg_upgrade,\nand that the confirmed LSN of all the slots used by the subscribers\nmatches the shutdown checkpoint's LSN, ensuring that the\nsubscribers would not lose any data because everything's already been\nconsumed by them when the publisher gets to be upgraded.\n\n> The point raised by Jonathan for not having an option for pg_upgrade\n> is that it will be easy for users, otherwise, users always need to\n> enable this option. Consider a replication setup, wouldn't users want\n> by default it to be upgraded? Asking them to do that via an option\n> would be an inconvenience. So, that was the reason, we wanted to have\n> an --exclude option and by default allow slots to be upgraded. I think\n> the same theory applies here.\n> \n> [1] - https://www.postgresql.org/message-id/CAA4eK1LV3%2B76CSOAk0h8Kv0AKb-OETsJHe6Sq6172-7DZXf0Qg%40mail.gmail.com\n\nI saw this thread, and have some thoughts to share. Will reply there.\n--\nMichael",
"msg_date": "Thu, 7 Sep 2023 15:33:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, 27 Apr 2023 at 13:18, Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Julien,\n>\n> Thank you for updating the patch! Followings are my comments.\n>\n> 01. documentation\n>\n> In this page steps to upgrade server with pg_upgrade is aligned. Should we write\n> down about subscriber? IIUC, it is sufficient to just add to \"Run pg_upgrade\",\n> like \"Apart from streaming replication standby, subscriber node can be upgrade\n> via pg_upgrade. At that time we strongly recommend to use --preserve-subscription-state\".\n\nNow this option has been removed and made default\n\n> 02. AlterSubscription\n>\n> I agreed that oid must be preserved between nodes, but I'm still afraid that\n> given oid is unconditionally trusted and added to pg_subscription_rel.\n> I think we can check the existenec of the relation via SearchSysCache1(RELOID,\n> ObjectIdGetDatum(relid)). Of cource the check is optional, so it should be\n> executed only when USE_ASSERT_CHECKING is on. Thought?\n\nModified\n\n> 03. main\n>\n> Currently --preserve-subscription-state and --no-subscriptions can be used\n> together, but the situation is quite unnatural. Shouldn't we exclude them?\n\nThis option is removed now, so this scenario will not happen\n\n> 04. getSubscriptionTables\n>\n>\n> ```\n> + SubRelInfo *rels = NULL;\n> ```\n>\n> The variable is used only inside the loop, so the definition should be also moved.\n\nThis logic is changed slightly, so it needs to be kept outside\n\n> 05. getSubscriptionTables\n>\n> ```\n> + nrels = atooid(PQgetvalue(res, i, i_nrels));\n> ```\n>\n> atoi() should be used instead of atooid().\n\nModified\n\n> 06. 
getSubscriptionTables\n>\n> ```\n> + subinfo = findSubscriptionByOid(cur_srsubid);\n> +\n> + nrels = atooid(PQgetvalue(res, i, i_nrels));\n> + rels = pg_malloc(nrels * sizeof(SubRelInfo));\n> +\n> + subinfo->subrels = rels;\n> + subinfo->nrels = nrels;\n> ```\n>\n> Maybe it never occurs, but findSubscriptionByOid() can return NULL. At that time\n> accesses to their attributes will lead the Segfault. Some handling is needed.\n\nThis should not happen, added a fatal error in this case.\n\n> 07. dumpSubscription\n>\n> Hmm, SubRelInfos are still dumped at the dumpSubscription(). I think this style\n> breaks the manner of pg_dump. I think another dump function is needed. Please\n> see dumpPublicationTable() and dumpPublicationNamespace(). If you have a reason\n> to use the style, some comments to describe it is needed.\n\nModified\n\n> 08. _SubRelInfo\n>\n> If you will address above comment, DumpableObject must be added as new attribute.\n\nModified\n\n> 09. check_for_subscription_state\n>\n> ```\n> + for (int i = 0; i < ntup; i++)\n> + {\n> + is_error = true;\n> + pg_log(PG_WARNING,\n> + \"\\nWARNING: subscription \\\"%s\\\" has an invalid remote_lsn\",\n> + PQgetvalue(res, 0, 0));\n> + }\n> ```\n>\n> The second argument should be i to report the name of subscription more than 2.\n\nModified\n\n> 10. 003_subscription.pl\n>\n> ```\n> $old_sub->wait_for_subscription_sync($publisher, 'sub');\n>\n> my $result = $old_sub->safe_psql('postgres',\n> \"SELECT COUNT(*) FROM pg_subscription_rel WHERE srsubstate != 'r'\");\n> is ($result, qq(0), \"All tables in pg_subscription_rel should be in ready state\");\n> ```\n>\n> I think there is a possibility to cause a timing issue, because the SELECT may\n> be executed before srsubstate is changed from 's' to 'r'. Maybe poll_query_until()\n> can be used instead.\n\nModified\n\n> 11. 
003_subscription.pl\n>\n> ```\n> command_ok(\n> [\n> 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n> '-D', $new_sub->data_dir, '-b', $bindir,\n> '-B', $bindir, '-s', $new_sub->host,\n> '-p', $old_sub->port, '-P', $new_sub->port,\n> $mode,\n> '--preserve-subscription-state',\n> '--check',\n> ],\n> 'run of pg_upgrade --check for old instance with correct sub rel');\n> ```\n>\n> Missing check of pg_upgrade_output.d?\n\nModified\n\n> And maybe you missed to run pgperltidy.\n\nIt has been run for the new patch.\n\nThe attached v7 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Mon, 11 Sep 2023 16:01:52 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, 10 May 2023 at 13:29, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for the v5-0001 patch code.\n>\n> ======\n> General\n>\n> 1. ALTER SUBSCRIPTION name ADD TABLE (relid = XYZ, state = 'x' [, lsn = 'X/Y'])\n>\n> I was a bit confused by this relation 'state' mentioned in multiple\n> places. IIUC the pg_upgrade logic is going to reject anything with a\n> non-READY (not 'r') state anyhow, so what is the point of having all\n> the extra grammar/parse_subscription_options etc to handle setting the\n> state when only possible value must be 'r'?\n>\n\nThis command has been removed, this code has been removed\n\n>\n> 2. state V relstate\n>\n> I still feel code readbility suffers a bit by calling some fields/vars\n> a generic 'state' instead of the more descriptive 'relstate'. Maybe\n> it's just me.\n>\n> Previously commented same (see [1]#3, #4, #5)\n\nFew of the code has been removed, I have modified wherever possible\n\n> ======\n> doc/src/sgml/ref/pgupgrade.sgml\n>\n> 3.\n> + <para>\n> + Fully preserve the logical subscription state if any. That includes\n> + the underlying replication origin with their remote LSN and the list of\n> + relations in each subscription so that replication can be simply\n> + resumed if the subscriptions are reactivated.\n> + </para>\n>\n> I think the \"if any\" part is not necessary. If you remove those words,\n> then the rest of the sentence can be simplified.\n>\n> SUGGESTION\n> Fully preserve the logical subscription state, which includes the\n> underlying replication origin's remote LSN, and the list of relations\n> in each subscription. 
This allows replication to simply resume when\n> the subscriptions are reactivated.\n>\nThis has been removed now.\n\n>\n> 4.\n> + <para>\n> + If this option isn't used, it is up to the user to reactivate the\n> + subscriptions in a suitable way; see the subscription part in <xref\n> + linkend=\"pg-dump-notes\"/> for more information.\n> + </para>\n>\n> The link still renders strangely as previously reported (see [1]#2b).\n>\nThis has been removed now\n>\n> 5.\n> + <para>\n> + If this option is used and any of the subscription on the old cluster\n> + has an unknown <varname>remote_lsn</varname> (0/0), or has any relation\n> + in a state different from <literal>r</literal> (ready), the\n> + <application>pg_upgrade</application> run will error.\n> + </para>\n>\n> 5a.\n> /subscription/subscriptions/\n\nModified\n\n> 5b\n> \"has any relation in a state different from r\" --> \"has any relation\n> with state other than r\"\n\nModified slightly\n\n> ======\n> src/backend/commands/subscriptioncmds.c\n>\n> 6.\n> + if (strlen(state_str) != 1)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"invalid relation state: %s\", state_str)));\n>\n> Is this relation state validation overly simplistic, by only checking\n> for length 1? Shouldn't this just be asserting the relstate must be\n> 'r'?\n\nThis code has been removed\n\n> ======\n> src/bin/pg_dump/pg_dump.c\n>\n> 7. getSubscriptionTables\n>\n> +/*\n> + * getSubscriptionTables\n> + * get information about the given subscription's relations\n> + */\n> +void\n> +getSubscriptionTables(Archive *fout)\n> +{\n> + SubscriptionInfo *subinfo;\n> + SubRelInfo *rels = NULL;\n> + PQExpBuffer query;\n> + PGresult *res;\n> + int i_srsubid;\n> + int i_srrelid;\n> + int i_srsubstate;\n> + int i_srsublsn;\n> + int i_nrels;\n> + int i,\n> + cur_rel = 0,\n> + ntups,\n> + last_srsubid = InvalidOid;\n>\n> Why some above are single int declarations and some are compound int\n> declarations? 
Why not make them all consistent?\n\nModified\n\n> ~\n>\n> 8.\n> + appendPQExpBuffer(query, \"SELECT srsubid, srrelid, srsubstate, srsublsn,\"\n> + \" count(*) OVER (PARTITION BY srsubid) AS nrels\"\n> + \" FROM pg_subscription_rel\"\n> + \" ORDER BY srsubid\");\n>\n> Should this SQL be schema-qualified like pg_catalog.pg_subscription_rel?\n\nModified\n\n> ~\n>\n> 9.\n> + for (i = 0; i < ntups; i++)\n> + {\n> + int cur_srsubid = atooid(PQgetvalue(res, i, i_srsubid));\n>\n> Should 'cur_srsubid' be declared Oid to match the atooid?\n\nModified\n\n> ~~~\n>\n> 10. getSubscriptions\n>\n> + if (PQgetisnull(res, i, i_suboriginremotelsn))\n> + subinfo[i].suboriginremotelsn = NULL;\n> + else\n> + subinfo[i].suboriginremotelsn =\n> + pg_strdup(PQgetvalue(res, i, i_suboriginremotelsn));\n> +\n> + /*\n> + * For now assume there's no relation associated with the\n> + * subscription. Later code might update this field and allocate\n> + * subrels as needed.\n> + */\n> + subinfo[i].nrels = 0;\n>\n> The wording \"For now assume there's no\" kind of gives an ambiguous\n> interpretation for this comment. IMO it sounds like this is the\n> \"current\" logic but some future PG version may behave differently - I\n> don't think that is the intended meaning at all.\n>\n> SUGGESTION.\n> Here we just initialize nrels to say there are 0 relations associated\n> with the subscription. If necessary, subsequent logic will update this\n> field and allocate the subrels.\n\nThis part of logic has been removed now as it is no more required\n\n> ~~~\n>\n> 11. 
dumpSubscription\n>\n> + for (i = 0; i < subinfo->nrels; i++)\n> + {\n> + appendPQExpBuffer(query, \"\\nALTER SUBSCRIPTION %s ADD TABLE \"\n> + \"(relid = %u, state = '%c'\",\n> + qsubname,\n> + subinfo->subrels[i].srrelid,\n> + subinfo->subrels[i].srsubstate);\n> +\n> + if (subinfo->subrels[i].srsublsn[0] != '\\0')\n> + appendPQExpBuffer(query, \", LSN = '%s'\",\n> + subinfo->subrels[i].srsublsn);\n> +\n> + appendPQExpBufferStr(query, \");\");\n> + }\n>\n> I previously asked ([1]#11) about how can this ALTER SUBSCRIPTION\n> TABLE code happen unless 'preserve_subscriptions' is true, and you\n> confirmed \"It indirectly is, as in that case subinfo->nrels is\n> guaranteed to be 0. I just tried to keep the code simpler and avoid\n> too many nested conditions.\"\n\n I have added the same check used that is used to get the subscription\ntables to avoid confusion.\n\n> ~\n>\n> If you are worried about too many nested conditions then a simple\n> Assert(dopt->preserve_subscriptions); might be good to have here.\n>\n> ======\n> src/bin/pg_upgrade/check.c\n>\n> 12. check_and_dump_old_cluster\n>\n> + /* PG 10 introduced subscriptions. */\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1000 &&\n> + user_opts.preserve_subscriptions)\n> + {\n> + check_for_subscription_state(&old_cluster);\n> + }\n>\n> 12a.\n> All the other checks in this function seem to be in decreasing order\n> of PG version so maybe this check should be moved to follow that same\n> pattern.\n\nModified\n\n> ~\n>\n> 12b.\n> Also won't it be better to give some error or notice of some kind if\n> the option/version are incompatible? I think this was mentioned in a\n> previous review.\n>\n> e.g.\n>\n> if (user_opts.preserve_subscriptions)\n> {\n> if (GET_MAJOR_VERSION(old_cluster.major_version) < 1000)\n> <pg_log or pg_fatal goes here...>;\n> check_for_subscription_state(&old_cluster);\n> }\n\nThis has been removed now\n\n> ~~~\n>\n> 13. 
check_for_subscription_state\n>\n> + for (int i = 0; i < ntup; i++)\n> + {\n> + is_error = true;\n> + pg_log(PG_WARNING,\n> + \"\\nWARNING: subscription \\\"%s\\\" has an invalid remote_lsn\",\n> + PQgetvalue(res, 0, 0));\n> + }\n>\n> 13a.\n> This WARNING does not mention the database, but a similar warning\n> later about the non-ready state does mention the database. Probably\n> they should be consistent.\n\nModified\n\n> ~\n>\n> 13b.\n> Something seems amiss. Here the is_error is assigned true; But later\n> when you test is_error that is for logging the ready-state problem.\n> Isn't there another missing pg_fatal for this invalid remote_lsn case?\n\nModified\n\n> ======\n> src/bin/pg_upgrade/option.c\n>\n> 14. usage\n>\n> + printf(_(\" --preserve-subscription-state preserve the subscription\n> state fully\\n\"));\n>\n> Why say \"fully\"? How is \"preserve the subscription state fully\"\n> different to \"preserve the subscription state\" from the user's POV?\n\nThis has been removed now\n\nThese are handled as part of v7 posted at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm1ZrbHaWpJwwNhDTJocRKWd3rEkgJazuDdZ9Z-WdvonFg%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 11 Sep 2023 16:06:56 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 4 Sept 2023 at 13:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 4, 2023 at 11:51 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jul 19, 2023 at 12:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Wed, May 10, 2023 at 05:59:24PM +1000, Peter Smith wrote:\n> > > > 1. ALTER SUBSCRIPTION name ADD TABLE (relid = XYZ, state = 'x' [, lsn = 'X/Y'])\n> > > >\n> > > > I was a bit confused by this relation 'state' mentioned in multiple\n> > > > places. IIUC the pg_upgrade logic is going to reject anything with a\n> > > > non-READY (not 'r') state anyhow, so what is the point of having all\n> > > > the extra grammar/parse_subscription_options etc to handle setting the\n> > > > state when only possible value must be 'r'?\n> > >\n> > > We are just talking about the handling of an extra DefElem in an\n> > > extensible grammar pattern, so adding the state field does not\n> > > represent much maintenance work. I'm OK with the addition of this\n> > > field in the data set dumped, FWIW, on the ground that it can be\n> > > useful for debugging purposes when looking at --binary-upgrade dumps,\n> > > and because we aim at copying catalog contents from one cluster to\n> > > another.\n> > >\n> > > Anyway, I am not convinced that we have any need for a parse-able\n> > > grammar at all, because anything that's presented on this thread is\n> > > aimed at being used only for the internal purpose of an upgrade in a\n> > > --binary-upgrade dump with a direct catalog copy in mind, and having a\n> > > grammar would encourage abuses of it outside of this context. I think\n> > > that we should aim for simpler than what's proposed by the patch,\n> > > actually, with either a single SQL function à-la-binary_upgrade() that\n> > > adds the contents of a relation. Or we can be crazier and just create\n> > > INSERT queries for pg_subscription_rel to provide an exact copy of the\n> > > catalog contents. 
A SQL function would be more consistent with other\n> > > objects types that use similar tricks, see\n> > > binary_upgrade_create_empty_extension() that does something similar\n> > > for some pg_extension records. So, this function would require in\n> > > input 4 arguments:\n> > > - The subscription name or OID.\n> > > - The relation OID.\n> > > - Its LSN.\n> > > - Its sync state.\n> > >\n> >\n> > +1 for doing it via function (something like\n> > binary_upgrade_create_sub_rel_state). We already have the internal\n> > function AddSubscriptionRelState() that can do the core work.\n> >\n\nModified\n\n> One more related point:\n> @@ -4814,9 +4923,31 @@ dumpSubscription(Archive *fout, const\n> SubscriptionInfo *subinfo)\n> if (strcmp(subinfo->subpasswordrequired, \"t\") != 0)\n> appendPQExpBuffer(query, \", password_required = false\");\n>\n> + if (dopt->binary_upgrade && dopt->preserve_subscriptions &&\n> + subinfo->suboriginremotelsn)\n> + {\n> + appendPQExpBuffer(query, \", lsn = '%s'\", subinfo->suboriginremotelsn);\n> + }\n>\n> Even during Create Subscription, we can use an existing function\n> (pg_replication_origin_advance()) or a set of functions to advance the\n> origin instead of introducing a new option.\n\nAdded a function binary_upgrade_sub_replication_origin_advance which\nwill: a) check if the subscription exists, b) get the replication name\nfor subscription and c) advance the replication origin.\n\nThese are handled as part of v7 posted at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm1ZrbHaWpJwwNhDTJocRKWd3rEkgJazuDdZ9Z-WdvonFg%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 11 Sep 2023 17:19:27 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, 10 May 2023 at 13:39, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Apr 24, 2023 at 4:19 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > On Thu, Apr 13, 2023 at 03:26:56PM +1000, Peter Smith wrote:\n> > >\n> > > 1.\n> > > All the comments look alike, so it is hard to know what is going on.\n> > > If each of the main test parts could be highlighted then the test code\n> > > would be easier to read IMO.\n> > >\n> > > Something like below:\n> > > [...]\n> >\n> > I added a bit more comments about what's is being tested. I'm not sure that a\n> > big TEST CASE prefix is necessary, as it's not really multiple separated test\n> > cases and other stuff can be tested in between. Also AFAICT no other TAP test\n> > current needs this kind of banner, even if they're testing more complex\n> > scenario.\n>\n> Hmm, I think there are plenty of examples of subscription TAP tests\n> having some kind of highlighted comments as suggested, for better\n> readability.\n>\n> e.g. See src/test/subscription\n> t/014_binary.pl\n> t/015_stream.pl\n> t/016_stream_subxact.pl\n> t/018_stream_subxact_abort.pl\n> t/021_twophase.pl\n> t/022_twophase_cascade.pl\n> t/023_twophase_stream.pl\n> t/028_row_filter.pl\n> t/030_origin.pl\n> t/031_column_list.pl\n> t/032_subscribe_use_index.pl\n>\n> A simple #################### to separate the main test parts is all\n> that is needed.\n\nModified\n\n>\n> > > 4b.\n> > > All these messages like \"Table t1 should still have 2 rows on the new\n> > > subscriber\" don't seem very helpful. e.g. They are not saying anything\n> > > about WHAT this is testing or WHY it should still have 2 rows.\n> >\n> > I don't think that those messages are supposed to say what or why something is\n> > tested, just give a quick context / reference on the test in case it's broken.\n> > The comments are there to explain in more details what is tested and/or why.\n> >\n>\n> But, why can’t they do both? 
They can be a quick reference *and* at\n> the same time give some more meaning to the error log. Otherwise,\n> these messages might as well just say ‘ref1’, ‘ref2’, ‘ref3’...\n\nModified\n\nThese are handled as part of v7 posted at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm1ZrbHaWpJwwNhDTJocRKWd3rEkgJazuDdZ9Z-WdvonFg%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 11 Sep 2023 17:20:30 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThank you for updating the patch! Here are some comments.\r\n\r\nSorry if there are duplicate comments - the thread revived recently so I might\r\nlose my memory.\r\n\r\n01. General\r\n\r\nIs there a possibility that apply worker on old cluster connects to the\r\npublisher during the upgrade? Regarding the pg_upgrade on publisher, we\r\nrefuse TCP/IP connections from remotes and port number is also changed, so we can\r\nassume that subscriber does not connect to. But IIUC such settings may not affect\r\nto the connection source, so that the apply worker may try to connect to the\r\npublisher. Also, are there any hazards if it happens?\r\n\r\n02. Upgrade functions\r\n\r\nTwo functions - binary_upgrade_create_sub_rel_state and binary_upgrade_sub_replication_origin_advance\r\nshould be located at pg_upgrade_support.c. Also, CHECK_IS_BINARY_UPGRADE() macro\r\ncan be used.\r\n\r\n03. Parameter combinations\r\n\r\nIIUC getSubscriptionTables() should be exited quickly if --no-subscriptions is\r\nspecified, whereas binary_upgrade_create_sub_rel_state() fails.\r\n\r\n\r\n04. I failed my test\r\n\r\nI executed the attached script but failed to upgrade:\r\n\r\n```\r\nRestoring database schemas in the new cluster \r\n postgres \r\n*failure*\r\n\r\nConsult the last few lines of \"data_N3/pg_upgrade_output.d/20230912T054546.320/log/pg_upgrade_dump_5.log\" for\r\nthe probable cause of the failure.\r\nFailure, exiting\r\n```\r\n\r\nI checked the log and found that binary_upgrade_create_sub_rel_state() does not\r\nsupport skipping the fourth argument:\r\n\r\n```\r\npg_restore: from TOC entry 4059; 16384 16387 SUBSCRIPTION TABLE sub sub postgres\r\npg_restore: error: could not execute query: ERROR: function binary_upgrade_create_sub_rel_state(unknown, integer, unknown) does not exist\r\nLINE 1: SELECT binary_upgrade_create_sub_rel_state('sub', 16384, 'r'...\r\n ^\r\nHINT: No function matches the given name and argument types. 
You might need to add explicit type casts.\r\nCommand was: SELECT binary_upgrade_create_sub_rel_state('sub', 16384, 'r');\r\n```\r\n\r\nIIUC if we allow to skip arguments, we must define wrappers like pg_copy_logical_replication_slot_*.\r\nAnother approach is that pg_dump always dumps srsublsn even if it is NULL.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 12 Sep 2023 08:55:50 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "On Monday, September 11, 2023 6:32 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> \r\n> \r\n> The attached v7 patch has the changes for the same.\r\n\r\nThanks for updating the patch, here are a few comments:\r\n\r\n\r\n1.\r\n\r\n+/*\r\n+ * binary_upgrade_sub_replication_origin_advance\r\n+ *\r\n+ * Update the remote_lsn for the subscriber's replication origin.\r\n+ */\r\n+Datum\r\n+binary_upgrade_sub_replication_origin_advance(PG_FUNCTION_ARGS)\r\n+{\r\n\r\nIs there any usage apart from pg_upgrade for this function, if not, I think\r\nwe'd better move this function to pg_upgrade_support.c. If yes, I think maybe\r\nbetter to rename it to a general one.\r\n\r\n2.\r\n\r\n+ * Verify that all subscriptions have a valid remote_lsn and don't contain\r\n+ * any table in srsubstate different than ready ('r').\r\n+ */\r\n+static void\r\n+check_for_subscription_state(ClusterInfo *cluster)\r\n\r\nI think we'd better follow the same style of\r\ncheck_for_isn_and_int8_passing_mismatch() to record the invalid things in a\r\nfile.\r\n\r\n\r\n3.\r\n\r\n+\t\tif (fout->dopt->binary_upgrade && fout->remoteVersion >= 100000)\r\n+\t\t{\r\n+\t\t\tappendPQExpBuffer(query,\r\n+\t\t\t\t\t\t\t \"SELECT binary_upgrade_create_sub_rel_state('%s', %u, '%c'\",\r\n+\t\t\t\t\t\t\t subrinfo->dobj.name,\r\n\r\nI think we'd better consider using appendStringLiteral or related function for\r\nthe dobj.name here to make sure the string conversion is safe.\r\n\r\n\r\n4.\r\n\r\nThe following commit message may need update:\r\n\"binary_upgrade_create_sub_rel_state SQL function, and also provides an\r\nadditional LSN parameter for CREATE SUBSCRIPTION to restore the underlying\r\nreplication origin remote LSN. \"\r\n\r\nI think we have changed to another approach which doesn't provide new parameter\r\nin DDL.\r\n\r\n\r\n5. \r\n+\t/* Fetch the existing tuple. 
*/\r\n+\ttup = SearchSysCacheCopy2(SUBSCRIPTIONNAME, MyDatabaseId,\r\n+\t\t\t\t\t\t\t CStringGetDatum(subname));\r\n\r\nSince we don't modify the tuple here, SearchSysCache2 seems enough.\r\n\r\n\r\n6. \r\n+\t\t\t\t\t\t\t\t\t\"LEFT JOIN pg_catalog.pg_database d\"\r\n+\t\t\t\t\t\t\t\t\t\" ON d.oid = s.subdbid \"\r\n+\t\t\t\t\t\t\t\t\t\"WHERE coalesce(remote_lsn, '0/0') = '0/0'\");\r\n\r\nFor the subscriptions that were just created and finished the table sync but\r\nhaven't applied any changes, their remote_lsn will also be 0/0. Do we\r\nneed to report ERROR in this case ?\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Tue, 12 Sep 2023 13:22:50 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 05:19:27PM +0530, vignesh C wrote:\n> Added a function binary_upgrade_sub_replication_origin_advance which\n> will: a) check if the subscription exists, b) get the replication name\n> for subscription and c) advance the replication origin.\n> \n> These are handled as part of v7 posted at [1].\n> [1] - https://www.postgresql.org/message-id/CALDaNm1ZrbHaWpJwwNhDTJocRKWd3rEkgJazuDdZ9Z-WdvonFg%40mail.gmail.com\n\nThanks. I can see that some of the others have already provided\ncomments about this version. I have some comments on top of that.\n--\nMichael",
"msg_date": "Wed, 13 Sep 2023 15:37:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 01:22:50PM +0000, Zhijie Hou (Fujitsu) wrote:\n> +/*\n> + * binary_upgrade_sub_replication_origin_advance\n> + *\n> + * Update the remote_lsn for the subscriber's replication origin.\n> + */\n> +Datum\n> +binary_upgrade_sub_replication_origin_advance(PG_FUNCTION_ARGS)\n> +{\n> \n> Is there any usage apart from pg_upgrade for this function, if not, I think\n> we'd better move this function to pg_upgrade_support.c. If yes, I think maybe\n> better to rename it to a general one.\n\nI was equally surprised by the choice of the patch regarding the\nlocation of these functions, so I agree with your point that these\nfunctions should be in pg_upgrade_support.c. All the sub-routines\nthese two functions rely on are defined in some headers already, so\nthere seem to be nothing new required for pg_upgrade_support.c.\n--\nMichael",
"msg_date": "Wed, 13 Sep 2023 16:18:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, 12 Sept 2023 at 14:25, Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Vignesh,\n>\n> Thank you for updating the patch! Here are some comments.\n>\n> Sorry if there are duplicate comments - the thread revived recently so I might\n> lose my memory.\n>\n> 01. General\n>\n> Is there a possibility that apply worker on old cluster connects to the\n> publisher during the upgrade? Regarding the pg_upgrade on publisher, the we\n> refuse TCP/IP connections from remotes and port number is also changed, so we can\n> assume that subscriber does not connect to. But IIUC such settings may not affect\n> to the connection source, so that the apply worker may try to connect to the\n> publisher. Also, is there any hazards if it happens?\n\nYes, there is a possibility that the apply worker gets started and new\ntransaction data is being synced from the publisher. I have made a fix\nnot to start the launcher process in binary upgrade mode as we don't\nwant the launcher to start apply worker during upgrade.\n\n> 02. Upgrade functions\n>\n> Two functions - binary_upgrade_create_sub_rel_state and binary_upgrade_sub_replication_origin_advance\n> should be located at pg_upgrade_support.c. Also, CHECK_IS_BINARY_UPGRADE() macro\n> can be used.\n\nModified\n\n> 03. Parameter combinations\n>\n> IIUC getSubscriptionTables() should be exitted quickly if --no-subscriptions is\n> specified, whereas binary_upgrade_create_sub_rel_state() is failed.\n\nModified\n\n>\n> 04. 
I failed my test\n>\n> I executed attached script but failed to upgrade:\n>\n> ```\n> Restoring database schemas in the new cluster\n> postgres\n> *failure*\n>\n> Consult the last few lines of \"data_N3/pg_upgrade_output.d/20230912T054546.320/log/pg_upgrade_dump_5.log\" for\n> the probable cause of the failure.\n> Failure, exiting\n> ```\n>\n> I checked the log and found that binary_upgrade_create_sub_rel_state() does not\n> support skipping the fourth argument:\n>\n> ```\n> pg_restore: from TOC entry 4059; 16384 16387 SUBSCRIPTION TABLE sub sub postgres\n> pg_restore: error: could not execute query: ERROR: function binary_upgrade_create_sub_rel_state(unknown, integer, unknown) does not exist\n> LINE 1: SELECT binary_upgrade_create_sub_rel_state('sub', 16384, 'r'...\n> ^\n> HINT: No function matches the given name and argument types. You might need to add explicit type casts.\n> Command was: SELECT binary_upgrade_create_sub_rel_state('sub', 16384, 'r');\n> ```\n>\n> IIUC if we allow to skip arguments, we must define wrappers like pg_copy_logical_replication_slot_*.\n> Another approach is that pg_dump always dumps srsublsn even if it is NULL.\nModified\n\nThe attached v8 version patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Fri, 15 Sep 2023 15:08:21 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, 12 Sept 2023 at 18:52, Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, September 11, 2023 6:32 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> >\n> > The attached v7 patch has the changes for the same.\n>\n> Thanks for updating the patch, here are few comments:\n>\n>\n> 1.\n>\n> +/*\n> + * binary_upgrade_sub_replication_origin_advance\n> + *\n> + * Update the remote_lsn for the subscriber's replication origin.\n> + */\n> +Datum\n> +binary_upgrade_sub_replication_origin_advance(PG_FUNCTION_ARGS)\n> +{\n>\n> Is there any usage apart from pg_upgrade for this function, if not, I think\n> we'd better move this function to pg_upgrade_support.c. If yes, I think maybe\n> better to rename it to a general one.\n\nMoved to pg_upgrade_support.c and renamed to binary_upgrade_replorigin_advance\n\n> 2.\n>\n> + * Verify that all subscriptions have a valid remote_lsn and don't contain\n> + * any table in srsubstate different than ready ('r').\n> + */\n> +static void\n> +check_for_subscription_state(ClusterInfo *cluster)\n>\n> I think we'd better follow the same style of\n> check_for_isn_and_int8_passing_mismatch() to record the invalid things in a\n> file.\n\nModified\n\n>\n> 3.\n>\n> + if (fout->dopt->binary_upgrade && fout->remoteVersion >= 100000)\n> + {\n> + appendPQExpBuffer(query,\n> + \"SELECT binary_upgrade_create_sub_rel_state('%s', %u, '%c'\",\n> + subrinfo->dobj.name,\n>\n> I think we'd better consider using appendStringLiteral or related function for\n> the dobj.name here to make sure the string convertion is safe.\n>\n\nModified\n\n> 4.\n>\n> The following commit message may need update:\n> \"binary_upgrade_create_sub_rel_state SQL function, and also provides an\n> additional LSN parameter for CREATE SUBSCRIPTION to restore the underlying\n> replication origin remote LSN. 
\"\n>\n> I think we have changed to another approach which doesn't provide new parameter\n> in DDL.\n\nModified\n\n>\n> 5.\n> + /* Fetch the existing tuple. */\n> + tup = SearchSysCacheCopy2(SUBSCRIPTIONNAME, MyDatabaseId,\n> + CStringGetDatum(subname));\n>\n> Since we don't modify the tuple here, SearchSysCache2 seems enough.\n>\n>\n> 6.\n> + \"LEFT JOIN pg_catalog.pg_database d\"\n> + \" ON d.oid = s.subdbid \"\n> + \"WHERE coalesce(remote_lsn, '0/0') = '0/0'\");\n>\n> For the subscriptions that were just created and finished the table sync but\n> haven't applied any changes, their remote_lsn will also be 0/0. Do we\n> need to report ERROR in this case ?\nI will handle this in the next version.\n\nThanks for the comments, the v8 patch attached at [1] has the changes\nfor the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm1JzqTreCUrhNu5E1gq7Q8r_u3%2BFrisyT7moOED%3DUdoCg%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 15 Sep 2023 15:12:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, 15 Sept 2023 at 15:08, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, 12 Sept 2023 at 14:25, Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Dear Vignesh,\n> >\n> > Thank you for updating the patch! Here are some comments.\n> >\n> > Sorry if there are duplicate comments - the thread revived recently so I might\n> > lose my memory.\n> >\n> > 01. General\n> >\n> > Is there a possibility that apply worker on old cluster connects to the\n> > publisher during the upgrade? Regarding the pg_upgrade on publisher, the we\n> > refuse TCP/IP connections from remotes and port number is also changed, so we can\n> > assume that subscriber does not connect to. But IIUC such settings may not affect\n> > to the connection source, so that the apply worker may try to connect to the\n> > publisher. Also, is there any hazards if it happens?\n>\n> Yes, there is a possibility that the apply worker gets started and new\n> transaction data is being synced from the publisher. I have made a fix\n> not to start the launcher process in binary ugprade mode as we don't\n> want the launcher to start apply worker during upgrade.\n\nAnother approach to solve this, as suggested by my colleague\nHou-san, would be to set max_logical_replication_workers = 0 while\nupgrading. I will evaluate this and update the next version of the patch\naccordingly.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 15 Sep 2023 16:51:57 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 04:51:57PM +0530, vignesh C wrote:\n> Another approach to solve this as suggested by one of my colleague\n> Hou-san would be to set max_logical_replication_workers = 0 while\n> upgrading. I will evaluate this and update the next version of patch\n> accordingly.\n\nIn the context of an upgrade, any node started is isolated with its\nown port and a custom unix domain directory with connections allowed\nonly through this one.\n\nSaying that, I don't see why forcing max_logical_replication_workers\nto be 0 would be necessarily a bad thing to prevent unnecessary\nactivity on the backend. This should be a separate patch built on\ntop of the main one, IMO.\n\nLooking forward to seeing the rebased version you've mentioned, btw ;)\n--\nMichael",
"msg_date": "Tue, 19 Sep 2023 15:19:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, 19 Sept 2023 at 11:49, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 15, 2023 at 04:51:57PM +0530, vignesh C wrote:\n> > Another approach to solve this as suggested by one of my colleague\n> > Hou-san would be to set max_logical_replication_workers = 0 while\n> > upgrading. I will evaluate this and update the next version of patch\n> > accordingly.\n>\n> In the context of an upgrade, any node started is isolated with its\n> own port and a custom unix domain directory with connections allowed\n> only through this one.\n>\n> Saying that, I don't see why forcing max_logical_replication_workers\n> to be 0 would be necessarily a bad thing to prevent unnecessary\n> activity on the backend. This should be a separate patch built on\n> top of the main one, IMO.\n\nHere is a patch to set max_logical_replication_workers as 0 while the\nserver is started to prevent the launcher from being started. Since\nthis configuration is present from v10, no need for any version check.\nI have done upgrade tests for v10-master, v11-master, ... v16-master\nand found it to be working fine.\n\nRegards,\nVignesh",
"msg_date": "Tue, 19 Sep 2023 19:14:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 07:14:49PM +0530, vignesh C wrote:\n> Here is a patch to set max_logical_replication_workers as 0 while the\n> server is started to prevent the launcher from being started. Since\n> this configuration is present from v10, no need for any version check.\n> I have done upgrade tests for v10-master, v11-master, ... v16-master\n> and found it to be working fine.\n\nThe project policy is to support pg_upgrade for 10 years, and 9.6 was\nreleased in 2016:\nhttps://www.postgresql.org/docs/9.6/release-9-6.html\n\n> snprintf(cmd, sizeof(cmd),\n> - \"\\\"%s/pg_ctl\\\" -w -l \\\"%s/%s\\\" -D \\\"%s\\\" -o \\\"-p %d -b%s %s%s\\\" start\",\n> + \"\\\"%s/pg_ctl\\\" -w -l \\\"%s/%s\\\" -D \\\"%s\\\" -o \\\"-p %d -b%s %s%s%s\\\" start\",\n> cluster->bindir,\n> log_opts.logdir,\n> SERVER_LOG_FILE, cluster->pgconfig, cluster->port,\n> (cluster == &new_cluster) ?\n> \" -c synchronous_commit=off -c fsync=off -c full_page_writes=off\" : \"\",\n> + \" -c max_logical_replication_workers=0\",\n> cluster->pgopts ? cluster->pgopts : \"\", socket_string);\n> \n> /*\n\nAnd this code path is used to start postmaster instances for old and\nnew clusters. So it seems to me that it is incorrect if this is not\nconditional based on the cluster version.\n--\nMichael",
"msg_date": "Wed, 20 Sep 2023 09:38:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 3:08 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> The attached v8 version patch has the changes for the same.\n>\n\nIs the check to ensure remote_lsn is valid correct in function\ncheck_for_subscription_state()? How about the case where the apply\nworker didn't receive any change but just marked the relation as\n'ready'?\n\nAlso, the patch seems to be allowing subscription relations from PG\n>=10 to be migrated but how will that work if the corresponding\npublisher is also upgraded without slots? Won't the corresponding\nworkers start failing as soon as you restart the upgrade server? Do we\nneed to document the steps for users?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 20 Sep 2023 16:54:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 04:54:36PM +0530, Amit Kapila wrote:\n> Also, the patch seems to be allowing subscription relations from PG\n> >=10 to be migrated but how will that work if the corresponding\n> publisher is also upgraded without slots? Won't the corresponding\n> workers start failing as soon as you restart the upgrade server? Do we\n> need to document the steps for users?\n\nHmm? How is that related to the upgrade of the subscribers? And how\nis that different from the case where a subscriber tries to connect\nback to a publisher where a slot has been dropped? There is no need\nof pg_upgrade to reach such a state:\nERROR: could not start WAL streaming: ERROR: replication slot \"popo\" does not exist\n--\nMichael",
"msg_date": "Thu, 21 Sep 2023 08:08:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 03:08:21PM +0530, vignesh C wrote:\n> On Tue, 12 Sept 2023 at 14:25, Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n>> Is there a possibility that apply worker on old cluster connects to the\n>> publisher during the upgrade? Regarding the pg_upgrade on publisher, the we\n>> refuse TCP/IP connections from remotes and port number is also changed, so we can\n>> assume that subscriber does not connect to. But IIUC such settings may not affect\n>> to the connection source, so that the apply worker may try to connect to the\n>> publisher. Also, is there any hazards if it happens?\n> \n> Yes, there is a possibility that the apply worker gets started and new\n> transaction data is being synced from the publisher. I have made a fix\n> not to start the launcher process in binary ugprade mode as we don't\n> want the launcher to start apply worker during upgrade.\n\nHmm. I was wondering if 0001 is the right way to handle this case,\nbut at the end I'm OK to paint one extra isBinaryUpgrade in the code\npath where apply launchers are registered. I don't think that the\npatch is complete, though. A comment should be added in pg_upgrade's\nserver.c, exactly start_postmaster(), to tell that -b also stops apply\nworkers. I am attaching a version updated as of the attached, that\nI'd be OK to apply.\n\nI don't really think that we need to worry about a subscriber\nconnecting back to a publisher in this case, though? I mean, each\npostmaster instance started by pg_upgrade restricts the access to the\ninstance with unix_socket_directories set to a custom path and\npermissions at 0700, and a subscription's connection string does not\nknow the unix path used by pg_upgrade. I certainly agree that\nstopping these processes could lead to inconsistencies in the data the\nsubscribers have been holding though, if we are not careful, so\npreventing them from running is a good practice anyway.\n\nI have also reviewed 0002. 
As a whole, I think that I'm OK with the\nmain approach of the patch in pg_dump to use a new type of dumpable\nobject for subscription relations that are dumped with their upgrade\nfunctions after. This still needs more work, and more documentation.\nAlso, perhaps we should really have an option to control if this part\nof the copy happens or not. With a --no-subscription-relations for\npg_dump at least?\n\n+{ oid => '4551', descr => 'add a relation with the specified relation state to pg_subscription_rel table', \n\nDuring a development cycle, any new function added needs to use an OID\nin range 8000-9999. Running unused_oids will suggest new random OIDs.\n\nFWIW, I am not convinced that there is a need for two functions to add\nan entry to pg_subscription_rel, with sole difference between both the\nhandling of a valid or invalid LSN. We should have only one function\nthat's able to handle NULL for the LSN. So let's remove rel_state_a\nand rel_state_b, and have a single rel_state(). The description of\nthe SQL functions is inconsistent with the other binary upgrade ones,\nI would suggest for the two functions:\n\"for use by pg_upgrade (relation for pg_subscription_rel)\"\n\"for use by pg_upgrade (remote_lsn for origin)\"\n\n+ i_srsublsn = PQfnumber(res, \"srsublsn\");\n[...]\n+ subrinfo[cur_rel].srsublsn = pg_strdup(PQgetvalue(res, i, i_srsublsn));\n\nIn getSubscriptionTables(), this should check for PQgetisnull()\nbecause we would have a NULL value for InvalidXLogRecPtr in the\ncatalog. Using a char* for srsublsn is OK, but just assign NULL to\nit, then just pass a hardcoded NULL value to the function as we do in\nother places. 
So I don't quite get why this is not the same handling\nas suboriginremotelsn.\n\ngetSubscriptionTables() is entirely skipped if we don't want any\nsubscriptions, if we deal with a server of 9.6 or older or if we don't\ndo binary upgrades, which is OK.\n\n+/*\n+ * getSubscriptionTables\n+ *\t get information about subscription membership for dumpable tables.\n+ */\nThis commit is slightly misleading and should mention that this is an\nupgrade-only path?\n\nThe code for dumpSubscriptionTable() is a copy-paste of\ndumpPublicationTable(), but a lot of what you are doing here is\nactually pointless if we are not in binary mode? Why should this code\npath not taken only under dataOnly? I mean, this is a code path we\nshould never take except if we are in binary mode. This should have\nat least a cross-check to make sure that we never have a\nDO_SUBSCRIPTION_REL in this code path if we are in non-binary mode.\n\n+ if (dopt->binary_upgrade && subinfo->suboriginremotelsn)\n+ {\n+ appendPQExpBufferStr(query,\n+ \"SELECT pg_catalog.binary_upgrade_replorigin_advance(\");\n+ appendStringLiteralAH(query, subinfo->dobj.name, fout);\n+ appendPQExpBuffer(query, \", '%s');\\n\", subinfo->suboriginremotelsn);\n+ }\n\nHmm.. Could it be actually useful even for debugging to still have\nthis query if suboriginremotelsn is an InvalidXLogRecPtr? I think\nthat this should have a comment of the kind \"\\n-- For binary upgrade,\nblah\". At least it would not be a bad thing to enforce a correct\nstate from the start, removing the NULL check for the second argument\nin binary_upgrade_replorigin_advance().\n\n+ /* We need to check for pg_replication_origin_status only once. */\nPerhaps it would be better to explain why?\n\n+ \"WHERE coalesce(remote_lsn, '0/0') = '0/0'\"\nWhy a COALESCE here? 
Cannot this stuff just use NULL?\n\n+ fprintf(script, \"database:%s subscription:%s relation:%s in non-ready state\\n\",\nCould it be possible to include the schema of the relation in this log?\n\n+static void check_for_subscription_state(ClusterInfo *cluster);\nI'd be tempted to move that into a patch on its own, actually, for a\ncleaner history.\n\n+# Copyright (c) 2022-2023, PostgreSQL Global Development Group\nNew as of 2023.\n\n+# Check that after upgradation of the subscriber server, the incremental\n+# changes added to the publisher are replicated.\n[..]\n+ For upgradation of the subscriptions, all the subscriptions on the old\n+ cluster must have a valid <varname>remote_lsn</varname>, and all the\n\nUpgradation? I think that this should be reworded:\n\"All the subscriptions of an old cluster require a valid remote_lsn\nduring an upgrade.\"\n\nA CI run is reporting the following compilation warnings:\n[04:21:15.290] pg_dump.c: In function ‘getSubscriptionTables’:\n[04:21:15.290] pg_dump.c:4655:29: error: ‘subinfo’ may be used\nuninitialized in this function [-Werror=maybe-uninitialized]\n[04:21:15.290] 4655 | subrinfo[cur_rel].subinfo = subinfo; \n\n+ok(-d $new_sub->data_dir . \"/pg_upgrade_output.d\",\n+\t\"pg_upgrade_output.d/ not removed after pg_upgrade failure\");\nNot sure that there's a need for this check. Okay, that's cheap.\n\nAnd, err. We are going to need an option to control if the slot data\nis copied, and a bit more documentation in pg_upgrade to explain how\nthings happen when the copy happens.\n--\nMichael",
"msg_date": "Thu, 21 Sep 2023 14:57:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 04:54:36PM +0530, Amit Kapila wrote:\n> Is the check to ensure remote_lsn is valid correct in function\n> check_for_subscription_state()? How about the case where the apply\n> worker didn't receive any change but just marked the relation as\n> 'ready'?\n\nI may be missing, of course, but a relation is switched to\nSUBREL_STATE_READY only once a sync happened and its state was\nSUBREL_STATE_SYNCDONE, implying that SubscriptionRelState->lsn is\nnever InvalidXLogRecPtr, no?\n\nFor instance, nothing happens when a\nAssert(!XLogRecPtrIsInvalid(rstate->lsn)) is added in\nprocess_syncing_tables_for_apply().\n--\nMichael",
"msg_date": "Thu, 21 Sep 2023 15:07:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 11:37 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Sep 20, 2023 at 04:54:36PM +0530, Amit Kapila wrote:\n> > Is the check to ensure remote_lsn is valid correct in function\n> > check_for_subscription_state()? How about the case where the apply\n> > worker didn't receive any change but just marked the relation as\n> > 'ready'?\n>\n> I may be missing, of course, but a relation is switched to\n> SUBREL_STATE_READY only once a sync happened and its state was\n> SUBREL_STATE_SYNCDONE, implying that SubscriptionRelState->lsn is\n> never InvalidXLogRecPtr, no?\n>\n\nThe check in the patch is about the logical replication worker's\norigin's LSN. The value of SubscriptionRelState->lsn won't matter for\nthe check.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Sep 2023 14:31:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 4:39 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Sep 20, 2023 at 04:54:36PM +0530, Amit Kapila wrote:\n> > Also, the patch seems to be allowing subscription relations from PG\n> > >=10 to be migrated but how will that work if the corresponding\n> > publisher is also upgraded without slots? Won't the corresponding\n> > workers start failing as soon as you restart the upgrade server? Do we\n> > need to document the steps for users?\n>\n> Hmm? How is that related to the upgrade of the subscribers?\n>\n\nIt is because after upgrade of both publisher and subscriber, the\nsubscriptions won't work. Both publisher and subscriber should work,\notherwise, the logical replication set up won't work. I think we can\nprobably do this, if we can document clearly how the user can make\ntheir logical replication set up work after upgrade.\n\n>\n> And how\n> is that different from the case where a subscriber tries to connect\n> back to a publisher where a slot has been dropped?\n>\n\nIt is different because we don't drop slots automatically anywhere else.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Sep 2023 14:35:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 02:35:55PM +0530, Amit Kapila wrote:\n> It is because after upgrade of both publisher and subscriber, the\n> subscriptions won't work. Both publisher and subscriber should work,\n> otherwise, the logical replication set up won't work. I think we can\n> probably do this, if we can document clearly how the user can make\n> their logical replication set up work after upgrade.\n\nYeah, well, this comes back to my original point that the upgrade of\npublisher nodes and subscriber nodes should be treated as two\ndifferent problems or we're mixing apples and oranges (and a node\ncould have both subscriber and publishers). While being able to\nsupport both is a must, it is going to be a two-step process at the\nend, with the subscribers done first and the publishers done after.\nThat's also kind of the point that Julien makes in top message of this\nthread.\n\nI agree that docs are lacking in the proposed patch in terms of\nrestrictions, assumptions and process flow, but taken in isolation the\nproblem of the publishers is not something that this patch has to take\ncare of. I'd certainly agree that it should mention, at least and if\nmerged first, to be careful if upgrading the publishers as its slots\nare currently removed.\n--\nMichael",
"msg_date": "Fri, 22 Sep 2023 08:06:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 09:38:56AM +0900, Michael Paquier wrote:\n> And this code path is used to start postmaster instances for old and\n> new clusters. So it seems to me that it is incorrect if this is not\n> conditional based on the cluster version.\n\nAvoiding the startup of bgworkers during pg_upgrade is something that\nworries me a bit, actually, as it could be useful in some cases like\nmonitoring? That would be fancy, for sure.. For now and seeing a\nlack of consensus on this larger matter, I'd like to propose a check\nfor IsBinaryUpgrade into ApplyLauncherRegister() instead as it makes\nno real sense to start apply workers in this context. That would be\nequivalent to max_logical_replication_workers = 0.\n\nAmit, Vignesh, would the attached be OK for both of you?\n\n(Vignesh has posted a slightly different version of this patch on a\ndifferent thread, but the subscriber part should be part of this\nthread with the subscribers, I assume.) \n--\nMichael",
"msg_date": "Mon, 25 Sep 2023 10:57:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 4:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Sep 21, 2023 at 02:35:55PM +0530, Amit Kapila wrote:\n> > It is because after upgrade of both publisher and subscriber, the\n> > subscriptions won't work. Both publisher and subscriber should work,\n> > otherwise, the logical replication set up won't work. I think we can\n> > probably do this, if we can document clearly how the user can make\n> > their logical replication set up work after upgrade.\n>\n> Yeah, well, this comes back to my original point that the upgrade of\n> publisher nodes and subscriber nodes should be treated as two\n> different problems or we're mixing apples and oranges (and a node\n> could have both subscriber and publishers). While being able to\n> support both is a must, it is going to be a two-step process at the\n> end, with the subscribers done first and the publishers done after.\n> That's also kind of the point that Julien makes in top message of this\n> thread.\n>\n> I agree that docs are lacking in the proposed patch in terms of\n> restrictions, assumptions and process flow, but taken in isolation the\n> problem of the publishers is not something that this patch has to take\n> care of.\n>\n\nI also don't think that this patch has to solve the problem of\npublishers in any way but as per my understanding, if due to some\nreason we are not able to do the upgrade of publishers, this can add\nmore steps for users than they have to do now for logical replication\nset up after upgrade. This is because now after restoring the\nsubscription rel's and origin, as soon as we start replication after\ncreating the slots on the publisher, we will never be able to\nguarantee data consistency. So, they need to drop the entire\nsubscription setup including truncating the relations, and then set it\nup from scratch which also means they need to somehow remember or take\na dump of the current subscription setup. 
According to me, the key\npoint is to have a mechanism to set up slots correctly to allow\nreplication (or subscriptions) to work after the upgrade. Without\nthat, it appears to me that we are restoring a subscription where it\ncan start from some random LSN and can easily lead to data consistency\nissues where it can miss some of the updates.\n\nThis is the primary reason why I prioritized to work on the publisher\nside before getting this patch done, otherwise, the solution for this\npatch was relatively clear. I am not sure but I guess this could be\nthe reason why originally we left it in the current state, otherwise,\nrestoring subscription rel's or origin doesn't seem to be too much of\nan additional effort than what we are doing now.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 25 Sep 2023 10:05:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Michael,\n\n> I'd like to propose a check\n> for IsBinaryUpgrade into ApplyLauncherRegister() instead as it makes\n> no real sense to start apply workers in this context. That would be\n> equivalent to max_logical_replication_workers = 0.\n\nPersonally, I prefer to change max_logical_replication_workers. Mainly there are\ntwo reasons:\n\n1. Your approach must be back-patched to older versions which support logical\n replication feature, but the oldest one (PG10) has already been unsupported.\n We should not modify such a branch.\n2. Also, \"max_logical_replication_workers = 0\" approach would be consistent\n with what we are doing now and for upgrade of publisher patch.\n Please see the previous discussion [1].\n\n[1]: https://www.postgresql.org/message-id/CAA4eK1%2BWBphnmvMpjrxceymzuoMuyV2_pMGaJq-zNODiJqAa7Q%40mail.gmail.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Mon, 25 Sep 2023 05:35:18 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 05:35:18AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> Personally, I prefer to change max_logical_replication_workers. Mainly there are\n> two reasons:\n> \n> 1. Your approach must be back-patched to older versions which support logical\n> replication feature, but the oldest one (PG10) has already been unsupported.\n> We should not modify such a branch.\n\nThis suggestion would be only for HEAD as it changes the behavior of -b.\n\n> 2. Also, \"max_logical_replication_workers = 0\" approach would be consistent\n> with what we are doing now and for upgrade of publisher patch.\n> Please see the previous discussion [1].\n\nYeah, you're right. Consistency would be good across the board, and\nwe'd need to take care of the old clusters as well, so the GUC\nenforcement would be needed as well. It does not strike me that this\nextra IsBinaryUpgrade would hurt anyway? Forcing the hand of the\nbackend has the merit of allowing the removal of the tweak with\nmax_logical_replication_workers at some point in the future.\n--\nMichael",
"msg_date": "Mon, 25 Sep 2023 14:58:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 10:05:41AM +0530, Amit Kapila wrote:\n> I also don't think that this patch has to solve the problem of\n> publishers in any way but as per my understanding, if due to some\n> reason we are not able to do the upgrade of publishers, this can add\n> more steps for users than they have to do now for logical replication\n> set up after upgrade. This is because now after restoring the\n> subscription rel's and origin, as soon as we start replication after\n> creating the slots on the publisher, we will never be able to\n> guarantee data consistency. So, they need to drop the entire\n> subscription setup including truncating the relations, and then set it\n> up from scratch which also means they need to somehow remember or take\n> a dump of the current subscription setup. According to me, the key\n> point is to have a mechanism to set up slots correctly to allow\n> replication (or subscriptions) to work after the upgrade. Without\n> that, it appears to me that we are restoring a subscription where it\n> can start from some random LSN and can easily lead to data consistency\n> issues where it can miss some of the updates.\n\nSure, that's assuming that the publisher side is upgraded. FWIW, my\ntake is that there's room to move forward with this patch anyway in\nfavor of cases like rollover upgrades to the subscriber.\n\n> This is the primary reason why I prioritized to work on the publisher\n> side before getting this patch done, otherwise, the solution for this\n> patch was relatively clear. I am not sure but I guess this could be\n> the reason why originally we left it in the current state, otherwise,\n> restoring subscription rel's or origin doesn't seem to be too much of\n> an additional effort than what we are doing now.\n\nBy \"additional effort\", you are referring to what the patch is doing,\nwith the binary dump of pg_subscription_rel, right?\n--\nMichael",
"msg_date": "Mon, 25 Sep 2023 15:13:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 11:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Sep 25, 2023 at 10:05:41AM +0530, Amit Kapila wrote:\n> > I also don't think that this patch has to solve the problem of\n> > publishers in any way but as per my understanding, if due to some\n> > reason we are not able to do the upgrade of publishers, this can add\n> > more steps for users than they have to do now for logical replication\n> > set up after upgrade. This is because now after restoring the\n> > subscription rel's and origin, as soon as we start replication after\n> > creating the slots on the publisher, we will never be able to\n> > guarantee data consistency. So, they need to drop the entire\n> > subscription setup including truncating the relations, and then set it\n> > up from scratch which also means they need to somehow remember or take\n> > a dump of the current subscription setup. According to me, the key\n> > point is to have a mechanism to set up slots correctly to allow\n> > replication (or subscriptions) to work after the upgrade. Without\n> > that, it appears to me that we are restoring a subscription where it\n> > can start from some random LSN and can easily lead to data consistency\n> > issues where it can miss some of the updates.\n>\n> Sure, that's assuming that the publisher side is upgraded.\n>\n\nAt some point, user needs to upgrade publisher and subscriber could\nitself have some publications defined which means the downstream\nsubscribers will have the same problem.\n\n> FWIW, my\n> take is that there's room to move forward with this patch anyway in\n> favor of cases like rollover upgrades to the subscriber.\n>\n> > This is the primary reason why I prioritized to work on the publisher\n> > side before getting this patch done, otherwise, the solution for this\n> > patch was relatively clear. 
I am not sure but I guess this could be\n> > the reason why originally we left it in the current state, otherwise,\n> > restoring subscription rel's or origin doesn't seem to be too much of\n> > an additional effort than what we are doing now.\n>\n> By \"additional effort\", you are referring to what the patch is doing,\n> with the binary dump of pg_subscription_rel, right?\n>\n\nYes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 26 Sep 2023 09:40:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, 20 Sept 2023 at 16:54, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 15, 2023 at 3:08 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > The attached v8 version patch has the changes for the same.\n> >\n>\n> Is the check to ensure remote_lsn is valid correct in function\n> check_for_subscription_state()? How about the case where the apply\n> worker didn't receive any change but just marked the relation as\n> 'ready'?\n\nI agree that remote_lsn will not be valid in the case when all the\ntables are in ready state and there are no changes to be sent by the\nwalsender to the worker. I was not sure if this check is required in\nthis case in the check_for_subscription_state function. I was thinking\nthat this check could be removed.\nI'm also checking why the tables should only be in ready state, the\ncheck that is there in the same function, can we support upgrades when\nthe tables are in syncdone state or not. I will post my analysis once\nI have finished checking on the same.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 26 Sep 2023 10:58:12 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Michael,\n\n> > 1. Your approach must be back-patched to older versions which support logical\n> > replication feature, but the oldest one (PG10) has already been\n> unsupported.\n> > We should not modify such a branch.\n> \n> This suggestion would be only for HEAD as it changes the behavior of -b.\n> \n> > 2. Also, \"max_logical_replication_workers = 0\" approach would be consistent\n> > with what we are doing now and for upgrade of publisher patch.\n> > Please see the previous discussion [1].\n> \n> Yeah, you're right. Consistency would be good across the board, and\n> we'd need to take care of the old clusters as well, so the GUC\n> enforcement would be needed as well. It does not strike me that this\n> extra IsBinaryUpgrade would hurt anyway? Forcing the hand of the\n> backend has the merit of allowing the removal of the tweak with\n> max_logical_replication_workers at some point in the future.\n\nHmm, our initial motivation is to suppress registering the launcher, and adding\na GUC setting is sufficient for it. Indeed, registering a launcher may be harmful,\nbut it seems not the goal of this thread (changing -b workflow in HEAD is not\nsufficient alone for the issue). I'm not sure it should be included in patch sets\nhere.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Tue, 26 Sep 2023 05:46:55 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Sep 26, 2023 at 09:40:48AM +0530, Amit Kapila wrote:\n> On Mon, Sep 25, 2023 at 11:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Sure, that's assuming that the publisher side is upgraded.\n> \n> At some point, user needs to upgrade publisher and subscriber could\n> itself have some publications defined which means the downstream\n> subscribers will have the same problem.\n\nNot always. I take it as a valid case that one may want to create a\nlogical setup only for the sake of an upgrade, and trashes the\npublisher after a failover to an upgraded subscriber node after the\nlatter has done a sync up of the data that's been added to the\nrelations tracked by the publications while the subscriber was\npg_upgrade'd.\n\n>>> This is the primary reason why I prioritized to work on the publisher\n>>> side before getting this patch done, otherwise, the solution for this\n>>> patch was relatively clear. I am not sure but I guess this could be\n>>> the reason why originally we left it in the current state, otherwise,\n>>> restoring subscription rel's or origin doesn't seem to be too much of\n>>> an additional effort than what we are doing now.\n>>\n>> By \"additional effort\", you are referring to what the patch is doing,\n>> with the binary dump of pg_subscription_rel, right?\n>>\n> \n> Yes.\n\nOkay. I'd like to move on with this stuff, then. At least it helps\nin maintaining data integrity when doing an upgrade with a logical\nsetup. The patch still needs more polishing, though..\n--\nMichael",
"msg_date": "Wed, 27 Sep 2023 12:44:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, 26 Sept 2023 at 10:58, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, 20 Sept 2023 at 16:54, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Sep 15, 2023 at 3:08 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > The attached v8 version patch has the changes for the same.\n> > >\n> >\n> > Is the check to ensure remote_lsn is valid correct in function\n> > check_for_subscription_state()? How about the case where the apply\n> > worker didn't receive any change but just marked the relation as\n> > 'ready'?\n>\n> I agree that remote_lsn will not be valid in the case when all the\n> tables are in ready state and there are no changes to be sent by the\n> walsender to the worker. I was not sure if this check is required in\n> this case in the check_for_subscription_state function. I was thinking\n> that this check could be removed.\n> I'm also checking why the tables should only be in ready state, the\n> check that is there in the same function, can we support upgrades when\n> the tables are in syncdone state or not. I will post my analysis once\n> I have finished checking on the same.\n\nOnce the table is in SUBREL_STATE_SYNCDONE state, the apply worker\nwill check if the apply worker has some LSN records that need to be\napplied to reach the LSN of the table. Once the required WAL is\napplied, the table state will be changed from SUBREL_STATE_SYNCDONE to\nSUBREL_STATE_READY state. Since there is a chance that in this case\nthe apply worker has to apply some transactions to get all the tables\nin READY state, I felt the minimum requirement should be that at least\nall the tables should be in READY state for the upgradation of the\nsubscriber.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 27 Sep 2023 15:36:52 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 3:37 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, 26 Sept 2023 at 10:58, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Wed, 20 Sept 2023 at 16:54, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Sep 15, 2023 at 3:08 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > The attached v8 version patch has the changes for the same.\n> > > >\n> > >\n> > > Is the check to ensure remote_lsn is valid correct in function\n> > > check_for_subscription_state()? How about the case where the apply\n> > > worker didn't receive any change but just marked the relation as\n> > > 'ready'?\n> >\n> > I agree that remote_lsn will not be valid in the case when all the\n> > tables are in ready state and there are no changes to be sent by the\n> > walsender to the worker. I was not sure if this check is required in\n> > this case in the check_for_subscription_state function. I was thinking\n> > that this check could be removed.\n> > I'm also checking why the tables should only be in ready state, the\n> > check that is there in the same function, can we support upgrades when\n> > the tables are in syncdone state or not. I will post my analysis once\n> > I have finished checking on the same.\n>\n> Once the table is in SUBREL_STATE_SYNCDONE state, the apply worker\n> will check if the apply worker has some LSN records that need to be\n> applied to reach the LSN of the table. Once the required WAL is\n> applied, the table state will be changed from SUBREL_STATE_SYNCDONE to\n> SUBREL_STATE_READY state. 
Since there is a chance that in this case\n> the apply worker has to apply some transactions to get all the tables\n> in READY state, I felt the minimum requirement should be that at least\n> all the tables should be in READY state for the upgradation of the\n> subscriber.\n>\n\nI don't think this theory is completely correct because the pending\nWAL can be applied even after an upgrade.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Sep 2023 19:31:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 25 Sept 2023 at 10:05, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 22, 2023 at 4:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Sep 21, 2023 at 02:35:55PM +0530, Amit Kapila wrote:\n> > > It is because after upgrade of both publisher and subscriber, the\n> > > subscriptions won't work. Both publisher and subscriber should work,\n> > > otherwise, the logical replication set up won't work. I think we can\n> > > probably do this, if we can document clearly how the user can make\n> > > their logical replication set up work after upgrade.\n> >\n> > Yeah, well, this comes back to my original point that the upgrade of\n> > publisher nodes and subscriber nodes should be treated as two\n> > different problems or we're mixing apples and oranges (and a node\n> > could have both subscriber and publishers). While being able to\n> > support both is a must, it is going to be a two-step process at the\n> > end, with the subscribers done first and the publishers done after.\n> > That's also kind of the point that Julien makes in top message of this\n> > thread.\n> >\n> > I agree that docs are lacking in the proposed patch in terms of\n> > restrictions, assumptions and process flow, but taken in isolation the\n> > problem of the publishers is not something that this patch has to take\n> > care of.\n> >\n>\n> I also don't think that this patch has to solve the problem of\n> publishers in any way but as per my understanding, if due to some\n> reason we are not able to do the upgrade of publishers, this can add\n> more steps for users than they have to do now for logical replication\n> set up after upgrade. This is because now after restoring the\n> subscription rel's and origin, as soon as we start replication after\n> creating the slots on the publisher, we will never be able to\n> guarantee data consistency. 
So, they need to drop the entire\n> subscription setup including truncating the relations, and then set it\n> up from scratch which also means they need to somehow remember or take\n> a dump of the current subscription setup. According to me, the key\n> point is to have a mechanism to set up slots correctly to allow\n> replication (or subscriptions) to work after the upgrade. Without\n> that, it appears to me that we are restoring a subscription where it\n> can start from some random LSN and can easily lead to data consistency\n> issues where it can miss some of the updates.\n>\n> This is the primary reason why I prioritized to work on the publisher\n> side before getting this patch done, otherwise, the solution for this\n> patch was relatively clear. I am not sure but I guess this could be\n> the reason why originally we left it in the current state, otherwise,\n> restoring subscription rel's or origin doesn't seem to be too much of\n> an additional effort than what we are doing now.\n\nI have tried to analyze the steps for upgrading the subscriber with\nHEAD and with the upgrade patches. Here are the steps:\nCurrent steps to upgrade subscriber in HEAD:\n1) Upgrade the subscriber server\n2) Start subscriber server\n3) Truncate the tables\n4) Alter the subscriptions to point to new slots in the subscriber\n5) Enable the subscriptions\n6) Alter subscription to refresh the publications\n\nSteps to upgrade if we commit only the subscriber upgrade patch:\n1) Upgrade the subscriber server\n2) Start subscriber server\n3) Truncate the tables\nNote: We will have to drop the subscriptions as we have made changes\nto the pg_subscription_rel\n4) But drop subscription will throw error:\npostgres=# DROP SUBSCRIPTION test1 cascade;\nERROR: could not drop replication slot \"test1\" on publisher: ERROR:\nreplication slot \"test1\" does not exist\n5) Alter the subscription to set slot_name to none\n6) Make a note of all the subscriptions that are present\n7) Drop the 
subscriptions\n8) Create the subscriptions\n\nThe number of steps will increase in this case.\n\nSteps to upgrade if we commit the publisher upgrade patch first and then\nthe subscriber upgrade patch:\n1) Upgrade the subscriber server\n2) Start subscriber server\n3) Enable the subscription\n4) Alter subscription to refresh the publications\n\nBased on the above, I also feel it is better to get the upgrade\npublisher patch committed first, as a) it will reduce the data copying\ntime (as truncate is not required) b) the number of steps will reduce\nc) all the use cases will be handled.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 28 Sep 2023 10:17:35 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 07:31:41PM +0530, Amit Kapila wrote:\n> On Wed, Sep 27, 2023 at 3:37 PM vignesh C <vignesh21@gmail.com> wrote:\n>> Once the table is in SUBREL_STATE_SYNCDONE state, the apply worker\n>> will check if the apply worker has some LSN records that need to be\n>> applied to reach the LSN of the table. Once the required WAL is\n>> applied, the table state will be changed from SUBREL_STATE_SYNCDONE to\n>> SUBREL_STATE_READY state. Since there is a chance that in this case\n>> the apply worker has to apply some transactions to get all the tables\n>> in READY state, I felt the minimum requirement should be that at least\n>> all the tables should be in READY state for the upgradation of the\n>> Subscriber.\n> \n> I don't think this theory is completely correct because the pending\n> WAL can be applied even after an upgrade.\n\nYeah, agreed that putting a pre-check about the state of the relations\nstored in pg_subscription_rel when handling the upgrade of a\nsubscriber is not necessary.\n--\nMichael",
"msg_date": "Fri, 29 Sep 2023 09:33:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 9:14 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Sep 26, 2023 at 09:40:48AM +0530, Amit Kapila wrote:\n> > On Mon, Sep 25, 2023 at 11:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> Sure, that's assuming that the publisher side is upgraded.\n> >\n> > At some point, user needs to upgrade publisher and subscriber could\n> > itself have some publications defined which means the downstream\n> > subscribers will have the same problem.\n>\n> Not always. I take it as a valid case that one may want to create a\n> logical setup only for the sake of an upgrade, and trashes the\n> publisher after a failover to an upgraded subscriber node after the\n> latter has done a sync up of the data that's been added to the\n> relations tracked by the publications while the subscriber was\n> pg_upgrade'd.\n>\n\nSuch a use case is possible to achieve even without this patch.\nSawada-San has already given an alternative to slightly tweak the\nsteps mentioned by Julien to achieve it. Also, there are other ways to\nachieve it by slightly changing the steps. OTOH, it will create a\nproblem for normal logical replication set up after upgrade as\ndiscused.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 29 Sep 2023 17:32:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Sep 29, 2023 at 05:32:52PM +0530, Amit Kapila wrote:\n> Such a use case is possible to achieve even without this patch.\n> Sawada-San has already given an alternative to slightly tweak the\n> steps mentioned by Julien to achieve it. Also, there are other ways to\n> achieve it by slightly changing the steps. OTOH, it will create a\n> problem for normal logical replication set up after upgrade as\n> discused.\n\nSo, now that 29d0a77fa6 has been applied to the tree, would it be time\nto brush up what's been discussed on this thread for subscribers? I'm\nOK to spend time on it.\n--\nMichael",
"msg_date": "Thu, 26 Oct 2023 16:39:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, 21 Sept 2023 at 11:27, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 15, 2023 at 03:08:21PM +0530, vignesh C wrote:\n> > On Tue, 12 Sept 2023 at 14:25, Hayato Kuroda (Fujitsu)\n> > <kuroda.hayato@fujitsu.com> wrote:\n> >> Is there a possibility that apply worker on old cluster connects to the\n> >> publisher during the upgrade? Regarding the pg_upgrade on publisher, the we\n> >> refuse TCP/IP connections from remotes and port number is also changed, so we can\n> >> assume that subscriber does not connect to. But IIUC such settings may not affect\n> >> to the connection source, so that the apply worker may try to connect to the\n> >> publisher. Also, is there any hazards if it happens?\n> >\n> > Yes, there is a possibility that the apply worker gets started and new\n> > transaction data is being synced from the publisher. I have made a fix\n> > not to start the launcher process in binary ugprade mode as we don't\n> > want the launcher to start apply worker during upgrade.\n>\n> Hmm. I was wondering if 0001 is the right way to handle this case,\n> but at the end I'm OK to paint one extra isBinaryUpgrade in the code\n> path where apply launchers are registered. I don't think that the\n> patch is complete, though. A comment should be added in pg_upgrade's\n> server.c, exactly start_postmaster(), to tell that -b also stops apply\n> workers. I am attaching a version updated as of the attached, that\n> I'd be OK to apply.\n\nI have added comments\n\n> I don't really think that we need to worry about a subscriber\n> connecting back to a publisher in this case, though? I mean, each\n> postmaster instance started by pg_upgrade restricts the access to the\n> instance with unix_socket_directories set to a custom path and\n> permissions at 0700, and a subscription's connection string does not\n> know the unix path used by pg_upgrade. 
I certainly agree that\n> stopping these processes could lead to inconsistencies in the data the\n> subscribers have been holding though, if we are not careful, so\n> preventing them from running is a good practice anyway.\n\nI have made the fix similar to how upgrade publisher has done to keep\nit consistent.\n\n> I have also reviewed 0002. As a whole, I think that I'm OK with the\n> main approach of the patch in pg_dump to use a new type of dumpable\n> object for subscription relations that are dumped with their upgrade\n> functions after. This still needs more work, and more documentation.\n\nAdded documentation\n\n> Also, perhaps we should really have an option to control if this part\n> of the copy happens or not. With a --no-subscription-relations for\n> pg_dump at least?\n\nCurrently this is done by default in binary upgrade mode, I will add a\nseparate patch to skip dump of subscription relations from upgrade and\ndump a little later.\n\n>\n> +{ oid => '4551', descr => 'add a relation with the specified relation state to pg_subscription_rel table',\n>\n> During a development cycle, any new function added needs to use an OID\n> in range 8000-9999. Running unused_oids will suggest new random OIDs.\n\nModified\n\n> FWIW, I am not convinced that there is a need for two functions to add\n> an entry to pg_subscription_rel, with sole difference between both the\n> handling of a valid or invalid LSN. We should have only one function\n> that's able to handle NULL for the LSN. So let's remove rel_state_a\n> and rel_state_b, and have a single rel_state(). 
The description of\n> the SQL functions is inconsistent with the other binary upgrade ones,\n> I would suggest for the two functions\n> \"for use by pg_upgrade (relation for pg_subscription_rel)\"\n> \"for use by pg_upgrade (remote_lsn for origin)\"\n\nRemoved rel_state_a and rel_state_b and updated the description accordingly\n\n> + i_srsublsn = PQfnumber(res, \"srsublsn\");\n> [...]\n> + subrinfo[cur_rel].srsublsn = pg_strdup(PQgetvalue(res, i, i_srsublsn));\n>\n> In getSubscriptionTables(), this should check for PQgetisnull()\n> because we would have a NULL value for InvalidXLogRecPtr in the\n> catalog. Using a char* for srsublsn is OK, but just assign NULL to\n> it, then just pass a hardcoded NULL value to the function as we do in\n> other places. So I don't quite get why this is not the same handling\n> as suboriginremotelsn.\n\nModified\n\n>\n> getSubscriptionTables() is entirely skipped if we don't want any\n> subscriptions, if we deal with a server of 9.6 or older or if we don't\n> do binary upgrades, which is OK.\n>\n> +/*\n> + * getSubscriptionTables\n> + * get information about subscription membership for dumpable tables.\n> + */\n> This commit is slightly misleading and should mention that this is an\n> upgrade-only path?\n\nModified\n\n>\n> The code for dumpSubscriptionTable() is a copy-paste of\n> dumpPublicationTable(), but a lot of what you are doing here is\n> actually pointless if we are not in binary mode? Why should this code\n> path not taken only under dataOnly? I mean, this is a code path we\n> should never take except if we are in binary mode. 
This should have\n> at least a cross-check to make sure that we never have a\n> DO_SUBSCRIPTION_REL in this code path if we are in non-binary mode.\n\nI have added an assert in this case, as it is not expected to come\nhere in non binary mode\n\n> + if (dopt->binary_upgrade && subinfo->suboriginremotelsn)\n> + {\n> + appendPQExpBufferStr(query,\n> + \"SELECT pg_catalog.binary_upgrade_replorigin_advance(\");\n> + appendStringLiteralAH(query, subinfo->dobj.name, fout);\n> + appendPQExpBuffer(query, \", '%s');\\n\", subinfo->suboriginremotelsn);\n> + }\n>\n> Hmm.. Could it be actually useful even for debugging to still have\n> this query if suboriginremotelsn is an InvalidXLogRecPtr? I think\n> that this should have a comment of the kind \"\\n-- For binary upgrade,\n> blah\". At least it would not be a bad thing to enforce a correct\n> state from the start, removing the NULL check for the second argument\n> in binary_upgrade_replorigin_advance().\n\nModified\n\n> + /* We need to check for pg_replication_origin_status only once. */\n> Perhaps it would be better to explain why?\n\nThis remote_lsn code change is actually not required, I have removed this now.\n\n>\n> + \"WHERE coalesce(remote_lsn, '0/0') = '0/0'\"\n> Why a COALESCE here? 
Cannot this stuff just use NULL?\n\nThis remote_lsn code change is actually not required, I have removed this now.\n\n> + fprintf(script, \"database:%s subscription:%s relation:%s in non-ready state\\n\",\n> Could it be possible to include the schema of the relation in this log?\n\nModified\n\n> +static void check_for_subscription_state(ClusterInfo *cluster);\n> I'd be tempted to move that into a patch on its own, actually, for a\n> cleaner history.\n\nAs of now I have kept it together, I will change it later based on\nmore feedback from others\n\n> +# Copyright (c) 2022-2023, PostgreSQL Global Development Group\n> New as of 2023.\n\nModified\n\n> +# Check that after upgradation of the subscriber server, the incremental\n> +# changes added to the publisher are replicated.\n> [..]\n> + For upgradation of the subscriptions, all the subscriptions on the old\n> + cluster must have a valid <varname>remote_lsn</varname>, and all the\n>\n> Upgradation? I think that this should be reworded:\n> \"All the subscriptions of an old cluster require a valid remote_lsn\n> during an upgrade.\"\n\nThis remote_lsn code change is actually not required, I have removed this now.\n\n>\n> A CI run is reporting the following compilation warnings:\n> [04:21:15.290] pg_dump.c: In function ‘getSubscriptionTables’:\n> [04:21:15.290] pg_dump.c:4655:29: error: ‘subinfo’ may be used\n> uninitialized in this function [-Werror=maybe-uninitialized]\n> [04:21:15.290] 4655 | subrinfo[cur_rel].subinfo = subinfo;\n\n I have initialized and checked with [-Werror=maybe-uninitialized],\nlet me check in the next cfbot run\n\n\n> +ok(-d $new_sub->data_dir . \"/pg_upgrade_output.d\",\n> + \"pg_upgrade_output.d/ not removed after pg_upgrade failure\");\n> Not sure that there's a need for this check. Okay, that's cheap.\n\nModified\n\n> And, err. 
We are going to need an option to control if the slot data\n> is copied, and a bit more documentation in pg_upgrade to explain how\n> things happen when the copy happens.\nAdded documentation for this, we will copy the slot data by default,\nwe will add a separate patch to skip dump of subscription\nrelations/replication slot from upgrade and dump a little later.\n\nThe attached v9 version patch has the changes for the same.\n\nApart from this I'm still checking that the old cluster's subscription\nrelations states are READY state still, but there is a possibility\nthat SYNCDONE or FINISHEDCOPY could work, this needs more thought\nbefore concluding which is the correct state to check. Let's handle\nthis in the upcoming version.\n\nRegards,\nVignesh",
"msg_date": "Fri, 27 Oct 2023 12:09:27 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 12:09 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Apart from this I'm still checking that the old cluster's subscription\n> relations states are READY state still, but there is a possibility\n> that SYNCDONE or FINISHEDCOPY could work, this needs more thought\n> before concluding which is the correct state to check. Let' handle\n> this in the upcoming version.\n>\n\nI was analyzing this part and it seems it could be tricky to upgrade\nin FINISHEDCOPY state. Because the system would expect that subscriber\nwould know the old slotname from oldcluster which it can drop at\nSYNCDONE state. Now, as sync_slot_name is generated based on subid,\nrelid which could be different in the new cluster, the generated\nslotname would be different after the upgrade. OTOH, if the relstate\nis INIT, then I think the sync could be performed even after the\nupgrade.\n\nShouldn't we at least ensure that replication origins do exist in the\nold cluster corresponding to each of the subscriptions? Otherwise,\nlater the query to get remote_lsn for origin in getSubscriptions()\nwould fail.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 27 Oct 2023 17:05:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, 27 Oct 2023 at 12:09, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, 21 Sept 2023 at 11:27, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Fri, Sep 15, 2023 at 03:08:21PM +0530, vignesh C wrote:\n> > > On Tue, 12 Sept 2023 at 14:25, Hayato Kuroda (Fujitsu)\n> > > <kuroda.hayato@fujitsu.com> wrote:\n> > >> Is there a possibility that apply worker on old cluster connects to the\n> > >> publisher during the upgrade? Regarding the pg_upgrade on publisher, the we\n> > >> refuse TCP/IP connections from remotes and port number is also changed, so we can\n> > >> assume that subscriber does not connect to. But IIUC such settings may not affect\n> > >> to the connection source, so that the apply worker may try to connect to the\n> > >> publisher. Also, is there any hazards if it happens?\n> > >\n> > > Yes, there is a possibility that the apply worker gets started and new\n> > > transaction data is being synced from the publisher. I have made a fix\n> > > not to start the launcher process in binary ugprade mode as we don't\n> > > want the launcher to start apply worker during upgrade.\n> >\n> > Hmm. I was wondering if 0001 is the right way to handle this case,\n> > but at the end I'm OK to paint one extra isBinaryUpgrade in the code\n> > path where apply launchers are registered. I don't think that the\n> > patch is complete, though. A comment should be added in pg_upgrade's\n> > server.c, exactly start_postmaster(), to tell that -b also stops apply\n> > workers. I am attaching a version updated as of the attached, that\n> > I'd be OK to apply.\n>\n> I have added comments\n>\n> > I don't really think that we need to worry about a subscriber\n> > connecting back to a publisher in this case, though? 
I mean, each\n> > postmaster instance started by pg_upgrade restricts the access to the\n> > instance with unix_socket_directories set to a custom path and\n> > permissions at 0700, and a subscription's connection string does not\n> > know the unix path used by pg_upgrade. I certainly agree that\n> > stopping these processes could lead to inconsistencies in the data the\n> > subscribers have been holding though, if we are not careful, so\n> > preventing them from running is a good practice anyway.\n>\n> I have made the fix similar to how upgrade publisher has done to keep\n> it consistent.\n>\n> > I have also reviewed 0002. As a whole, I think that I'm OK with the\n> > main approach of the patch in pg_dump to use a new type of dumpable\n> > object for subscription relations that are dumped with their upgrade\n> > functions after. This still needs more work, and more documentation.\n>\n> Added documentation\n>\n> > Also, perhaps we should really have an option to control if this part\n> > of the copy happens or not. With a --no-subscription-relations for\n> > pg_dump at least?\n>\n> Currently this is done by default in binary upgrade mode, I will add a\n> separate patch to skip dump of subscription relations from upgrade and\n> dump a little later.\n>\n> >\n> > +{ oid => '4551', descr => 'add a relation with the specified relation state to pg_subscription_rel table',\n> >\n> > During a development cycle, any new function added needs to use an OID\n> > in range 8000-9999. Running unused_oids will suggest new random OIDs.\n>\n> Modified\n>\n> > FWIW, I am not convinced that there is a need for two functions to add\n> > an entry to pg_subscription_rel, with sole difference between both the\n> > handling of a valid or invalid LSN. We should have only one function\n> > that's able to handle NULL for the LSN. So let's remove rel_state_a\n> > and rel_state_b, and have a single rel_state(). 
The description of\n> > the SQL functions is inconsistent with the other binary upgrade ones,\n> > I would suggest for the two functions\n> > \"for use by pg_upgrade (relation for pg_subscription_rel)\"\n> > \"for use by pg_upgrade (remote_lsn for origin)\"\n>\n> Removed rel_state_a and rel_state_b and updated the description accordingly\n>\n> > + i_srsublsn = PQfnumber(res, \"srsublsn\");\n> > [...]\n> > + subrinfo[cur_rel].srsublsn = pg_strdup(PQgetvalue(res, i, i_srsublsn));\n> >\n> > In getSubscriptionTables(), this should check for PQgetisnull()\n> > because we would have a NULL value for InvalidXLogRecPtr in the\n> > catalog. Using a char* for srsublsn is OK, but just assign NULL to\n> > it, then just pass a hardcoded NULL value to the function as we do in\n> > other places. So I don't quite get why this is not the same handling\n> > as suboriginremotelsn.\n>\n> Modified\n>\n> >\n> > getSubscriptionTables() is entirely skipped if we don't want any\n> > subscriptions, if we deal with a server of 9.6 or older or if we don't\n> > do binary upgrades, which is OK.\n> >\n> > +/*\n> > + * getSubscriptionTables\n> > + * get information about subscription membership for dumpable tables.\n> > + */\n> > This commit is slightly misleading and should mention that this is an\n> > upgrade-only path?\n>\n> Modified\n>\n> >\n> > The code for dumpSubscriptionTable() is a copy-paste of\n> > dumpPublicationTable(), but a lot of what you are doing here is\n> > actually pointless if we are not in binary mode? Why should this code\n> > path not taken only under dataOnly? I mean, this is a code path we\n> > should never take except if we are in binary mode. 
This should have\n> > at least a cross-check to make sure that we never have a\n> > DO_SUBSCRIPTION_REL in this code path if we are in non-binary mode.\n>\n> I have added an assert in this case, as it is not expected to come\n> here in non binary mode\n>\n> > + if (dopt->binary_upgrade && subinfo->suboriginremotelsn)\n> > + {\n> > + appendPQExpBufferStr(query,\n> > + \"SELECT pg_catalog.binary_upgrade_replorigin_advance(\");\n> > + appendStringLiteralAH(query, subinfo->dobj.name, fout);\n> > + appendPQExpBuffer(query, \", '%s');\\n\", subinfo->suboriginremotelsn);\n> > + }\n> >\n> > Hmm.. Could it be actually useful even for debugging to still have\n> > this query if suboriginremotelsn is an InvalidXLogRecPtr? I think\n> > that this should have a comment of the kind \"\\n-- For binary upgrade,\n> > blah\". At least it would not be a bad thing to enforce a correct\n> > state from the start, removing the NULL check for the second argument\n> > in binary_upgrade_replorigin_advance().\n>\n> Modified\n>\n> > + /* We need to check for pg_replication_origin_status only once. */\n> > Perhaps it would be better to explain why?\n>\n> This remote_lsn code change is actually not required, I have removed this now.\n>\n> >\n> > + \"WHERE coalesce(remote_lsn, '0/0') = '0/0'\"\n> > Why a COALESCE here? 
Cannot this stuff just use NULL?\n>\n> This remote_lsn code change is actually not required, I have removed this now.\n>\n> > + fprintf(script, \"database:%s subscription:%s relation:%s in non-ready state\\n\",\n> > Could it be possible to include the schema of the relation in this log?\n>\n> Modified\n>\n> > +static void check_for_subscription_state(ClusterInfo *cluster);\n> > I'd be tempted to move that into a patch on its own, actually, for a\n> > cleaner history.\n>\n> As of now I have kept it together, I will change it later based on\n> more feedback from others\n>\n> > +# Copyright (c) 2022-2023, PostgreSQL Global Development Group\n> > New as of 2023.\n>\n> Modified\n>\n> > +# Check that after upgradation of the subscriber server, the incremental\n> > +# changes added to the publisher are replicated.\n> > [..]\n> > + For upgradation of the subscriptions, all the subscriptions on the old\n> > + cluster must have a valid <varname>remote_lsn</varname>, and all the\n> >\n> > Upgradation? I think that this should be reworded:\n> > \"All the subscriptions of an old cluster require a valid remote_lsn\n> > during an upgrade.\"\n>\n> This remote_lsn code change is actually not required, I have removed this now.\n>\n> >\n> > A CI run is reporting the following compilation warnings:\n> > [04:21:15.290] pg_dump.c: In function ‘getSubscriptionTables’:\n> > [04:21:15.290] pg_dump.c:4655:29: error: ‘subinfo’ may be used\n> > uninitialized in this function [-Werror=maybe-uninitialized]\n> > [04:21:15.290] 4655 | subrinfo[cur_rel].subinfo = subinfo;\n>\n> I have initialized and checked with [-Werror=maybe-uninitialized],\n> let me check in the next cfbot run\n>\n>\n> > +ok(-d $new_sub->data_dir . \"/pg_upgrade_output.d\",\n> > + \"pg_upgrade_output.d/ not removed after pg_upgrade failure\");\n> > Not sure that there's a need for this check. Okay, that's cheap.\n>\n> Modified\n>\n> > And, err. 
We are going to need an option to control if the slot data\n> > is copied, and a bit more documentation in pg_upgrade to explain how\n> > things happen when the copy happens.\n> Added documentation for this, we will copy the slot data by default,\n> we will add a separate patch to skip dump of subscription\n> relations/replication slot from upgrade and dump a little later.\n>\n> The attached v9 version patch has the changes for the same.\n>\n> Apart from this I'm still checking that the old cluster's subscription\n> relations states are READY state still, but there is a possibility\n> that SYNCDONE or FINISHEDCOPY could work, this needs more thought\n> before concluding which is the correct state to check. Let's handle\n> this in the upcoming version.\n\nThe patch was not applying because of recent commits. Here is a\nrebased version of the patches.\n\nRegards,\nVignesh",
"msg_date": "Mon, 30 Oct 2023 15:05:09 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 05:05:39PM +0530, Amit Kapila wrote:\n> I was analyzing this part and it seems it could be tricky to upgrade\n> in FINISHEDCOPY state. Because the system would expect that subscriber\n> would know the old slotname from oldcluster which it can drop at\n> SYNCDONE state. Now, as sync_slot_name is generated based on subid,\n> relid which could be different in the new cluster, the generated\n> slotname would be different after the upgrade. OTOH, if the relstate\n> is INIT, then I think the sync could be performed even after the\n> upgrade.\n\nTBH, I am really wondering if there is any need to go down to being\nable to handle anything else than READY for the relation states in\npg_subscription_rel. One reason is that it makes it much easier to\nthink about how to handle these in parallel of a node with\npublications that also need to go through an upgrade, because as READY\nrelations they don't require any tracking. IMO, this makes it simpler\nto think about cases where a node holds both subscriptions and\npublications.\n\nFWIW, my take is that it feels natural to do the upgrades of\nsubscriptions first, creating a similarity with the case of minor\nupdates with physical replication setups.\n\n> Shouldn't we at least ensure that replication origins do exist in the\n> old cluster corresponding to each of the subscriptions? Otherwise,\n> later the query to get remote_lsn for origin in getSubscriptions()\n> would fail.\n\nYou mean in the shape of a pre-upgrade check making sure that\npg_replication_origin_status has entries for all the subscriptions we\nexpect to see during the upgrade? Makes sense to me.\n--\nMichael",
"msg_date": "Wed, 1 Nov 2023 12:03:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 03:05:09PM +0530, vignesh C wrote:\n> The patch was not applying because of recent commits. Here is a\n> rebased version of the patches.\n\n+ * We don't want the launcher to run while upgrading because it may start\n+ * apply workers which could start receiving changes from the publisher\n+ * before the physical files are put in place, causing corruption on the\n+ * new cluster upgrading to, so setting max_logical_replication_workers=0\n+ * to disable launcher.\n */\n if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\n- appendPQExpBufferStr(&pgoptions, \" -c max_slot_wal_keep_size=-1\");\n+ appendPQExpBufferStr(&pgoptions, \" -c max_slot_wal_keep_size=-1 -c max_logical_replication_workers=0\");\n\nAt least that's consistent with the other side of the coin with\npublications. So 0001 looks basically OK seen from here.\n\nThe indentation of 0002 seems off in a few places.\n\n+ <para>\n+ Verify that all the subscription tables in the old subscriber are in\n+ <literal>r</literal> (ready) state. Setup the\n+ <link linkend=\"logical-replication-config-subscriber\"> subscriber\n+ configurations</link> in the new subscriber.\n[...]\n+ <para>\n+ There is a prerequisites that all the subscription tables should be in\n+ <literal>r</literal> (ready) state for\n+ <application>pg_upgrade</application> to be able to upgrade the\n+ subscriber. If this is not met an error will be reported.\n+ </para>\n\nThis part is repeated. Globally, this documentation addition does not\nseem really helpful for the end-user as it describes the checks that\nare done during the upgrade. Shouldn't this part of the docs,\nsimilarly to the publication part, focus on providing a check list of\nactions to take to achieve a clean upgrade, with a list of commands\nand configurations required? 
The good part is that information about\nwhat's copied is provided (pg_subscription_rel and the origin status),\nstill this could be improved.\n\n+ <para>\n+ Enable the subscriptions by executing\n+ <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ... ENABLE</command></link>.\n+ </para>\n\nThis is something users can act on, but how does this operation help\nwith the upgrade? Should this happen for all the descriptions\nsubscriptions? Or you mean that this is something that needs to be\nrun after the upgrade?\n\n+ <para>\n+ Create all the new tables that were created in the publication and\n+ refresh the publication by executing\n+ <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>.\n+ </para>\n\nWhat does \"new tables\" refer to in this case? Are you referring to\nthe case where new relations have been added on a publication node\nafter an upgrade and need to be copied? Does one need to DISABLE the\nsubscriptions on the subscriber node before running the upgrade, or is\na REFRESH enough? The test only uses a REFRESH, so the docs and the\ncode don't entirely agree with each other.\n\n+ <para>\n+ For upgradation of the subscriptions, all the subscription tables should be\n+ in <literal>r</literal> (ready) state, or else the\n+ <application>pg_upgrade</application> run will error.\n+ </para>\n\n\"Upgradation\"?\n\n+# Set tables to 'i' state\n+$old_sub->safe_psql(\n+\t'postgres',\n+\t\"UPDATE pg_subscription_rel\n+\t\tSET srsubstate = 'i' WHERE srsubstate = 'r'\");\n\nI am not sure that doing catalog manipulation in the TAP test itself\nis a good idea, because this can finish by being unpredictible in the\nlong-term for the test maintenance. I think that this portion of the\ntest should just be removed. 
poll_query_until() or wait queries\nmaking sure that all the relations are in the state we want them to be \nbefore the beginning of the upgrade is enough in terms of test\ncoverage, IMO.\n\n+$result = $new_sub->safe_psql('postgres',\n+\t\"SELECT remote_lsn FROM pg_replication_origin_status\");\n\nThis assumes one row, but perhaps this had better do a match based on\nexternal_id and/or local_id?\n--\nMichael",
"msg_date": "Wed, 1 Nov 2023 13:43:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, 27 Oct 2023 at 17:05, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 27, 2023 at 12:09 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Apart from this I'm still checking that the old cluster's subscription\n> > relations states are READY state still, but there is a possibility\n> > that SYNCDONE or FINISHEDCOPY could work, this needs more thought\n> > before concluding which is the correct state to check. Let' handle\n> > this in the upcoming version.\n> >\n>\n> I was analyzing this part and it seems it could be tricky to upgrade\n> in FINISHEDCOPY state. Because the system would expect that subscriber\n> would know the old slotname from oldcluster which it can drop at\n> SYNCDONE state. Now, as sync_slot_name is generated based on subid,\n> relid which could be different in the new cluster, the generated\n> slotname would be different after the upgrade. OTOH, if the relstate\n> is INIT, then I think the sync could be performed even after the\n> upgrade.\n\nI had analyzed all the subscription relation states further, here is\nmy analysis:\nThe following states are ok, as either the replication slot is not\ncreated or the replication slot is already dropped and the required\nWAL files will be present in the publisher:\na) SUBREL_STATE_SYNCDONE b) SUBREL_STATE_READY c) SUBREL_STATE_INIT\nThe following states are not ok as the worker has dependency on the\nreplication slot/origin in these case:\na) SUBREL_STATE_DATASYNC: In this case, the table sync worker will try\nto drop the replication slot but as the replication slots will be\ncreated with old subscription id in the publisher and the upgraded\nsubscriber will not be able to clean the slots in this case. b)\nSUBREL_STATE_FINISHEDCOPY: In this case, the tablesync worker will\nexpect the origin to be already existing as the origin is created with\nan old subscription id, tablesync worker will not be able to find the\norigin in this case. 
c) SUBREL_STATE_SYNCWAIT, SUBREL_STATE_CATCHUP\nand SUBREL_STATE_UNKNOWN: These states are not stored in the catalog,\nso we need not allow these states.\nI modified it to support the relation states accordingly.\n\n> Shouldn't we at least ensure that replication origins do exist in the\n> old cluster corresponding to each of the subscriptions? Otherwise,\n> later the query to get remote_lsn for origin in getSubscriptions()\n> would fail.\nAdded a check for the same.\n\nThe attached v10 version patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Thu, 2 Nov 2023 00:14:05 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Here are some review comments for patch v10-0001\n\n======\nCommit message\n\n1.\nThe chance of being able to do so should be small as pg_upgrade uses its\nown port and unix domain directory (customizable as well with\n--socketdir), but just preventing the launcher to start is safer at the\nend, because we are then sure that no changes would ever be applied.\n\n~\n\n\"safer at the end\" (??)\n\n======\nsrc/bin/pg_upgrade/server.c\n\n2.\n+ * We don't want the launcher to run while upgrading because it may start\n+ * apply workers which could start receiving changes from the publisher\n+ * before the physical files are put in place, causing corruption on the\n+ * new cluster upgrading to, so setting max_logical_replication_workers=0\n+ * to disable launcher.\n */\n if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\n- appendPQExpBufferStr(&pgoptions, \" -c max_slot_wal_keep_size=-1\");\n+ appendPQExpBufferStr(&pgoptions, \" -c max_slot_wal_keep_size=-1 -c\nmax_logical_replication_workers=0\");\n\n2a.\nThe comment is one big long sentence. IMO it will be better to break it up.\n\n~\n\n2b.\nAdd a blank line between this comment note and the previous one.\n\n~~~\n\n2c.\nIn a recent similar thread [1], they chose to implement a guc_hook to\nprevent a user from overriding this via the command line option during\nthe upgrade. Shouldn't this patch do the same thing, for consistency?\n\n~~~\n\n2d.\nIf you do implement such a guc_hook (per #2c above), then should the\npatch also include a test case for getting an ERROR if the user tries\nto override that GUC?\n\n======\n[1] https://www.postgresql.org/message-id/20231027.115759.2206827438943188717.horikyota.ntt%40gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 2 Nov 2023 16:35:26 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Nov 02, 2023 at 04:35:26PM +1100, Peter Smith wrote:\n> The chance of being able to do so should be small as pg_upgrade uses its\n> own port and unix domain directory (customizable as well with\n> --socketdir), but just preventing the launcher to start is safer at the\n> end, because we are then sure that no changes would ever be applied.\n> ~\n> \"safer at the end\" (??)\n\nWell, just safer.\n\n> 2a.\n> The comment is one big long sentence. IMO it will be better to break it up.\n> 2b.\n> Add a blank line between this comment note and the previous one.\n\nYes, I found that equally confusing when looking at this patch, so\nI've edited the patch this way when I was looking at it today. This\nis enough to do the job, so I have applied it for now, before moving\non with the second one of this thread.\n\n> 2c.\n> In a recent similar thread [1], they chose to implement a guc_hook to\n> prevent a user from overriding this via the command line option during\n> the upgrade. Shouldn't this patch do the same thing, for consistency?\n> 2d.\n> If you do implement such a guc_hook (per #2c above), then should the\n> patch also include a test case for getting an ERROR if the user tries\n> to override that GUC?\n\nYeah, that may be something to do, but I am not sure that it is worth\ncomplicating the backend code for the remote case where one enforces\nan option while we are already setting a GUC in the upgrade path:\nhttps://www.postgresql.org/message-id/CAA4eK1Lh9J5VLypSQugkdD+H=_5-6p3rOocjo7JbTogcxA2hxg@mail.gmail.com\n\nThat feels like a lot of extra facility for cases that should never\nhappen.\n--\nMichael",
"msg_date": "Thu, 2 Nov 2023 14:48:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Nov 1, 2023 at 8:33 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Oct 27, 2023 at 05:05:39PM +0530, Amit Kapila wrote:\n> > I was analyzing this part and it seems it could be tricky to upgrade\n> > in FINISHEDCOPY state. Because the system would expect that subscriber\n> > would know the old slotname from oldcluster which it can drop at\n> > SYNCDONE state. Now, as sync_slot_name is generated based on subid,\n> > relid which could be different in the new cluster, the generated\n> > slotname would be different after the upgrade. OTOH, if the relstate\n> > is INIT, then I think the sync could be performed even after the\n> > upgrade.\n>\n> TBH, I am really wondering if there is any need to go down to being\n> able to handle anything else than READY for the relation states in\n> pg_subscription_rel. One reason is that it makes it much easier to\n> think about how to handle these in parallel of a node with\n> publications that also need to go through an upgrade, because as READY\n> relations they don't require any tracking. IMO, this makes it simpler\n> to think about cases where a node holds both subscriptions and\n> publications.\n>\n\nBut that poses needless restrictions for the users. For example, there\nappears no harm in upgrading even when the relation is in\nSUBREL_STATE_INIT state. Users should be able to continue replication\nafter the upgrade.\n\n> FWIW, my take is that it feels natural to do the upgrades of\n> subscriptions first, creating a similarity with the case of minor\n> updates with physical replication setups.\n>\n> > Shouldn't we at least ensure that replication origins do exist in the\n> > old cluster corresponding to each of the subscriptions? 
Otherwise,\n> > later the query to get remote_lsn for origin in getSubscriptions()\n> > would fail.\n>\n> You mean in the shape of a pre-upgrade check making sure that\n> pg_replication_origin_status has entries for all the subscriptions we\n> expect to see during the upgrade?\n>\n\nYes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Nov 2023 14:15:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, 1 Nov 2023 at 10:13, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 30, 2023 at 03:05:09PM +0530, vignesh C wrote:\n> > The patch was not applying because of recent commits. Here is a\n> > rebased version of the patches.\n>\n> + * We don't want the launcher to run while upgrading because it may start\n> + * apply workers which could start receiving changes from the publisher\n> + * before the physical files are put in place, causing corruption on the\n> + * new cluster upgrading to, so setting max_logical_replication_workers=0\n> + * to disable launcher.\n> */\n> if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\n> - appendPQExpBufferStr(&pgoptions, \" -c max_slot_wal_keep_size=-1\");\n> + appendPQExpBufferStr(&pgoptions, \" -c max_slot_wal_keep_size=-1 -c max_logical_replication_workers=0\");\n>\n> At least that's consistent with the other side of the coin with\n> publications. So 0001 looks basically OK seen from here.\n>\n> The indentation of 0002 seems off in a few places.\n\nI fixed wherever possible for documentation and also ran pgindent and\npgperltidy.\n\n> + <para>\n> + Verify that all the subscription tables in the old subscriber are in\n> + <literal>r</literal> (ready) state. Setup the\n> + <link linkend=\"logical-replication-config-subscriber\"> subscriber\n> + configurations</link> in the new subscriber.\n> [...]\n> + <para>\n> + There is a prerequisites that all the subscription tables should be in\n> + <literal>r</literal> (ready) state for\n> + <application>pg_upgrade</application> to be able to upgrade the\n> + subscriber. If this is not met an error will be reported.\n> + </para>\n>\n> This part is repeated.\n\nRemoved the duplicate contents.\n\n> Globally, this documentation addition does not\n> seem really helpful for the end-user as it describes the checks that\n> are done during the upgrade. 
Shouldn't this part of the docs,\n> similarly to the publication part, focus on providing a check list of\n> actions to take to achieve a clean upgrade, with a list of commands\n> and configurations required? The good part is that information about\n> what's copied is provided (pg_subscription_rel and the origin status),\n> still this could be improved.\n\nI have slightly modified it now and also made it consistent with the\nreplication slot upgrade, but I was not sure if we need to add\nanything more. Let me know if anything else needs to be added. I will\nadd it.\n\n> + <para>\n> + Enable the subscriptions by executing\n> + <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ... ENABLE</command></link>.\n> + </para>\n>\n> This is something users can act on, but how does this operation help\n> with the upgrade? Should this happen for all the descriptions\n> subscriptions? Or you mean that this is something that needs to be\n> run after the upgrade?\n\nThe subscriptions will be upgraded in disabled mode. Users must enable\nthe subscriptions after the upgrade is completed. I have mentioned the\nsame to avoid confusion.\n\n> + <para>\n> + Create all the new tables that were created in the publication and\n> + refresh the publication by executing\n> + <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>.\n> + </para>\n>\n> What does \"new tables\" refer to in this case? Are you referring to\n> the case where new relations have been added on a publication node\n> after an upgrade and need to be copied? Does one need to DISABLE the\n> subscriptions on the subscriber node before running the upgrade, or is\n> a REFRESH enough? The test only uses a REFRESH, so the docs and the\n> code don't entirely agree with each other.\n\nYes, \"new tables\" refers to the new tables created in the publisher\nwhen the upgrade is in progress. 
No need to disable the subscription\nbefore the upgrade; during the upgrade the subscriptions will be\ncopied in disabled mode, and they should be enabled after the\nupgrade. Mentioned all\nthese accordingly.\n\n> + <para>\n> + For upgradation of the subscriptions, all the subscription tables should be\n> + in <literal>r</literal> (ready) state, or else the\n> + <application>pg_upgrade</application> run will error.\n> + </para>\n>\n> \"Upgradation\"?\n\nI have removed this content since we have added this in the\nprerequisite section now.\n\n> +# Set tables to 'i' state\n> +$old_sub->safe_psql(\n> + 'postgres',\n> + \"UPDATE pg_subscription_rel\n> + SET srsubstate = 'i' WHERE srsubstate = 'r'\");\n>\n> I am not sure that doing catalog manipulation in the TAP test itself\n> is a good idea, because this can finish by being unpredictible in the\n> long-term for the test maintenance. I think that this portion of the\n> test should just be removed. poll_query_until() or wait queries\n> making sure that all the relations are in the state we want them to be\n> before the beginning of the upgrade is enough in terms of test\n> coverag, IMO.\n\nChanged the scenario by using primary key failure.\n\n> +$result = $new_sub->safe_psql('postgres',\n> + \"SELECT remote_lsn FROM pg_replication_origin_status\");\n>\n> This assumes one row, but perhaps this had better do a match based on\n> external_id and/or local_id?\n\nModified\n\nThe attached v11 version patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Thu, 2 Nov 2023 15:41:25 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Nov 2, 2023 at 3:41 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> I have slightly modified it now and also made it consistent with the\n> replication slot upgrade, but I was not sure if we need to add\n> anything more. Let me know if anything else needs to be added. I will\n> add it.\n>\n\nI think it is important for users to know how they upgrade their\nmulti-node setup. Say a two-node setup where replication is working\nboth ways (aka each node has both publications and subscriptions),\nsimilarly, how to upgrade, if there are multiple nodes involved?\n\nOne more thing I was thinking about this patch was that here unlike\nthe publication's slot information, we can't ensure with origin's\nremote_lsn that all the WAL is received and applied before allowing\nthe upgrade. I can't think of any problem at the moment due to this\nbut still a point worth giving a thought.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Nov 2023 17:00:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, 2 Nov 2023 at 11:05, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> ~~~\n>\n> 2c.\n> In a recent similar thread [1], they chose to implement a guc_hook to\n> prevent a user from overriding this via the command line option during\n> the upgrade. Shouldn't this patch do the same thing, for consistency?\n\nAdded GUC hook for consistency.\n\n> ~~~\n>\n> 2d.\n> If you do implement such a guc_hook (per #2c above), then should the\n> patch also include a test case for getting an ERROR if the user tries\n> to override that GUC?\n\nAdded a test for the same.\n\nWe can use this patch if we are planning to go ahead with guc_hooks\nfor max_slot_wal_keep_size as discussed at [1].\nThe attached patch has the changes for the same.\n\n[1] - https://www.postgresql.org/message-id/CAHut%2BPsTrB%3DmjBA-Y-%2BW4kK63tao9%3DXBsMXG9rkw4g_m9WatwA%40mail.gmail.com\n\n\nRegards,\nVignesh",
"msg_date": "Fri, 3 Nov 2023 16:15:45 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Nov 02, 2023 at 05:00:55PM +0530, Amit Kapila wrote:\n> I think it is important for users to know how they upgrade their\n> multi-node setup. Say a two-node setup where replication is working\n> both ways (aka each node has both publications and subscriptions),\n> similarly, how to upgrade, if there are multiple nodes involved?\n\n+1. My next remarks also apply to the thread where publishers are\nhandled in upgrades, but I'd like to think that at the end of the\nrelease cycle it would be nice to have the basic features in, with\nalso a set of regression tests for logical upgrade scenarios that we'd\nexpect to work. Two \"basic\" ones coming into mind:\n- Cascading logical setup, with one node in the middle having both\npublisher(s) and subscriber(s).\n- Two-way replication, with two nodes.\n\n> One more thing I was thinking about this patch was that here unlike\n> the publication's slot information, we can't ensure with origin's\n> remote_lsn that all the WAL is received and applied before allowing\n> the upgrade. I can't think of any problem at the moment due to this\n> but still a point worth giving a thought.\n\nYeah, that may be an itchy point, which is also related to my concerns\non trying to allow more syncstates than ready when beginning the\nupgrade, which is at least a point we are sure that a relation was up\nto date, up to a certain point.\n--\nMichael",
"msg_date": "Sun, 5 Nov 2023 08:56:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Here are some review comments for patch v11-0001\n\n======\nCommit message\n\n1.\nThe subscription's replication origin are needed to ensure\nthat we don't replicate anything twice.\n\n~\n\n/are needed/is needed/\n\n~~~\n\n2.\nAuthor: Julien Rouhaud\nReviewed-by: FIXME\nDiscussion: https://postgr.es/m/20230217075433.u5mjly4d5cr4hcfe@jrouhaud\n\n~\n\nInclude Vignesh as another author.\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n3.\n+ <application>pg_upgrade</application> attempts to migrate subscription\n+ dependencies which includes the subscription tables information present in\n+ <link linkend=\"catalog-pg-subscription-rel\">pg_subscription_rel</link>\n+ system table and the subscription replication origin which\n+ will help in continuing logical replication from where the old subscriber\n+ was replicating. This helps in avoiding the need for setting up the\n\nI became a bit lost reading this paragraph due to the multiple 'which'...\n\nSUGGESTION\npg_upgrade attempts to migrate subscription dependencies which\nincludes the subscription table information present in\npg_subscription_rel system\ncatalog and also the subscription replication origin. This allows\nlogical replication on the new subscriber to continue from where the\nold subscriber was up to.\n\n~~~\n\n4.\n+ was replicating. This helps in avoiding the need for setting up the\n+ subscription objects manually which requires truncating all the\n+ subscription tables and setting the logical replication slots. Migration\n\nSUGGESTION\nHaving the ability to migrate subscription objects avoids the need to\nset them up manually, which would require truncating all the\nsubscription tables and setting the logical replication slots.\n\n~\n\nTBH, I am wondering what is the purpose of this sentence. 
It seems\nmore like a justification for the patch, but does the user need to\nknow all this?\n\n~~~\n\n5.\n+ <para>\n+ All the subscription tables in the old subscriber should be in\n+ <literal>i</literal> (initialize), <literal>r</literal> (ready) or\n+ <literal>s</literal> (synchronized). This can be verified by checking\n+ <link linkend=\"catalog-pg-subscription-rel\">pg_subscription_rel</link>.<structfield>srsubstate</structfield>.\n+ </para>\n\n/should be in/should be in state/\n\n~~~\n\n6.\n+ <para>\n+ The replication origin entry corresponding to each of the subscriptions\n+ should exist in the old cluster. This can be checking\n+ <link linkend=\"catalog-pg-subscription\">pg_subscription</link> and\n+ <link linkend=\"catalog-pg-replication-origin\">pg_replication_origin</link>\n+ system tables.\n+ </para>\n\nmissing words?\n\n/This can be checking/This can be found by checking/\n\n~~~\n\n7.\n+ <para>\n+ The subscriptions will be migrated to new cluster in disabled state, they\n+ can be enabled after upgrade by following the steps:\n+ </para>\n\nThe first bullet also says \"Enable the subscription...\" so I think\nthis paragraph should be worded like the below.\n\nSUGGESTION\nThe subscriptions will be migrated to the new cluster in a disabled\nstate. After migration, do this:\n\n======\nsrc/backend/catalog/pg_subscription.c\n\n8.\n #include \"nodes/makefuncs.h\"\n+#include \"replication/origin.h\"\n+#include \"replication/worker_internal.h\"\n #include \"storage/lmgr.h\"\n\nWhy does this change need to be in the patch when there are no other\ncode changes in this file?\n\n======\nsrc/backend/utils/adt/pg_upgrade_support.c\n\n9. 
binary_upgrade_create_sub_rel_state\n\nIMO a better name for this function would be\n'binary_upgrade_add_sub_rel_state' (because it delegates to\nAddSubscriptionRelState).\n\nThen it would obey the same name pattern as the other function\n'binary_upgrade_replorigin_advance' (which delegates to\nreplorigin_advance).\n\n~~~\n\n10.\n+/*\n+ * binary_upgrade_create_sub_rel_state\n+ *\n+ * Add the relation with the specified relation state to pg_subscription_rel\n+ * table.\n+ */\n+Datum\n+binary_upgrade_create_sub_rel_state(PG_FUNCTION_ARGS)\n+{\n+ Relation rel;\n+ HeapTuple tup;\n+ Oid subid;\n+ Form_pg_subscription form;\n+ char *subname;\n+ Oid relid;\n+ char relstate;\n+ XLogRecPtr sublsn;\n\n10a.\n/to pg_subscription_rel table./to pg_subscription_rel catalog./\n\n~\n\n10b.\nMaybe it would be helpful if the function argument were documented\nup-front in the function-comment, or in the variable declarations.\n\nSUGGESTION\nchar *subname; /* ARG0 = subscription name */\nOid relid; /* ARG1 = relation Oid */\nchar relstate; /* ARG2 = subrel state */\nXLogRecPtr sublsn; /* ARG3 (optional) = subscription lsn */\n\n~~~\n\n11.\nif (PG_ARGISNULL(3))\nsublsn = InvalidXLogRecPtr;\nelse\nsublsn = PG_GETARG_LSN(3);\nFWIW, I'd write that as a one-line ternary assignment allowing all the\nargs to be grouped nicely together.\n\nSUGGESTION\nsublsn = PG_ARGISNULL(3) ? InvalidXLogRecPtr : PG_GETARG_LSN(3);\n\n~~~\n\n12. binary_upgrade_replorigin_advance\n\n/*\n * binary_upgrade_replorigin_advance\n *\n * Update the remote_lsn for the subscriber's replication origin.\n */\nDatum\nbinary_upgrade_replorigin_advance(PG_FUNCTION_ARGS)\n{\nRelation rel;\nHeapTuple tup;\nOid subid;\nForm_pg_subscription form;\nchar *subname;\nXLogRecPtr sublsn;\nchar originname[NAMEDATALEN];\nRepOriginId originid;\n~\n\nSimilar to previous comment #10b. 
Maybe it would be helpful if the\nfunction argument were documented up-front in the function-comment, or\nin the variable declarations.\n\nSUGGESTION\nchar originname[NAMEDATALEN];\nRepOriginId originid;\nchar *subname; /* ARG0 = subscription name */\nXLogRecPtr sublsn; /* ARG1 = subscription lsn */\n\n~~~\n\n13.\n+ subname = text_to_cstring(PG_GETARG_TEXT_PP(0));\n+\n+ if (PG_ARGISNULL(1))\n+ sublsn = InvalidXLogRecPtr;\n+ else\n+ sublsn = PG_GETARG_LSN(1);\n\nSimilar to previous comment #11. FWIW, I'd write that as a one-line\nternary assignment allowing all the args to be grouped nicely\ntogether.\n\nSUGGESTION\nsubname = text_to_cstring(PG_GETARG_TEXT_PP(0));\nsublsn = PG_ARGISNULL(1) ? InvalidXLogRecPtr : PG_GETARG_LSN(1);\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n14. getSubscriptionTables\n\n+/*\n+ * getSubscriptionTables\n+ * get information about subscription membership for dumpable tables, this\n+ * will be used only in binary-upgrade mode.\n+ */\n\nShould use multiple sentences.\n\nSUGGESTION\nGet information about subscription membership for dumpable tables.\nThis will be used only in binary-upgrade mode.\n\n~~~\n\n15.\n+ /* Get subscription relation fields */\n+ i_srsubid = PQfnumber(res, \"srsubid\");\n+ i_srrelid = PQfnumber(res, \"srrelid\");\n+ i_srsubstate = PQfnumber(res, \"srsubstate\");\n+ i_srsublsn = PQfnumber(res, \"srsublsn\");\n\nMight it be better to say \"Get pg_subscription_rel attributes\"?\n\n~~~\n\n16. 
getSubscriptions\n\n+ appendPQExpBufferStr(query, \"o.remote_lsn\\n\");\n appendPQExpBufferStr(query,\n \"FROM pg_subscription s\\n\"\n+ \"LEFT JOIN pg_replication_origin_status o \\n\"\n+ \" ON o.external_id = 'pg_' || s.oid::text \\n\"\n \"WHERE s.subdbid = (SELECT oid FROM pg_database\\n\"\n \" WHERE datname = current_database())\");\n\n~\n\n16a.\nShould that \"remote_lsn\" have an alias like \"suboriginremotelsn\" so\nthat it matches the later field assignment better?\n\n~\n\n16b.\nProbably these catalogs should be qualified using \"pg_catalog.\".\n\n~~~\n\n17. dumpSubscriptionTable\n\n+/*\n+ * dumpSubscriptionTable\n+ * dump the definition of the given subscription table mapping, this will be\n+ * used only for upgrade operation.\n+ */\n\nMake this comment consistent with the other one for getSubscriptionTables:\n- split into multiple sentences\n- use the same terminology \"binary-upgrade mode\" versus \"upgrade operation'.\n\n~~~\n\n18.\n+ /*\n+ * binary_upgrade_create_sub_rel_state will add the subscription\n+ * relation to pg_subscripion_rel table, this is supported only for\n+ * upgrade operation.\n+ */\n\nSplit into multiple sentences.\n\n======\nsrc/bin/pg_dump/pg_dump_sort.c\n\n19.\n+ case DO_SUBSCRIPTION_REL:\n+ snprintf(buf, bufsize,\n+ \"SUBSCRIPTION TABLE (ID %d)\",\n+ obj->dumpId);\n+ return;\n\nShould it include the OID (like for DO PUBLICATION_TABLE)?\n\n======\nsrc/bin/pg_upgrade/check.c\n\n20.\n check_for_reg_data_type_usage(&old_cluster);\n check_for_isn_and_int8_passing_mismatch(&old_cluster);\n\n+ check_for_subscription_state(&old_cluster);\n+\n\nThere seems no reason anymore for this check to be separated from all\nthe other checks. Just remove the blank line.\n\n~~~\n\n21. 
check_for_subscription_state\n\n+/*\n+ * check_for_subscription_state()\n+ *\n+ * Verify that each of the subscriptions have all their corresponding tables in\n+ * ready state.\n+ */\n+static void\n+check_for_subscription_state(ClusterInfo *cluster)\n\n/have/has/\n\nThis comment only refers to 'ready' state, but perhaps it is\nmisleading (or not entirely correct) because later the SQL is testing\nfor more than just the READY state:\n\n+ \"WHERE srsubstate NOT IN ('i', 's', 'r') \"\n\n~~~\n\n22.\n+ res = executeQueryOrDie(conn,\n+ \"SELECT s.subname, c.relname, n.nspname \"\n+ \"FROM pg_catalog.pg_subscription_rel r \"\n+ \"LEFT JOIN pg_catalog.pg_subscription s\"\n+ \" ON r.srsubid = s.oid \"\n+ \"LEFT JOIN pg_catalog.pg_class c\"\n+ \" ON r.srrelid = c.oid \"\n+ \"LEFT JOIN pg_catalog.pg_namespace n\"\n+ \" ON c.relnamespace = n.oid \"\n+ \"WHERE srsubstate NOT IN ('i', 's', 'r') \"\n+ \"ORDER BY s.subname\");\n\nIf you are going to check 'i', 's', and 'r' then I thought this\nstatement should maybe have some comment about why those states.\n\n~~~\n\n23.\n+ pg_fatal(\"Your installation contains subscription(s) with\\n\"\n+ \"Subscription not having origin and/or subscription relation(s) not\nin ready state.\\n\"\n+ \"A list of subscription not having origin and/or\\n\"\n+ \"subscription relation(s) not in ready state is in the file: %s\",\n+ output_path);\n\n23a.\nThis message seems to just be saying the same thing 2 times.\n\nIt also should use newlines and spaces more like the other similar\npg_fatals in this file (e.g. the %s is on next line etc).\n\nSUGGESTION\nYour installation contains subscriptions without origin or having\nrelations not in a ready state.\\n\nA list of the problem subscriptions is in the file:\\n\n %s\n\n~\n\n23b.\nSame question about 'not in ready state'. 
Is that entirely correct?\n\n======\nsrc/bin/pg_upgrade/t/004_subscription.pl\n\n24.\n+sub insert_line\n+{\n+ my $payload = shift;\n+\n+ foreach (\"t1\", \"t2\")\n+ {\n+ $publisher->safe_psql('postgres',\n+ \"INSERT INTO \" . $_ . \" (val) VALUES('$payload')\");\n+ }\n+}\n\nFor clarity, maybe call this function 'insert_line_at_pub'\n\n~~~\n\n25.\n+# ------------------------------------------------------\n+# Check that pg_upgrade is succesful when all tables are in ready state.\n+# ------------------------------------------------------\n\n/succesful/successful/\n\n~~~\n\n26.\n+command_ok(\n+ [\n+ 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n+ '-D', $new_sub->data_dir, '-b', $bindir,\n+ '-B', $bindir, '-s', $new_sub->host,\n+ '-p', $old_sub->port, '-P', $new_sub->port,\n+ $mode, '--check',\n+ ],\n+ 'run of pg_upgrade --check for old instance with invalid remote_lsn');\n\nThis is the command for the \"success\" case. Why is the message part\nreferring to \"invalid remote_lsn\"?\n\n~~~\n\n27.\n+$publisher->safe_psql('postgres',\n+ \"CREATE TABLE tab_primary_key(id serial, val text);\");\n+$old_sub->safe_psql('postgres',\n+ \"CREATE TABLE tab_primary_key(id serial PRIMARY KEY, val text);\");\n+$publisher->safe_psql('postgres',\n\n\nMaybe it is not necessary, but won't it be better if the publisher\ntable also has a primary key (so DDL matches its table name)?\n\n~~~\n\n28.\n+# Add a row in subscriber so that the table sync will fail.\n+$old_sub->safe_psql('postgres',\n+ \"INSERT INTO tab_primary_key values(1, 'before initial sync')\");\n\nThe comment should be slightly more descriptive by saying the reason\nit will fail is that you deliberately inserted the same PK value\nagain.\n\n~~~\n\n29.\n+my $started_query =\n+ \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'd';\";\n+$old_sub->poll_query_until('postgres', $started_query)\n+ or die \"Timed out while waiting for subscriber to synchronize data\";\n\nSince this cannot synchronize the table 
data, maybe the message should\nbe more like \"Timed out while waiting for the table state to become\n'd' (datasync)\"\n\n\n~~~\n\n30.\n+command_fails(\n+ [\n+ 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n+ '-D', $new_sub->data_dir, '-b', $bindir,\n+ '-B', $bindir, '-s', $new_sub->host,\n+ '-p', $old_sub->port, '-P', $new_sub->port,\n+ $mode, '--check',\n+ ],\n+ 'run of pg_upgrade --check for old instance with incorrect sub rel');\n\n/with incorrect sub rel/with incorrect sub rel state/ (??)\n\n~~~\n\n31.\n+# ------------------------------------------------------\n+# Check that pg_upgrade doesn't detect any problem once all the subscription's\n+# relation are in 'r' (ready) state.\n+# ------------------------------------------------------\n\n\n31a.\n/relation/relations/\n\n~\n\n31b.\nDo you think that comment is correct? All you are doing here is\nallowing the old_sub to proceed because there is no longer any\nconflict -- but isn't that just normal pub/sub behaviour that has\nnothing to do with pg_upgrade?\n\n~~~\n\n32.\n+# Stop the old subscriber, insert a row in each table while it's down and add\n+# t2 to the publication\n\n/in each table/in each publisher table/\n\nAlso, it is not each table -- it's only t1 and t2; not tab_primary_key.\n\n~~~\n\n33.\n+ $new_sub->safe_psql('postgres', \"SELECT count(*) FROM pg_subscription_rel\");\n+is($result, qq(2), \"There should be 2 rows in pg_subscription_rel\");\n\n/2 rows in pg_subscription_rel/2 rows in pg_subscription_rel\n(representing t1 and tab_primary_key)/\n\n======\n\n34. 
binary_upgrade_create_sub_rel_state\n\n+{ oid => '8404', descr => 'for use by pg_upgrade (relation for\npg_subscription_rel)',\n+ proname => 'binary_upgrade_create_sub_rel_state', proisstrict => 'f',\n+ provolatile => 'v', proparallel => 'u', prorettype => 'void',\n+ proargtypes => 'text oid char pg_lsn',\n+ prosrc => 'binary_upgrade_create_sub_rel_state' },\n\nAs mentioned in a previous review comment #9, I felt this function\nshould have a different name: binary_upgrade_add_sub_rel_state.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 6 Nov 2023 13:20:49 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 6 Nov 2023 at 07:51, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch v11-0001\n>\n> ======\n> Commit message\n>\n> 1.\n> The subscription's replication origin are needed to ensure\n> that we don't replicate anything twice.\n>\n> ~\n>\n> /are needed/is needed/\n\nModified\n\n>\n> 2.\n> Author: Julien Rouhaud\n> Reviewed-by: FIXME\n> Discussion: https://postgr.es/m/20230217075433.u5mjly4d5cr4hcfe@jrouhaud\n>\n> ~\n>\n> Include Vignesh as another author.\n\nModified\n\n> ======\n> doc/src/sgml/ref/pgupgrade.sgml\n>\n> 3.\n> + <application>pg_upgrade</application> attempts to migrate subscription\n> + dependencies which includes the subscription tables information present in\n> + <link linkend=\"catalog-pg-subscription-rel\">pg_subscription_rel</link>\n> + system table and the subscription replication origin which\n> + will help in continuing logical replication from where the old subscriber\n> + was replicating. This helps in avoiding the need for setting up the\n>\n> I became a bit lost reading paragraph due to the multiple 'which'...\n>\n> SUGGESTION\n> pg_upgrade attempts to migrate subscription dependencies which\n> includes the subscription table information present in\n> pg_subscription_rel system\n> catalog and also the subscription replication origin. This allows\n> logical replication on the new subscriber to continue from where the\n> old subscriber was up to.\n\nModified\n\n> ~~~\n>\n> 4.\n> + was replicating. This helps in avoiding the need for setting up the\n> + subscription objects manually which requires truncating all the\n> + subscription tables and setting the logical replication slots. 
Migration\n>\n> SUGGESTION\n> Having the ability to migrate subscription objects avoids the need to\n> set them up manually, which would require truncating all the\n> subscription tables and setting the logical replication slots.\n\nI have removed this\n\n> ~\n>\n> TBH, I am wondering what is the purpose of this sentence. It seems\n> more like a justification for the patch, but does the user need to\n> know all this?\n>\n> ~~~\n>\n> 5.\n> + <para>\n> + All the subscription tables in the old subscriber should be in\n> + <literal>i</literal> (initialize), <literal>r</literal> (ready) or\n> + <literal>s</literal> (synchronized). This can be verified by checking\n> + <link linkend=\"catalog-pg-subscription-rel\">pg_subscription_rel</link>.<structfield>srsubstate</structfield>.\n> + </para>\n>\n> /should be in/should be in state/\n\nModified\n\n> ~~~\n>\n> 6.\n> + <para>\n> + The replication origin entry corresponding to each of the subscriptions\n> + should exist in the old cluster. This can be checking\n> + <link linkend=\"catalog-pg-subscription\">pg_subscription</link> and\n> + <link linkend=\"catalog-pg-replication-origin\">pg_replication_origin</link>\n> + system tables.\n> + </para>\n>\n> missing words?\n>\n> /This can be checking/This can be found by checking/\n\nModified\n\n> ~~~\n>\n> 7.\n> + <para>\n> + The subscriptions will be migrated to new cluster in disabled state, they\n> + can be enabled after upgrade by following the steps:\n> + </para>\n>\n> The first bullet also says \"Enable the subscription...\" so I think\n> this paragraph should be worded like the below.\n>\n> SUGGESTION\n> The subscriptions will be migrated to the new cluster in a disabled\n> state. 
After migration, do this:\n\nModified\n\n> ======\n> src/backend/catalog/pg_subscription.c\n>\n> 8.\n> #include \"nodes/makefuncs.h\"\n> +#include \"replication/origin.h\"\n> +#include \"replication/worker_internal.h\"\n> #include \"storage/lmgr.h\"\n>\n> Why does this change need to be in the patch when there are no other\n> code changes in this file?\n\nModified\n\n> ======\n> src/backend/utils/adt/pg_upgrade_support.c\n>\n> 9. binary_upgrade_create_sub_rel_state\n>\n> IMO a better name for this function would be\n> 'binary_upgrade_add_sub_rel_state' (because it delegates to\n> AddSubscriptionRelState).\n>\n> Then it would obey the same name pattern as the other function\n> 'binary_upgrade_replorigin_advance' (which delegates to\n> replorigin_advance).\n\nModified\n\n> ~~~\n>\n> 10.\n> +/*\n> + * binary_upgrade_create_sub_rel_state\n> + *\n> + * Add the relation with the specified relation state to pg_subscription_rel\n> + * table.\n> + */\n> +Datum\n> +binary_upgrade_create_sub_rel_state(PG_FUNCTION_ARGS)\n> +{\n> + Relation rel;\n> + HeapTuple tup;\n> + Oid subid;\n> + Form_pg_subscription form;\n> + char *subname;\n> + Oid relid;\n> + char relstate;\n> + XLogRecPtr sublsn;\n>\n> 10a.\n> /to pg_subscription_rel table./to pg_subscription_rel catalog./\n\nModified\n\n> ~\n>\n> 10b.\n> Maybe it would be helpful if the function argument were documented\n> up-front in the function-comment, or in the variable declarations.\n>\n> SUGGESTION\n> char *subname; /* ARG0 = subscription name */\n> Oid relid; /* ARG1 = relation Oid */\n> char relstate; /* ARG2 = subrel state */\n> XLogRecPtr sublsn; /* ARG3 (optional) = subscription lsn */\n\nI felt the variables are self-explanatory in this case and also\nconsistent with other functions.\n\n> ~~~\n>\n> 11.\n> if (PG_ARGISNULL(3))\n> sublsn = InvalidXLogRecPtr;\n> else\n> sublsn = PG_GETARG_LSN(3);\n> FWIW, I'd write that as a one-line ternary assignment allowing all the\n> args to be grouped nicely together.\n>\n> 
SUGGESTION\n> sublsn = PG_ARGISNULL(3) ? InvalidXLogRecPtr : PG_GETARG_LSN(3);\n\nModified\n\n> ~~~\n>\n> 12. binary_upgrade_replorigin_advance\n>\n> /*\n> * binary_upgrade_replorigin_advance\n> *\n> * Update the remote_lsn for the subscriber's replication origin.\n> */\n> Datum\n> binary_upgrade_replorigin_advance(PG_FUNCTION_ARGS)\n> {\n> Relation rel;\n> HeapTuple tup;\n> Oid subid;\n> Form_pg_subscription form;\n> char *subname;\n> XLogRecPtr sublsn;\n> char originname[NAMEDATALEN];\n> RepOriginId originid;\n> ~\n>\n> Similar to previous comment #10b. Maybe it would be helpful if the\n> function argument were documented up-front in the function-comment, or\n> in the variable declarations.\n>\n> SUGGESTION\n> char originname[NAMEDATALEN];\n> RepOriginId originid;\n> char *subname; /* ARG0 = subscription name */\n> XLogRecPtr sublsn; /* ARG1 = subscription lsn */\n\nI felt the variables are self-explanatory in this case and also\nconsistent with other functions.\n\n> ~~~\n>\n> 13.\n> + subname = text_to_cstring(PG_GETARG_TEXT_PP(0));\n> +\n> + if (PG_ARGISNULL(1))\n> + sublsn = InvalidXLogRecPtr;\n> + else\n> + sublsn = PG_GETARG_LSN(1);\n>\n> Similar to previous comment #11. FWIW, I'd write that as a one-line\n> ternary assignment allowing all the args to be grouped nicely\n> together.\n>\n> SUGGESTION\n> subname = text_to_cstring(PG_GETARG_TEXT_PP(0));\n> sublsn = PG_ARGISNULL(1) ? InvalidXLogRecPtr : PG_GETARG_LSN(1);\n\nModified\n\n> ======\n> src/bin/pg_dump/pg_dump.c\n>\n> 14. 
getSubscriptionTables\n>\n> +/*\n> + * getSubscriptionTables\n> + * get information about subscription membership for dumpable tables, this\n> + * will be used only in binary-upgrade mode.\n> + */\n>\n> Should use multiple sentences.\n>\n> SUGGESTION\n> Get information about subscription membership for dumpable tables.\n> This will be used only in binary-upgrade mode.\n\nModified\n\n> ~~~\n>\n> 15.\n> + /* Get subscription relation fields */\n> + i_srsubid = PQfnumber(res, \"srsubid\");\n> + i_srrelid = PQfnumber(res, \"srrelid\");\n> + i_srsubstate = PQfnumber(res, \"srsubstate\");\n> + i_srsublsn = PQfnumber(res, \"srsublsn\");\n>\n> Might it be better to say \"Get pg_subscription_rel attributes\"?\n\nModified\n\n> ~~~\n>\n> 16. getSubscriptions\n>\n> + appendPQExpBufferStr(query, \"o.remote_lsn\\n\");\n> appendPQExpBufferStr(query,\n> \"FROM pg_subscription s\\n\"\n> + \"LEFT JOIN pg_replication_origin_status o \\n\"\n> + \" ON o.external_id = 'pg_' || s.oid::text \\n\"\n> \"WHERE s.subdbid = (SELECT oid FROM pg_database\\n\"\n> \" WHERE datname = current_database())\");\n>\n> ~\n>\n> 16a.\n> Should that \"remote_lsn\" have an alias like \"suboriginremotelsn\" so\n> that it matches the later field assignment better?\n\nModified\n\n> ~\n>\n> 16b.\n> Probably these catalogs should be qualified using \"pg_catalog.\".\n\nModified\n\n> ~~~\n>\n> 17. 
dumpSubscriptionTable\n>\n> +/*\n> + * dumpSubscriptionTable\n> + * dump the definition of the given subscription table mapping, this will be\n> + * used only for upgrade operation.\n> + */\n>\n> Make this comment consistent with the other one for getSubscriptionTables:\n> - split into multiple sentences\n> - use the same terminology \"binary-upgrade mode\" versus \"upgrade operation'.\n\nModified\n\n> ~~~\n>\n> 18.\n> + /*\n> + * binary_upgrade_create_sub_rel_state will add the subscription\n> + * relation to pg_subscripion_rel table, this is supported only for\n> + * upgrade operation.\n> + */\n>\n> Split into multiple sentences.\n\nModified\n\n> ======\n> src/bin/pg_dump/pg_dump_sort.c\n>\n> 19.\n> + case DO_SUBSCRIPTION_REL:\n> + snprintf(buf, bufsize,\n> + \"SUBSCRIPTION TABLE (ID %d)\",\n> + obj->dumpId);\n> + return;\n>\n> Should it include the OID (like for DO PUBLICATION_TABLE)?\n\nModified\n\n> ======\n> src/bin/pg_upgrade/check.c\n>\n> 20.\n> check_for_reg_data_type_usage(&old_cluster);\n> check_for_isn_and_int8_passing_mismatch(&old_cluster);\n>\n> + check_for_subscription_state(&old_cluster);\n> +\n>\n> There seems no reason anymore for this check to be separated from all\n> the other checks. Just remove the blank line.\n\nModified\n\n> ~~~\n>\n> 21. 
check_for_subscription_state\n>\n> +/*\n> + * check_for_subscription_state()\n> + *\n> + * Verify that each of the subscriptions have all their corresponding tables in\n> + * ready state.\n> + */\n> +static void\n> +check_for_subscription_state(ClusterInfo *cluster)\n>\n> /have/has/\n>\n> This comment only refers to 'ready' state, but perhaps it is\n> misleading (or not entirely correct) because later the SQL is testing\n> for more than just the READY state:\n>\n> + \"WHERE srsubstate NOT IN ('i', 's', 'r') \"\n\nModified\n\n> ~~~\n>\n> 22.\n> + res = executeQueryOrDie(conn,\n> + \"SELECT s.subname, c.relname, n.nspname \"\n> + \"FROM pg_catalog.pg_subscription_rel r \"\n> + \"LEFT JOIN pg_catalog.pg_subscription s\"\n> + \" ON r.srsubid = s.oid \"\n> + \"LEFT JOIN pg_catalog.pg_class c\"\n> + \" ON r.srrelid = c.oid \"\n> + \"LEFT JOIN pg_catalog.pg_namespace n\"\n> + \" ON c.relnamespace = n.oid \"\n> + \"WHERE srsubstate NOT IN ('i', 's', 'r') \"\n> + \"ORDER BY s.subname\");\n>\n> If you are going to check 'i', 's', and 'r' then I thought this\n> statement should maybe have some comment about why those states.\n\nModified\n\n> ~~~\n>\n> 23.\n> + pg_fatal(\"Your installation contains subscription(s) with\\n\"\n> + \"Subscription not having origin and/or subscription relation(s) not\n> in ready state.\\n\"\n> + \"A list of subscription not having origin and/or\\n\"\n> + \"subscription relation(s) not in ready state is in the file: %s\",\n> + output_path);\n>\n> 23a.\n> This message seems to just be saying the same thing 2 times.\n>\n> Is also should use newlines and spaces more like the other similar\n> pg_patals in this file (e.g. the %s is on next line etc).\n>\n> SUGGESTION\n> Your installation contains subscriptions without origin or having\n> relations not in a ready state.\\n\n> A list of the problem subscriptions is in the file:\\n\n> %s\n\nModified\n\n> ~\n>\n> 23b.\n> Same question about 'not in ready state'. 
Is that entirely correct?\n\nModified\n\n> ======\n> src/bin/pg_upgrade/t/004_subscription.pl\n>\n> 24.\n> +sub insert_line\n> +{\n> + my $payload = shift;\n> +\n> + foreach (\"t1\", \"t2\")\n> + {\n> + $publisher->safe_psql('postgres',\n> + \"INSERT INTO \" . $_ . \" (val) VALUES('$payload')\");\n> + }\n> +}\n>\n> For clarity, maybe call this function 'insert_line_at_pub'\n\nModified\n\n> ~~~\n>\n> 25.\n> +# ------------------------------------------------------\n> +# Check that pg_upgrade is succesful when all tables are in ready state.\n> +# ------------------------------------------------------\n>\n> /succesful/successful/\n\nModified\n\n> ~~~\n>\n> 26.\n> +command_ok(\n> + [\n> + 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n> + '-D', $new_sub->data_dir, '-b', $bindir,\n> + '-B', $bindir, '-s', $new_sub->host,\n> + '-p', $old_sub->port, '-P', $new_sub->port,\n> + $mode, '--check',\n> + ],\n> + 'run of pg_upgrade --check for old instance with invalid remote_lsn');\n>\n> This is the command for the \"success\" case. 
Why is the message part\n> referring to \"invalid remote_lsn\"?\n\nModified\n\n> ~~~\n>\n> 27.\n> +$publisher->safe_psql('postgres',\n> + \"CREATE TABLE tab_primary_key(id serial, val text);\");\n> +$old_sub->safe_psql('postgres',\n> + \"CREATE TABLE tab_primary_key(id serial PRIMARY KEY, val text);\");\n> +$publisher->safe_psql('postgres',\n>\n>\n> Maybe it is not necessary, but won't it be better if the publisher\n> table also has a primary key (so DDL matches its table name)?\n\nModified\n\n> ~~~\n>\n> 28.\n> +# Add a row in subscriber so that the table sync will fail.\n> +$old_sub->safe_psql('postgres',\n> + \"INSERT INTO tab_primary_key values(1, 'before initial sync')\");\n>\n> The comment should be slightly more descriptive by saying the reason\n> it will fail is that you deliberately inserted the same PK value\n> again.\n\nModified\n\n> ~~~\n>\n> 29.\n> +my $started_query =\n> + \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'd';\";\n> +$old_sub->poll_query_until('postgres', $started_query)\n> + or die \"Timed out while waiting for subscriber to synchronize data\";\n>\n> Since this cannot synchronize the table data, maybe the message should\n> be more like \"Timed out while waiting for the table state to become\n> 'd' (datasync)\"\n\nModified\n\n> ~~~\n>\n> 30.\n> +command_fails(\n> + [\n> + 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n> + '-D', $new_sub->data_dir, '-b', $bindir,\n> + '-B', $bindir, '-s', $new_sub->host,\n> + '-p', $old_sub->port, '-P', $new_sub->port,\n> + $mode, '--check',\n> + ],\n> + 'run of pg_upgrade --check for old instance with incorrect sub rel');\n>\n> /with incorrect sub rel/with incorrect sub rel state/ (??)\n\nModified\n\n> ~~~\n>\n> 31.\n> +# ------------------------------------------------------\n> +# Check that pg_upgrade doesn't detect any problem once all the subscription's\n> +# relation are in 'r' (ready) state.\n> +# ------------------------------------------------------\n>\n>\n> 31a.\n> 
/relation/relations/\n>\n\nI have removed this comment\n\n>\n> 31b.\n> Do you think that comment is correct? All you are doing here is\n> allowing the old_sub to proceed because there is no longer any\n> conflict -- but isn't that just normal pub/sub behaviour that has\n> nothing to do with pg_upgrade?\n\nI have removed this comment\n\n> ~~~\n>\n> 32.\n> +# Stop the old subscriber, insert a row in each table while it's down and add\n> +# t2 to the publication\n>\n> /in each table/in each publisher table/\n>\n> Also, it is not each table -- it's only t1 and t2; not tab_primary_key.\n\nModified\n\n> ~~~\n>\n> 33.\n> + $new_sub->safe_psql('postgres', \"SELECT count(*) FROM pg_subscription_rel\");\n> +is($result, qq(2), \"There should be 2 rows in pg_subscription_rel\");\n>\n> /2 rows in pg_subscription_rel/2 rows in pg_subscription_rel\n> (representing t1 and tab_primary_key)/\n\nModified\n\n> ======\n>\n> 34. binary_upgrade_create_sub_rel_state\n>\n> +{ oid => '8404', descr => 'for use by pg_upgrade (relation for\n> pg_subscription_rel)',\n> + proname => 'binary_upgrade_create_sub_rel_state', proisstrict => 'f',\n> + provolatile => 'v', proparallel => 'u', prorettype => 'void',\n> + proargtypes => 'text oid char pg_lsn',\n> + prosrc => 'binary_upgrade_create_sub_rel_state' },\n>\n> As mentioned in a previous review comment #9, I felt this function\n> should have a different name: binary_upgrade_add_sub_rel_state.\n\nModified\n\nThanks for the comments, the attached v12 version patch has the\nchanges for the same.\n\nRegards,\nVignesh",
"msg_date": "Wed, 8 Nov 2023 11:51:06 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
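The check_for_subscription_state() discussion above (comments 21-23) boils down to flagging every pg_subscription_rel row whose srsubstate falls outside the set of states that are safe to upgrade. A rough sketch of that filter in Python, where the state letters come from the quoted SQL but the helper name and tuple shape are invented for illustration:

```python
# Hypothetical sketch of the pg_upgrade subscription-state check discussed
# above: only 'i' (init), 's' (synchronized) and 'r' (ready) relations are
# safe to carry across an upgrade; anything else is reported as a problem.
UPGRADABLE_STATES = {'i', 's', 'r'}

def find_problem_relations(sub_rels):
    """sub_rels: iterable of (subname, relname, srsubstate) tuples."""
    return [(sub, rel, state)
            for sub, rel, state in sub_rels
            if state not in UPGRADABLE_STATES]
```

Each returned tuple would correspond to one line of the problem-subscriptions file that the pg_fatal message points at.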
{
"msg_contents": "On Thu, 2 Nov 2023 at 17:01, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Nov 2, 2023 at 3:41 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > I have slightly modified it now and also made it consistent with the\n> > replication slot upgrade, but I was not sure if we need to add\n> > anything more. Let me know if anything else needs to be added. I will\n> > add it.\n> >\n>\n> I think it is important for users to know how they upgrade their\n> multi-node setup. Say a two-node setup where replication is working\n> both ways (aka each node has both publications and subscriptions),\n> similarly, how to upgrade, if there are multiple nodes involved?\n\nI was thinking of documenting something like this:\nSteps to upgrade logical replication clusters:\nWarning:\nUpgrading logical replication nodes requires multiple steps to be\nperformed. Because not all operations are transactional, the user is\nadvised to take backups.\nBackups can be taken as described in\nhttps://www.postgresql.org/docs/current/backup.html\n\nUpgrading 2 node logical replication cluster:\n1) Let's say publisher is in Node1 and subscriber is in Node2.\n2) Stop the publisher server in Node1.\n3) Disable the subscriptions in Node2.\n4) Upgrade the publisher node Node1 to Node1_new.\n5) Start the publisher node Node1_new.\n6) Stop the subscriber server in Node2.\n7) Upgrade the subscriber node Node2 to Node2_new.\n8) Start the subscriber node Node2_new.\n9) Alter the subscription connections in Node2_new to point from Node1\nto Node1_new.\n10) Enable the subscriptions in Node2_new.\n11) Create any tables that were created in Node1_new between step-5\nand now and Refresh the publications.\n\nSteps to upgrade cascaded logical replication clusters:\n1) Let's say we have a cascaded logical replication setup\nNode1->Node2->Node3. 
Here Node2 is subscribing to Node1 and Node3 is\nsubscribing to Node2.\n2) Stop the server in Node1.\n3) Disable the subscriptions in Node2 and Node3.\n4) Upgrade the publisher node Node1 to Node1_new.\n5) Start the publisher node Node1_new.\n6) Stop the server in Node2.\n7) Upgrade the subscriber node Node2 to Node2_new.\n8) Start the subscriber node Node2_new.\n9) Alter the subscription connections in Node2_new to point from Node1\nto Node1_new.\n10) Enable the subscriptions in Node2_new.\n11) Create any tables that were created in Node1_new between step-5\nand now and Refresh the publications.\n12) Stop the server in Node3.\n13) Upgrade the subscriber node Node3 to Node3_new.\n14) Start the subscriber node Node3_new.\n15) Alter the subscription connections in Node3_new to point from\nNode2 to Node2_new.\n16) Enable the subscriptions in Node3_new.\n17) Create any tables that were created in Node2_new between step-8\nand now and Refresh the publications.\n\nUpgrading 2 node circular logical replication cluster:\n1) Let's say we have a circular logical replication setup Node1->Node2\n& Node2->Node1. 
Here Node2 is subscribing to Node1 and Node1 is\nsubscribing to Node2.\n2) Stop the server in Node1.\n3) Disable the subscriptions in Node2.\n4) Upgrade the node Node1 to Node1_new.\n5) Start the node Node1_new.\n6) Enable the subscriptions in Node1_new.\n7) Wait till all the incremental changes are synchronized.\n8) Alter the subscription connections in Node2 to point from Node1 to Node1_new.\n9) Create any tables that were created in Node2 between step-2 and now\nand Refresh the publications.\n10) Stop the server in Node2.\n11) Disable the subscriptions in Node1_new.\n12) Upgrade the node Node2 to Node2_new.\n13) Start the subscriber node Node2_new.\n14) Enable the subscriptions in Node2_new.\n15) Alter the subscription connections in Node1_new to point from Node2 to\nNode2_new.\n16) Create any tables that were created in Node1_new between step-10\nand now and Refresh the publications.\n\nI have done basic testing with this; I will do further testing and\nupdate it if I find any issues.\nLet me know if this idea is ok or we need something different.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 8 Nov 2023 22:52:29 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
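The ordering in the lists above is what carries the safety argument: subscriptions are disabled before their publisher is touched, and only re-enabled once the new subscriber points at the new publisher. As a sanity check, the two-node procedure can be encoded as data and the ordering asserted; this is purely an illustration (the step strings paraphrase the list above, and none of this is a real pg_upgrade interface):

```python
# Hypothetical encoding of the two-node upgrade procedure described above,
# useful for sanity-checking that dependent steps stay in order.
TWO_NODE_UPGRADE_STEPS = [
    "stop publisher Node1",
    "disable subscriptions on Node2",
    "upgrade Node1 to Node1_new",
    "start Node1_new",
    "stop subscriber Node2",
    "upgrade Node2 to Node2_new",
    "start Node2_new",
    "alter subscription connections to point at Node1_new",
    "enable subscriptions on Node2_new",
    "create missing tables and refresh publications",
]

def step_index(steps, needle):
    # Index of the first step containing the given keyword.
    return next(i for i, s in enumerate(steps) if needle in s)
```

The cascaded and circular variants follow the same pattern, just repeated per node pair.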
{
"msg_contents": "On Wed, Nov 08, 2023 at 10:52:29PM +0530, vignesh C wrote:\n> Upgrading logical replication nodes requires multiple steps to be\n> performed. Because not all operations are transactional, the user is\n> advised to take backups.\n> Backups can be taken as described in\n> https://www.postgresql.org/docs/current/backup.html\n\nThere's a similar risk with --link if the upgrade fails after the new\ncluster was started and the files linked began getting modified, so\nthat's something users would be OK with, I guess.\n\n> Upgrading 2 node logical replication cluster:\n> 1) Let's say publisher is in Node1 and subscriber is in Node2.\n> 2) Stop the publisher server in Node1.\n> 3) Disable the subscriptions in Node2.\n> 4) Upgrade the publisher node Node1 to Node1_new.\n> 5) Start the publisher node Node1_new.\n> 6) Stop the subscriber server in Node2.\n> 7) Upgrade the subscriber node Node2 to Node2_new.\n> 8) Start the subscriber node Node2_new.\n> 9) Alter the subscription connections in Node2_new to point from Node1\n> to Node1_new.\n\nDo they really need to do so in a pg_upgrade flow? The connection\nendpoint would likely be the same for transparency, no?\n\n> 10) Enable the subscriptions in Node2_new.\n> 11) Create any tables that were created in Node1_new between step-5\n> and now and Refresh the publications.\n\nHow about the opposite stance, where an upgrade flow does first the\nsubscriber and then the publisher? Would this be worth mentioning?\nCase 3 touches that as nodes hold both publishers and subscribers.\n\n> Steps to upgrade cascaded logical replication clusters:\n> 1) Let's say we have a cascaded logical replication setup\n> Node1->Node2->Node3. 
Here Node2 is subscribing to Node1 and Node3 is\n> subscribing to Node2.\n> 2) Stop the server in Node1.\n> 3) Disable the subscriptions in Node2 and Node3.\n> 4) Upgrade the publisher node Node1 to Node1_new.\n> 5) Start the publisher node Node1_new.\n> 6) Stop the server in Node2.\n> 7) Upgrade the subscriber node Node2 to Node2_new.\n> 8) Start the subscriber node Node2_new.\n> 9) Alter the subscription connections in Node2_new to point from Node1\n> to Node1_new.\n\nSame here.\n\n> 10) Enable the subscriptions in Node2_new.\n> 11) Create any tables that were created in Node1_new between step-5\n> and now and Refresh the publications.\n> 12) Stop the server in Node3.\n> 13) Upgrade the subscriber node Node3 to Node3_new.\n> 14) Start the subscriber node Node3_new.\n> 15) Alter the subscription connections in Node3_new to point from\n> Node2 to Node2_new.\n> 16) Enable the subscriptions in Node3_new.\n> 17) Create any tables that were created in Node2_new between step-8\n> and now and Refresh the publications.\n> \n> Upgrading 2 node circular logical replication cluster:\n> 1) Let's say we have a circular logical replication setup Node1->Node2\n> & Node2->Node1. 
Here Node2 is subscribing to Node1 and Node1 is\n> subscribing to Node2.\n> 2) Stop the server in Node1.\n> 3) Disable the subscriptions in Node2.\n> 4) Upgrade the node Node1 to Node1_new.\n> 5) Start the node Node1_new.\n> 6) Enable the subscriptions in Node1_new.\n> 7) Wait till all the incremental changes are synchronized.\n> 8) Alter the subscription connections in Node2 to point from Node1 to Node1_new.\n> 9) Create any tables that were created in Node2 between step-2 and now\n> and Refresh the publications.\n> 10) Stop the server in Node2.\n> 11) Disable the subscriptions in Node1_new.\n> 12) Upgrade the node Node2 to Node2_new.\n> 13) Start the subscriber node Node2_new.\n> 14) Enable the subscriptions in Node2_new.\n> 15) Alter the subscription connections in Node1_new to point from Node2 to\n> Node2_new.\n> 16) Create any tables that were created in Node1_new between step-10\n> and now and Refresh the publications.\n> \n> I have done basic testing with this; I will do further testing and\n> update it if I find any issues.\n> Let me know if this idea is ok or we need something different.\n\nI have not tested, but having documentation along these lines is good\nbecause it becomes clear what steps one needs to do.\n\nAnother thing that I doubt is worth mentioning is the schema changes\nthat may happen. We could just say that the schema should be fixed\nwhile running an upgrade, which is kind of fair to expect in logical\nsetups for tables replicated anyway?\n\nDo you think that there would be an issue in automating such tests\nonce support for the upgrade of subscribers is done (hopefully)? The\nfirst scenario may not need extra coverage if we have already\n003_logical_slots.pl and a second file to test for the subscriber\npart, though.\n--\nMichael",
"msg_date": "Thu, 9 Nov 2023 07:56:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Thanks for addressing my previous review comments.\n\nI re-checked the latest patch v12-0001 and found the following:\n\n======\nCommit message\n\n1.\nThe new SQL binary_upgrade_create_sub_rel_state function has the following\nsyntax:\nSELECT binary_upgrade_create_sub_rel_state(subname text, relid oid,\nstate char [,sublsn pg_lsn])\n\n~\n\nLooks like v12 accidentally forgot to update this to the modified\nfunction name 'binary_upgrade_add_sub_rel_state'\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 9 Nov 2023 13:14:05 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Nov 8, 2023 at 10:52 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Upgrading 2 node circular logical replication cluster:\n> 1) Let's say we have a circular logical replication setup Node1->Node2\n> & Node2->Node1. Here Node2 is subscribing to Node1 and Node1 is\n> subscribing to Node2.\n> 2) Stop the server in Node1.\n> 3) Disable the subscriptions in Node2.\n> 4) Upgrade the node Node1 to Node1_new.\n> 5) Start the node Node1_new.\n> 6) Enable the subscriptions in Node1_new.\n> 7) Wait till all the incremental changes are synchronized.\n> 8) Alter the subscription connections in Node2 to point from Node1 to Node1_new.\n> 9) Create any tables that were created in Node2 between step-2 and now\n> and Refresh the publications.\n>\n\nI haven't reviewed all the steps yet but here steps 7 and 9 seem to\nrequire some validation. How can incremental changes be synchronized\ntill all the new tables are created and synced before step 7?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Nov 2023 11:48:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Nov 09, 2023 at 01:14:05PM +1100, Peter Smith wrote:\n> Looks like v12 accidentally forgot to update this to the modified\n> function name 'binary_upgrade_add_sub_rel_state'\n\nThis v12 is overall cleaner than its predecessors. Nice to see.\n\n+my $result = $publisher->safe_psql('postgres', \"SELECT count(*) FROM t1\");\n+is($result, qq(1), \"check initial t1 table data on publisher\");\n+$result = $publisher->safe_psql('postgres', \"SELECT count(*) FROM t2\");\n+is($result, qq(1), \"check initial t1 table data on publisher\");\n+$result = $old_sub->safe_psql('postgres', \"SELECT count(*) FROM t1\");\n+is($result, qq(1), \"check initial t1 table data on the old subscriber\");\n+$result = $old_sub->safe_psql('postgres', \"SELECT count(*) FROM t2\");\n\nI'd argue that t1 and t2 should have less generic names. t1 is used\nto check that the upgrade process works, while t2 is added to the\npublication after upgrading the subscriber. Say something like\ntab_upgraded or tab_not_upgraded?\n\n+my $synced_query =\n+ \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');\";\n\nPerhaps it would be safer to use a query that checks the number of\nrelations in 'r' state? This query would return true if\npg_subscription_rel has no tuples.\n\n+# Table will be in 'd' (data is being copied) state as table sync will fail\n+# because of primary key constraint error.\n+my $started_query =\n+ \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'd';\";\n\nRelying on a pkey error to enforce an incorrect state is a good trick.\nNice.\n\n+command_fails(\n+ [\n+ 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n+ '-D', $new_sub->data_dir, '-b', $bindir,\n+ '-B', $bindir, '-s', $new_sub->host,\n+ '-p', $old_sub->port, '-P', $new_sub->port,\n+ $mode, '--check',\n+ ],\n+ 'run of pg_upgrade --check for old instance with relation in \\'d\\' datasync(invalid) state');\n+rmtree($new_sub->data_dir . 
\"/pg_upgrade_output.d\");\n\nOkay by me to not stop the cluster for the --check to shave a few\ncycles. It's a bit sad that we don't cross-check the contents of\nsubscription_state.txt before removing pg_upgrade_output.d. Finding\nthe file is easy even if the subdir where it is included is not a\nconstant name. Then it is possible to apply a regexp with the\ncontents consumed by a slurp_file().\n\n+my $remote_lsn = $old_sub->safe_psql('postgres',\n+ \"SELECT remote_lsn FROM pg_replication_origin_status\");\nPerhaps you've not noticed, but this would be 0/0 most of the time.\nHowever the intention is to check for a valid LSN to make sure that\nthe origin is set, no?\n\nI am wondering whether this should use a bit more data than just one\ntuple, say at least two transactions, one of them with a multi-value\nINSERT?\n\n+# ------------------------------------------------------\n+# Check that pg_upgrade is successful when all tables are in ready state.\n+# ------------------------------------------------------\nThis comment is a bit inconsistent with the states that are accepted,\nbut why not, at least that's predictable.\n\n+ * relation to pg_subscripion_rel table. This will be used only in \n\nTypo: s/pg_subscripion_rel/pg_subscription_rel/.\n\nThis needs some word-smithing to explain the reasons why a state is\nnot needed:\n\n+ /*\n+ * The subscription relation should be in either i (initialize),\n+ * r (ready) or s (synchronized) state as either the replication slot\n+ * is not created or the replication slot is already dropped and the\n+ * required WAL files will be present in the publisher. 
The other\n+ * states are not ok as the worker has dependency on the replication\n+ * slot/origin in these case:\n\nA slot not created yet refers to the 'i' state, while 'r' and 's'\nrefer to a slot created previously but already dropped, right?\nShouldn't this comment tell that rather than mixing the assumptions?\n\n+ * a) SUBREL_STATE_DATASYNC: In this case, the table sync worker will\n+ * try to drop the replication slot but as the replication slots will\n+ * be created with old subscription id in the publisher and the\n+ * upgraded subscriber will not be able to clean the slots in this\n+ * case.\n\nProposal: A relation upgraded while in this state would retain a\nreplication slot, which could not be dropped by the sync worker\nspawned after the upgrade because the subscription ID tracked by the\npublisher does not match anymore.\n\nNote: actually, this would be OK if we are able to keep the OIDs of\nthe subscribers consistent across upgrades? I'm OK to not do nothing\nabout that in this patch, to keep it simpler. Just asking in passing.\n\n+ * b) SUBREL_STATE_FINISHEDCOPY: In this case, the tablesync worker will\n+ * expect the origin to be already existing as the origin is created\n+ * with an old subscription id, tablesync worker will not be able to\n+ * find the origin in this case.\n\nProposal: A tablesync worker spawned to work on a relation upgraded\nwhile in this state would expect an origin ID with the OID of the\nsubscription used before the upgrade, causing it to fail.\n\n+ \"A list of problem subscriptions is in the file:\\n\" \n\nSounds a bit strange, perhaps use an extra \"the\", as of \"the problem\nsubscriptions\"?\n\nCould it be worth mentioning in the docs that one could also DISABLE\nthe subscriptions before running the upgrade?\n\n+ The replication origin entry corresponding to each of the subscriptions\n+ should exist in the old cluster. 
This can be found by checking\n+ <link linkend=\"catalog-pg-subscription\">pg_subscription</link> and\n+ <link linkend=\"catalog-pg-replication-origin\">pg_replication_origin</link>\n+ system tables.\n\nHmm. No need to mention pg_replication_origin_status?\n\nIf I may ask, how did you check that the given relation states were\nOK or not OK? Did you hardcode some wait points in tablesync.c up to\nwhere a state is updated in pg_subscription_rel, then shutdown the\ncluster before the upgrade to maintain the catalog in this state?\nFinally, after the upgrade, you've cross-checked the dependencies on\nthe slots and origins to see that the spawned sync workers turned\ncrazy because of the inconsistencies. Right?\n--\nMichael",
"msg_date": "Thu, 9 Nov 2023 15:53:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
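Several of the review points above (the DATASYNC slot leak, the FINISHEDCOPY origin lookup failure, and the note about keeping subscriber OIDs consistent across upgrades) share one root cause: the slot and origin names embed the subscription OID, as the 'pg_' || oid pattern in the pg_dump query quoted earlier. A toy Python model of that mismatch, with the class and function names invented purely for illustration:

```python
# Toy model of the origin-name mismatch discussed above. Replication
# origins are registered under a name derived from the subscription OID,
# so a worker running under a different OID after the upgrade computes a
# name that was never registered and the lookup fails.
def origin_name(subid):
    return f"pg_{subid}"

class OriginRegistry:
    def __init__(self):
        self._origins = set()

    def create(self, subid):
        self._origins.add(origin_name(subid))

    def lookup(self, subid):
        return origin_name(subid) in self._origins
```

The same naming dependency explains the DATASYNC case: the slot to drop was created under the old OID, so the post-upgrade worker can never clean it up.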
{
"msg_contents": "On Thu, 9 Nov 2023 at 12:23, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Nov 09, 2023 at 01:14:05PM +1100, Peter Smith wrote:\n> > Looks like v12 accidentally forgot to update this to the modified\n> > function name 'binary_upgrade_add_sub_rel_state'\n>\n> This v12 is overall cleaner than its predecessors. Nice to see.\n>\n> +my $result = $publisher->safe_psql('postgres', \"SELECT count(*) FROM t1\");\n> +is($result, qq(1), \"check initial t1 table data on publisher\");\n> +$result = $publisher->safe_psql('postgres', \"SELECT count(*) FROM t2\");\n> +is($result, qq(1), \"check initial t1 table data on publisher\");\n> +$result = $old_sub->safe_psql('postgres', \"SELECT count(*) FROM t1\");\n> +is($result, qq(1), \"check initial t1 table data on the old subscriber\");\n> +$result = $old_sub->safe_psql('postgres', \"SELECT count(*) FROM t2\");\n>\n> I'd argue that t1 and t2 should have less generic names. t1 is used\n> to check that the upgrade process works, while t2 is added to the\n> publication after upgrading the subscriber. Say something like\n> tab_upgraded or tab_not_upgraded?\n\nModified\n\n> +my $synced_query =\n> + \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r');\";\n>\n> Perhaps it would be safer to use a query that checks the number of\n> relations in 'r' state? 
This query would return true if\n> pg_subscription_rel has no tuples.\n\nModified\n\n> +# Table will be in 'd' (data is being copied) state as table sync will fail\n> +# because of primary key constraint error.\n> +my $started_query =\n> + \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'd';\";\n>\n> Relying on a pkey error to enforce an incorrect state is a good trick.\n> Nice.\n\nThat was a better way to get the datasync state without manually changing\nthe pg_subscription_rel catalog.\n\n> +command_fails(\n> + [\n> + 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n> + '-D', $new_sub->data_dir, '-b', $bindir,\n> + '-B', $bindir, '-s', $new_sub->host,\n> + '-p', $old_sub->port, '-P', $new_sub->port,\n> + $mode, '--check',\n> + ],\n> + 'run of pg_upgrade --check for old instance with relation in \\'d\\' datasync(invalid) state');\n> +rmtree($new_sub->data_dir . \"/pg_upgrade_output.d\");\n>\n> Okay by me to not stop the cluster for the --check to shave a few\n> cycles. It's a bit sad that we don't cross-check the contents of\n> subscription_state.txt before removing pg_upgrade_output.d. Finding\n> the file is easy even if the subdir where it is included is not a\n> constant name. 
Then it is possible to apply a regexp with the\n> contents consumed by a slurp_file().\n\nModified\n\n> +my $remote_lsn = $old_sub->safe_psql('postgres',\n> + \"SELECT remote_lsn FROM pg_replication_origin_status\");\n> Perhaps you've not noticed, but this would be 0/0 most of the time.\n> However the intention is to check after a valid LSN to make sure that\n> the origin is set, no?\n\nI have added few more inserts to make remote_lsn not be 0/0\n\n> I am wondering whether this should use a bit more data than just one\n> tuple, say at least two transaction, one of them with a multi-value\n> INSERT?\n\nAdded one more multi-insert\n\n> +# ------------------------------------------------------\n> +# Check that pg_upgrade is successful when all tables are in ready state.\n> +# ------------------------------------------------------\n> This comment is a bit inconsistent with the state that are accepted,\n> but why not, at least that's predictible.\n\nThe key test validation is mentioned in this style of comment\n\n> + * relation to pg_subscripion_rel table. This will be used only in\n>\n> Typo: s/pg_subscripion_rel/pg_subscription_rel/.\n\nModified\n\n> This needs some word-smithing to explain the reasons why a state is\n> not needed:\n>\n> + /*\n> + * The subscription relation should be in either i (initialize),\n> + * r (ready) or s (synchronized) state as either the replication slot\n> + * is not created or the replication slot is already dropped and the\n> + * required WAL files will be present in the publisher. 
The other\n> + * states are not ok as the worker has dependency on the replication\n> + * slot/origin in these case:\n>\n> A slot not created yet refers to the 'i' state, while 'r' and 's'\n> refer to a slot created previously but already dropped, right?\n> Shouldn't this comment tell that rather than mixing the assumptions?\n\nModified\n\n> + * a) SUBREL_STATE_DATASYNC: In this case, the table sync worker will\n> + * try to drop the replication slot but as the replication slots will\n> + * be created with old subscription id in the publisher and the\n> + * upgraded subscriber will not be able to clean the slots in this\n> + * case.\n>\n> Proposal: A relation upgraded while in this state would retain a\n> replication slot, which could not be dropped by the sync worker\n> spawned after the upgrade because the subscription ID tracked by the\n> publisher does not match anymore.\n\nModified\n\n> Note: actually, this would be OK if we are able to keep the OIDs of\n> the subscribers consistent across upgrades? I'm OK to not do nothing\n> about that in this patch, to keep it simpler. 
Just asking in passing.\n\nI will analyze more on this and post the analysis in the subsequent mail.\n\n> + * b) SUBREL_STATE_FINISHEDCOPY: In this case, the tablesync worker will\n> + * expect the origin to be already existing as the origin is created\n> + * with an old subscription id, tablesync worker will not be able to\n> + * find the origin in this case.\n>\n> Proposal: A tablesync worker spawned to work on a relation upgraded\n> while in this state would expect an origin ID with the OID of the\n> subscription used before the upgrade, causing it to fail.\n\nModified\n\n> + \"A list of problem subscriptions is in the file:\\n\"\n>\n> Sounds a bit strange, perhaps use an extra \"the\", as of \"the problem\n> subscriptions\"?\n\nModified\n\n> Could it be worth mentioning in the docs that one could also DISABLE\n> the subscriptions before running the upgrade?\n\nI felt since the changes that we are planning to make won't start the\napply workers during upgrade, there will be no impact even if the\nsubscriptions are enabled. I felt no need to mention it unless we are\nplanning to allow starting of apply workers during upgrade.\n\n> + The replication origin entry corresponding to each of the subscriptions\n> + should exist in the old cluster. This can be found by checking\n> + <link linkend=\"catalog-pg-subscription\">pg_subscription</link> and\n> + <link linkend=\"catalog-pg-replication-origin\">pg_replication_origin</link>\n> + system tables.\n>\n> Hmm. No need to mention pg_replication_origin_status?\n\n When we create origin, the origin status would be created implicitly,\nI felt we need not check on replication origin status and also need\nnot mention it here.\n\n> If I may ask, how did you check that the given relation states were\n> OK or not OK? 
Did you hardcode some wait points in tablesync.c up to\n> where a state is updated in pg_subscription_rel, then shutdown the\n> cluster before the upgrade to maintain the catalog in this state?\n> Finally, after the upgrade, you've cross-checked the dependencies on\n> the slots and origins to see that the spawned sync workers turned\n> crazy because of the inconsistencies. Right?\n\nI did testing along the same lines that you mentioned. Apart from\nthat, I also reviewed the design wherever it uses the old subscription\nid, like in the case of table sync workers: the tablesync worker will\nperform replication using the replication slot and replication origin\ncreated with the old subscription id. I also checked the impact of\nremote_lsn's.\nA few examples: in SUBREL_STATE_DATASYNC state we will try to drop the\nreplication slot once the worker is started, but since the slot was\ncreated with the old subscription id, we will not be able to drop the\nreplication slot, creating a leak. Similarly the problem exists with\nSUBREL_STATE_FINISHEDCOPY, where we will not be able to drop the\norigin created with the old sub id.\n\nThanks for the comments, the attached v13 version patch has the\nchanges for the same.\n\nRegards,\nVignesh",
"msg_date": "Fri, 10 Nov 2023 19:26:18 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, 9 Nov 2023 at 07:44, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Thanks for addressing my previous review comments.\n>\n> I re-checked the latest patch v12-0001 and found the following:\n>\n> ======\n> Commit message\n>\n> 1.\n> The new SQL binary_upgrade_create_sub_rel_state function has the following\n> syntax:\n> SELECT binary_upgrade_create_sub_rel_state(subname text, relid oid,\n> state char [,sublsn pg_lsn])\n>\n> ~\n>\n> Looks like v12 accidentally forgot to update this to the modified\n> function name 'binary_upgrade_add_sub_rel_state'\n\nThis is handled in the v13 version patch posted at:\nhttps://www.postgresql.org/message-id/CALDaNm0mGz6_69BiJTmEqC8Q0U0x2nMZOs3w9btKOHZZpfC2ow%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 10 Nov 2023 19:31:38 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Here are some review comments for patch v13-0001\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n1. getSubscriptionTables\n\n+ int i_srsublsn;\n+ int i;\n+ int cur_rel = 0;\n+ int ntups;\n\nWhat is the difference between 'i' and 'cur_rel'?\n\nAFAICT these represent the same tuple index, in which case you might\nas well throw away 'cur_rel' and only keep 'i'.\n\n~~~\n\n2. getSubscriptionTables\n\n+ for (i = 0; i < ntups; i++)\n+ {\n+ Oid cur_srsubid = atooid(PQgetvalue(res, i, i_srsubid));\n+ Oid relid = atooid(PQgetvalue(res, i, i_srrelid));\n+ TableInfo *tblinfo;\n\nSince this is all new code, using C99 style for loop variable\ndeclaration of 'i' will be better.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n3. check_for_subscription_state\n\n+check_for_subscription_state(ClusterInfo *cluster)\n+{\n+ int dbnum;\n+ FILE *script = NULL;\n+ char output_path[MAXPGPATH];\n+ int ntup;\n+\n+ /* Subscription relations state can be migrated since PG17. */\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) < 1700)\n+ return;\n+\n+ prep_status(\"Checking for subscription state\");\n+\n+ snprintf(output_path, sizeof(output_path), \"%s/%s\",\n+ log_opts.basedir,\n+ \"subscription_state.txt\");\n\nI felt this filename ought to be more like\n'subscriptions_with_bad_state.txt' because the current name looks like\na normal logfile with nothing to indicate that it is only for the\nstates of the \"bad\" subscriptions.\n\n~~~\n\n4.\n+ for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\n+ {\n\nSince this is all new code, using C99 style for loop variable\ndeclaration of 'dbnum' will be better.\n\n~~~\n\n5.\n+ * a) SUBREL_STATE_DATASYNC:A relation upgraded while in this state\n+ * would retain a replication slot, which could not be dropped by the\n+ * sync worker spawned after the upgrade because the subscription ID\n+ * tracked by the publisher does not match anymore.\n\nmissing whitespace\n\n/SUBREL_STATE_DATASYNC:A relation/SUBREL_STATE_DATASYNC: A relation/\n\n======\nKind 
Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 13 Nov 2023 19:21:39 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Nov 10, 2023 at 07:26:18PM +0530, vignesh C wrote:\n> I did testing in the same lines that you mentioned. Apart from that I\n> also reviewed the design where it was using the old subscription id\n> like in case of table sync workers, the tables sync worker will use\n> replication using old subscription id. replication slot and\n> replication origin. I also checked the impact of remote_lsn's.\n> Few example: IN SUBREL_STATE_DATASYNC state we will try to drop the\n> replication slot once worker is started but since the slot will be\n> created with an old subscription, we will not be able to drop the\n> replication slot and create a leak. Similarly the problem exists with\n> SUBREL_STATE_FINISHEDCOPY where we will not be able to drop the origin\n> created with an old sub id.\n\nYeah, I was playing a bit with these states and I can confirm that\nleaving around a DATASYNC relation in pg_subscription_rel during\nthe upgrade would leave a slot on the publisher of the old cluster,\nwhich is no good. It would be an option to explore later what could\nbe improved, but I'm also looking forward at hearing from the users\nfirst, as what you have here may be enough for the basic purposes we\nare trying to cover. FINISHEDCOPY similarly, is not OK. I was able\nto get an origin lying around after an upgrade.\n\nAnyway, after a closer lookup, I think that your conclusions regarding\nthe states that are allowed in the patch during the upgrade have some\nflaws.\n\nFirst, are you sure that SYNCDONE is OK to keep? This catalog state\nis set in process_syncing_tables_for_sync(), and just after the code\nopens a transaction to clean up the tablesync slot, followed by a\nsecond transaction to clean up the origin. 
However, imagine that\nthere is a failure in dropping the slot, the origin, or just in\ntransaction processing, cannot we finish in a state where the relation\nis marked as SYNCDONE in the catalog but still has an origin and/or a\ntablesync slot lying around? Assuming that SYNCDONE is an OK state\nseems incorrect to me. I am pretty sure that injecting an error in a\ncode path after the slot is created would equally lead to an\ninconsistency.\n\nIt seems to me that INIT cannot be relied on for a similar reason.\nThis state would be set for a new relation in\nLogicalRepSyncTableStart(), and the relation would still be in INIT\nstate when creating the slot via walrcv_create_slot() in a second\ntransaction started a bit later. However, if we have a failure after\nthe transaction that created the slot commits, then we'd have an INIT\nrelation in the catalog that got committed *and* a slot related to it\nlying around.\n\nThe only state that I can see is possible to rely on safely is READY,\nset in the same transaction as when the replication origin is dropped,\nbecause that's the point where we are sure that there are no origin\nand no tablesync slot: the READY state is visible in the catalog only\nif the transaction dropping the slot succeeds. Even with this one, I\nwas having the odd feeling that there's a code path where we could\nleak something, though I have not seen a problem with after a few\nhours of looking at this area.\n--\nMichael",
"msg_date": "Mon, 13 Nov 2023 17:22:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 1:52 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> It seems to me that INIT cannot be relied on for a similar reason.\n> This state would be set for a new relation in\n> LogicalRepSyncTableStart(), and the relation would still be in INIT\n> state when creating the slot via walrcv_create_slot() in a second\n> transaction started a bit later.\n>\n\nBefore creating a slot, we changed the state to DATASYNC.\n\n>\n> However, if we have a failure after\n> the transaction that created the slot commits, then we'd have an INIT\n> relation in the catalog that got committed *and* a slot related to it\n> lying around.\n>\n\nI don't think this can happen otherwise this could be a problem even\nwithout an upgrade after restart.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 13 Nov 2023 16:02:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Nov 10, 2023 at 7:26 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for the comments, the attached v13 version patch has the\n> changes for the same.\n>\n\n+\n+ ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname,\nsizeof(originname));\n+ originid = replorigin_by_name(originname, false);\n+ replorigin_advance(originid, sublsn, InvalidXLogRecPtr,\n+ false /* backward */ ,\n+ false /* WAL log */ );\n\nThis seems to update the origin state only in memory. Is it sufficient\nto use this here? Anyway, I think using this requires us to first\nacquire RowExclusiveLock on pg_replication_origin something the patch\nis doing for some other system table.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 13 Nov 2023 17:01:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 5:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Nov 10, 2023 at 7:26 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks for the comments, the attached v13 version patch has the\n> > changes for the same.\n> >\n>\n> +\n> + ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname,\n> sizeof(originname));\n> + originid = replorigin_by_name(originname, false);\n> + replorigin_advance(originid, sublsn, InvalidXLogRecPtr,\n> + false /* backward */ ,\n> + false /* WAL log */ );\n>\n> This seems to update the origin state only in memory. Is it sufficient\n> to use this here?\n>\n\nI think it is probably getting ensured by clean shutdown\n(shutdown_checkpoint) which happens on the new cluster after calling\nthis function. We can probably try to add a comment for it. BTW, we\nalso need to ensure that max_replication_slots is configured to a\nvalue higher than origins we are planning to create on the new\ncluster.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 13 Nov 2023 17:49:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 04:02:27PM +0530, Amit Kapila wrote:\n> On Mon, Nov 13, 2023 at 1:52 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> It seems to me that INIT cannot be relied on for a similar reason.\n>> This state would be set for a new relation in\n>> LogicalRepSyncTableStart(), and the relation would still be in INIT\n>> state when creating the slot via walrcv_create_slot() in a second\n>> transaction started a bit later.\n> \n> Before creating a slot, we changed the state to DATASYNC.\n\nStill, playing the devil's advocate, couldn't it be possible that a\nserver crashes just after the slot got created, then restarts with\nmax_logical_replication_workers=0? This would keep the catalog in a\nstate authorized by the upgrade, still leak a replication slot on the\npublication side if the node gets upgraded. READY in the catalog\nseems to be the only state where we are guaranteed that there is no\norigin and no slot remaining around.\n--\nMichael",
"msg_date": "Tue, 14 Nov 2023 09:22:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 13 Nov 2023 at 13:52, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Nov 10, 2023 at 07:26:18PM +0530, vignesh C wrote:\n> > I did testing in the same lines that you mentioned. Apart from that I\n> > also reviewed the design where it was using the old subscription id\n> > like in case of table sync workers, the tables sync worker will use\n> > replication using old subscription id. replication slot and\n> > replication origin. I also checked the impact of remote_lsn's.\n> > Few example: IN SUBREL_STATE_DATASYNC state we will try to drop the\n> > replication slot once worker is started but since the slot will be\n> > created with an old subscription, we will not be able to drop the\n> > replication slot and create a leak. Similarly the problem exists with\n> > SUBREL_STATE_FINISHEDCOPY where we will not be able to drop the origin\n> > created with an old sub id.\n>\n> Yeah, I was playing a bit with these states and I can confirm that\n> leaving around a DATASYNC relation in pg_subscription_rel during\n> the upgrade would leave a slot on the publisher of the old cluster,\n> which is no good. It would be an option to explore later what could\n> be improved, but I'm also looking forward at hearing from the users\n> first, as what you have here may be enough for the basic purposes we\n> are trying to cover. FINISHEDCOPY similarly, is not OK. I was able\n> to get an origin lying around after an upgrade.\n>\n> Anyway, after a closer lookup, I think that your conclusions regarding\n> the states that are allowed in the patch during the upgrade have some\n> flaws.\n>\n> First, are you sure that SYNCDONE is OK to keep? This catalog state\n> is set in process_syncing_tables_for_sync(), and just after the code\n> opens a transaction to clean up the tablesync slot, followed by a\n> second transaction to clean up the origin. 
However, imagine that\n> there is a failure in dropping the slot, the origin, or just in\n> transaction processing, cannot we finish in a state where the relation\n> is marked as SYNCDONE in the catalog but still has an origin and/or a\n> tablesync slot lying around? Assuming that SYNCDONE is an OK state\n> seems incorrect to me. I am pretty sure that injecting an error in a\n> code path after the slot is created would equally lead to an\n> inconsistency.\n\nThere are a couple of things happening here: a) In the first part we\ntake care of setting the subscription relation to SYNCDONE and\ndropping the replication slot at the publisher node; the relation\nstate is set to SYNCDONE only if dropping the replication slot\nsucceeds, and if dropping the slot fails the relation state will still\nbe FINISHEDCOPY. So a failure to drop the replication slot is not an\nissue, as the relation will be in FINISHEDCOPY state and this state is\nnot allowed for upgrade. When the state is SYNCDONE the tablesync slot\nwill not be present. b) In the second part we drop the replication\norigin; even if the drop of the replication origin fails for some\nreason, there will be no problem, as we do not copy the table sync\nreplication origin to the new cluster while upgrading. Since the table\nsync replication origin is not copied to the new cluster there will be\nno replication origin leaks.\nI feel these issues will not occur in the SYNCDONE state.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 14 Nov 2023 07:20:51 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 5:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Nov 13, 2023 at 04:02:27PM +0530, Amit Kapila wrote:\n> > On Mon, Nov 13, 2023 at 1:52 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> It seems to me that INIT cannot be relied on for a similar reason.\n> >> This state would be set for a new relation in\n> >> LogicalRepSyncTableStart(), and the relation would still be in INIT\n> >> state when creating the slot via walrcv_create_slot() in a second\n> >> transaction started a bit later.\n> >\n> > Before creating a slot, we changed the state to DATASYNC.\n>\n> Still, playing the devil's advocate, couldn't it be possible that a\n> server crashes just after the slot got created, then restarts with\n> max_logical_replication_workers=0? This would keep the catalog in a\n> state authorized by the upgrade,\n>\n\nThe state should be DATASYNC by that time and I don't think that is an\nauthorized state by upgrade.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 14 Nov 2023 09:24:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 13 Nov 2023 at 13:52, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch v13-0001\n>\n> ======\n> src/bin/pg_dump/pg_dump.c\n>\n> 1. getSubscriptionTables\n>\n> + int i_srsublsn;\n> + int i;\n> + int cur_rel = 0;\n> + int ntups;\n>\n> What is the difference between 'i' and 'cur_rel'?\n>\n> AFAICT these represent the same tuple index, in which case you might\n> as well throw away 'cur_rel' and only keep 'i'.\n\nModified\n\n> ~~~\n>\n> 2. getSubscriptionTables\n>\n> + for (i = 0; i < ntups; i++)\n> + {\n> + Oid cur_srsubid = atooid(PQgetvalue(res, i, i_srsubid));\n> + Oid relid = atooid(PQgetvalue(res, i, i_srrelid));\n> + TableInfo *tblinfo;\n>\n> Since this is all new code, using C99 style for loop variable\n> declaration of 'i' will be better.\n\nModified\n\n> ======\n> src/bin/pg_upgrade/check.c\n>\n> 3. check_for_subscription_state\n>\n> +check_for_subscription_state(ClusterInfo *cluster)\n> +{\n> + int dbnum;\n> + FILE *script = NULL;\n> + char output_path[MAXPGPATH];\n> + int ntup;\n> +\n> + /* Subscription relations state can be migrated since PG17. 
*/\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) < 1700)\n> + return;\n> +\n> + prep_status(\"Checking for subscription state\");\n> +\n> + snprintf(output_path, sizeof(output_path), \"%s/%s\",\n> + log_opts.basedir,\n> + \"subscription_state.txt\");\n>\n> I felt this filename ought to be more like\n> 'subscriptions_with_bad_state.txt' because the current name looks like\n> a normal logfile with nothing to indicate that it is only for the\n> states of the \"bad\" subscriptions.\n\nI have kept the file name intentionally short, as we noticed some\nbuildfarm failures caused by longer names when the upgrade of the\npublisher patch used a longer name.\n\n> ~~~\n>\n> 4.\n> + for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\n> + {\n>\n> Since this is all new code, using C99 style for loop variable\n> declaration of 'dbnum' will be better.\n\nModified\n\n> ~~~\n>\n> 5.\n> + * a) SUBREL_STATE_DATASYNC:A relation upgraded while in this state\n> + * would retain a replication slot, which could not be dropped by the\n> + * sync worker spawned after the upgrade because the subscription ID\n> + * tracked by the publisher does not match anymore.\n>\n> missing whitespace\n>\n> /SUBREL_STATE_DATASYNC:A relation/SUBREL_STATE_DATASYNC: A relation/\n\nModified\n\nAlso added a couple of missing test cases. The attached v14 version\npatch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Wed, 15 Nov 2023 23:33:28 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 13 Nov 2023 at 17:02, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Nov 10, 2023 at 7:26 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks for the comments, the attached v13 version patch has the\n> > changes for the same.\n> >\n>\n> +\n> + ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname,\n> sizeof(originname));\n> + originid = replorigin_by_name(originname, false);\n> + replorigin_advance(originid, sublsn, InvalidXLogRecPtr,\n> + false /* backward */ ,\n> + false /* WAL log */ );\n>\n> This seems to update the origin state only in memory. Is it sufficient\n> to use this here? Anyway, I think using this requires us to first\n> acquire RowExclusiveLock on pg_replication_origin something the patch\n> is doing for some other system table.\n\nAdded the lock.\n\nThe attached v14 patch at [1] has the changes for the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm20%3DBk_w9jDZXEqkJ3_NUAxOBswCn4jR-tmh-MqNpPZYw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 15 Nov 2023 23:35:22 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 13 Nov 2023 at 17:49, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 13, 2023 at 5:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Nov 10, 2023 at 7:26 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Thanks for the comments, the attached v13 version patch has the\n> > > changes for the same.\n> > >\n> >\n> > +\n> > + ReplicationOriginNameForLogicalRep(subid, InvalidOid, originname,\n> > sizeof(originname));\n> > + originid = replorigin_by_name(originname, false);\n> > + replorigin_advance(originid, sublsn, InvalidXLogRecPtr,\n> > + false /* backward */ ,\n> > + false /* WAL log */ );\n> >\n> > This seems to update the origin state only in memory. Is it sufficient\n> > to use this here?\n> >\n>\n> I think it is probably getting ensured by clean shutdown\n> (shutdown_checkpoint) which happens on the new cluster after calling\n> this function. We can probably try to add a comment for it. BTW, we\n> also need to ensure that max_replication_slots is configured to a\n> value higher than origins we are planning to create on the new\n> cluster.\n\nAdded comments and also added the check for max_replication_slots.\n\nThe attached v14 patch at [1] has the changes for the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm20%3DBk_w9jDZXEqkJ3_NUAxOBswCn4jR-tmh-MqNpPZYw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 15 Nov 2023 23:36:32 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Here are some review comments for patch v14-0001\n\n======\nsrc/backend/utils/adt/pg_upgrade_support.c\n\n1. binary_upgrade_replorigin_advance\n\n+ /* lock to prevent the replication origin from vanishing */\n+ LockRelationOid(ReplicationOriginRelationId, RowExclusiveLock);\n+ originid = replorigin_by_name(originname, false);\n\nUse uppercase for the lock comment.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n2. check_for_subscription_state\n\n> > + prep_status(\"Checking for subscription state\");\n> > +\n> > + snprintf(output_path, sizeof(output_path), \"%s/%s\",\n> > + log_opts.basedir,\n> > + \"subscription_state.txt\");\n> >\n> > I felt this filename ought to be more like\n> > 'subscriptions_with_bad_state.txt' because the current name looks like\n> > a normal logfile with nothing to indicate that it is only for the\n> > states of the \"bad\" subscriptions.\n>\n> I have kept the file name intentionally shorted as we noticed that\n> when the upgrade of the publisher patch used a longer name there were\n> some buildfarm failures because of longer names.\n\nOK, but how about some other short meaningful name like 'subs_invalid.txt'?\n\nI also thought \"state\" in the original name was misleading because\nthis file contains not only subscriptions with bad 'state' but also\nsubscriptions with missing 'origin'.\n\n~~~\n\n3. check_new_cluster_logical_replication_slots\n\n int nslots_on_old;\n int nslots_on_new;\n+ int nsubs_on_old = old_cluster.subscription_count;\n\nI felt it might be better to make both these quantities 'unsigned' to\nmake it more obvious that there are no special meanings for negative\nnumbers.\n\n~~~\n\n4. check_new_cluster_logical_replication_slots\n\nnslots_on_old = count_old_cluster_logical_slots();\n\n~\n\nIMO the 'nsubs_on_old' should be coded the same as above. AFAICT, this\nis the only code where you are interested in the number of\nsubscribers, and furthermore, it seems you only care about that count\nin the *old* cluster. 
This means the current implementation of\nget_subscription_count() seems more generic than it needs to be and\nthat results in more unnecessary patch code. (I will repeat this same\nreview comment in the other relevant places).\n\nSUGGESTION\nnslots_on_old = count_old_cluster_logical_slots();\nnsubs_on_old = count_old_cluster_subscriptions();\n\n~~~\n\n5.\n+ /*\n+ * Quick return if there are no logical slots and subscriptions to be\n+ * migrated.\n+ */\n+ if (nslots_on_old == 0 && nsubs_on_old == 0)\n return;\n\n/and subscriptions/and no subscriptions/\n\n~~~\n\n6.\n- if (nslots_on_old > max_replication_slots)\n+ if (nslots_on_old && nslots_on_old > max_replication_slots)\n pg_fatal(\"max_replication_slots (%d) must be greater than or equal\nto the number of \"\n \"logical replication slots (%d) on the old cluster\",\n max_replication_slots, nslots_on_old);\n\nNeither nslots_on_old nor max_replication_slots can be < 0, so I don't\nsee why the additional check is needed here.\nAFAICT \"if (nslots_on_old > max_replication_slots)\" achieves the same\nthing that you want.\n\n~~~\n\n7.\n+ if (nsubs_on_old && nsubs_on_old > max_replication_slots)\n+ pg_fatal(\"max_replication_slots (%d) must be greater than or equal\nto the number of \"\n+ \"subscriptions (%d) on the old cluster\",\n+ max_replication_slots, nsubs_on_old);\n\nNeither nsubs_on_old nor max_replication_slots can be < 0, so I don't\nsee why the additional check is needed here.\nAFAICT \"if (nsubs_on_old > max_replication_slots)\" achieves the same\nthing that you want.\n\n======\nsrc/bin/pg_upgrade/info.c\n\n8. 
get_db_rel_and_slot_infos\n\n+ if (cluster == &old_cluster)\n+ get_subscription_count(cluster);\n+\n\nI felt this is unnecessary because you only want to know the\nnsubs_on_old in one place and then only for the old cluster, so\ncalling this to set a generic attribute for the cluster is overkill.\n\n~~~\n\n9.\n+/*\n+ * Get the number of subscriptions in the old cluster.\n+ */\n+static void\n+get_subscription_count(ClusterInfo *cluster)\n+{\n+ PGconn *conn;\n+ PGresult *res;\n+\n+ if (GET_MAJOR_VERSION(cluster->major_version) < 1700)\n+ return;\n+\n+ conn = connectToServer(cluster, \"template1\");\n+ res = executeQueryOrDie(conn,\n+ \"SELECT oid FROM pg_catalog.pg_subscription\");\n+\n+ cluster->subscription_count = PQntuples(res);\n+\n+ PQclear(res);\n+ PQfinish(conn);\n+}\n\n9a.\nCurrently, this is needed only for the old_cluster (like the function\ncomment implies), so the parameter is not required.\n\nAlso, AFAICT this number is only needed in one place\n(check_new_cluster_logical_replication_slots) so IMO it would be\nbetter to make lots of changes to simplify this code:\n- change the function name to be like the other one. e.g.\ncount_old_cluster_subscriptions()\n- function to return unsigned\n\nSUGGESTION (something like this...)\n\nunsigned\ncount_old_cluster_subscriptions(void)\n{\n unsigned nsubs = 0;\n\n if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\n {\n PGconn *conn = connectToServer(&old_cluster, \"template1\");\n PGresult *res = executeQueryOrDie(conn,\n \"SELECT oid FROM pg_catalog.pg_subscription\");\n nsubs = PQntuples(res);\n PQclear(res);\n PQfinish(conn);\n }\n\n return nsubs;\n}\n\n~\n\n9b.\nThis function is returning 0 (aka not assigning\ncluster->subscription_count) for clusters before PG17. 
IIUC this is\neffectively the same behaviour as count_old_cluster_logical_slots()\nbut probably it needs to be mentioned more in this function comment\nwhy it is like this.\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.h\n\n10.\n const char *tablespace_suffix; /* directory specification */\n+ int subscription_count; /* number of subscriptions */\n } ClusterInfo;\n\nI felt this is not needed because you only need to know the\nnsubs_on_old in one place, so you can just call the counting function\nfrom there. Making this a generic attribute for the cluster seems\noverkill.\n\n======\nsrc/bin/pg_upgrade/t/004_subscription.pl\n\n11. TEST: Check that pg_upgrade is successful when the table is in init state.\n\n+$synced_query =\n+ \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'i'\";\n+$old_sub1->poll_query_until('postgres', $synced_query)\n+ or die \"Timed out while waiting for subscriber to synchronize data\";\n\nBut it doesn't get to \"synchronize data\", so should that message say\nmore like \"Timed out while waiting for the table to reach INIT state\"\n\n~\n\n12.\n+command_ok(\n+ [\n+ 'pg_upgrade', '--no-sync', '-d', $old_sub1->data_dir,\n+ '-D', $new_sub1->data_dir, '-b', $bindir,\n+ '-B', $bindir, '-s', $new_sub1->host,\n+ '-p', $old_sub1->port, '-P', $new_sub1->port,\n+ $mode,\n+ ],\n+ 'run of pg_upgrade --check for old instance when the subscription\ntables are in ready state'\n+);\n\nShould that message say \"init state\" instead of \"ready state\"?\n\n~~~\n\n13. 
TEST: when the subscription's replication origin does not exist.\n\n+$old_sub2->safe_psql('postgres',\n+ \"ALTER SUBSCRIPTION regress_sub2 disable\");\n\n/disable/DISABLE/\n\n~~~\n\n14.\n+my $subid = $old_sub2->safe_psql('postgres',\n+ \"SELECT oid FROM pg_subscription WHERE subname = 'regress_sub2'\");\n+my $reporigin = 'pg_'.qq($subid);\n+$old_sub2->safe_psql('postgres',\n+ \"SELECT pg_replication_origin_drop('$reporigin')\"\n+);\n\nMaybe this part needs a comment to say the reason why the origin does\nnot exist -- it's because you found and explicitly dropped it.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 16 Nov 2023 13:14:42 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThanks for updating the patch! Here are some comments.\r\nThey are mainly cosmetic because I have not read yours these days.\r\n\r\n01. binary_upgrade_add_sub_rel_state()\r\n\r\n```\r\n+ /* We must check these things before dereferencing the arguments */\r\n+ if (PG_ARGISNULL(0) || PG_ARGISNULL(1) || PG_ARGISNULL(2))\r\n+ elog(ERROR, \"null argument to binary_upgrade_add_sub_rel_state is not allowed\")\r\n```\r\n\r\nBut fourth argument can be NULL, right? I know you copied from other functions,\r\nbut they do not accept for all arguments. One approach is that pg_dump explicitly\r\nwrites InvalidXLogRecPtr as the fourth argument.\r\n\r\n02. binary_upgrade_add_sub_rel_state()\r\n\r\n```\r\n+ if (!OidIsValid(relid))\r\n+ ereport(ERROR,\r\n+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n+ errmsg(\"invalid relation identifier used: %u\", relid));\r\n+\r\n+ tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));\r\n+ if (!HeapTupleIsValid(tup))\r\n+ ereport(ERROR,\r\n+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n+ errmsg(\"relation %u does not exist\", relid))\r\n```\r\n\r\nI'm not sure they should be ereport(). Isn't it that they will be never occurred?\r\nOther upgrade funcs do not have ereport(), and I think it does not have to be\r\ntranslated.\r\n\r\n03. binary_upgrade_replorigin_advance()\r\n\r\nIIUC this function is very similar to pg_replication_origin_advance(). Can we\r\nextract a common part of them? I think pg_replication_origin_advance() will be\r\njust a wrapper, and binary_upgrade_replorigin_advance() will get the name of\r\norigin and pass to it.\r\n\r\n04. binary_upgrade_replorigin_advance()\r\n\r\nEven if you do not accept 03, some variable name can be follow the function.\r\n\r\n05. 
getSubscriptions()\r\n\r\n```\r\n+ appendPQExpBufferStr(query, \"o.remote_lsn AS suboriginremotelsn\\n\")\r\n```\r\n\r\nHmm, this value is taken anyway, but will be dumped only when the cluster is PG17+.\r\nShould we avoid getting the value like subrunasowner and subpasswordrequired?\r\nNot sure...\r\n\r\n06. dumpSubscriptionTable()\r\n\r\nCan we assert that remote version is PG17+?\r\n\r\n07. check_for_subscription_state()\r\n\r\nIIUC, this function is used only for old cluster. Should we follow\r\ncheck_old_cluster_for_valid_slots()?\r\n\r\n08. check_for_subscription_state()\r\n\r\n```\r\n+ fprintf(script, \"database:%s subscription:%s schema:%s relation:%s state:%s not in required state\\n\",\r\n+ active_db->db_name,\r\n+ PQgetvalue(res, i, 0),\r\n+ PQgetvalue(res, i, 1),\r\n+ PQgetvalue(res, i, 2),\r\n+ PQgetvalue(res, i, 3));\r\n```\r\n\r\nIIRC, format strings should be double-quoted.\r\n\r\n09. check_new_cluster_logical_replication_slots()\r\n\r\nChecks for replication origin were added in check_new_cluster_logical_replication_slots(),\r\nbut I felt it became a super function. Can we divide?\r\n\r\n10. check_new_cluster_logical_replication_slots()\r\n\r\nEven if you reject above, it should be renamed.\r\n\r\n11. pg_upgrade.h\r\n\r\n```\r\n+ int subscription_count; /* number of subscriptions */\r\n```\r\n\r\nBased on other struct, it should be \"nsubscriptions\".\r\n\r\n12. 004_subscription.pl\r\n\r\n```\r\n+use File::Path qw(rmtree);\r\n```\r\n\r\nI think this is not used.\r\n\r\n13. 004_subscription.pl\r\n\r\n```\r\n+my $bindir = $new_sub->config_data('--bindir');\r\n```\r\nFor extensibility, it might be better to separate for old/new bindir.\r\n\r\n14. 
004_subscription.pl\r\n\r\n```\r\n+my $synced_query =\r\n+ \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'r'\";\r\n+$old_sub->poll_query_until('postgres', $synced_query)\r\n+ or die \"Timed out while waiting for subscriber to synchronize data\";\r\n```\r\n\r\nActually, I'm not sure it is really needed. wait_for_subscription_sync() in line 163\r\nensures that sync are done? Are there any holes around here?\r\n\r\n15. 004_subscription.pl\r\n\r\n```\r\n+# Check the number of rows for each table on each server\r\n+my $result =\r\n+ $publisher->safe_psql('postgres', \"SELECT count(*) FROM tab_upgraded\");\r\n+is($result, qq(50), \"check initial tab_upgraded table data on publisher\");\r\n+$result =\r\n+ $publisher->safe_psql('postgres', \"SELECT count(*) FROM tab_not_upgraded\");\r\n+is($result, qq(1), \"check initial tab_upgraded table data on publisher\");\r\n+$result =\r\n+ $old_sub->safe_psql('postgres', \"SELECT count(*) FROM tab_upgraded\");\r\n+is($result, qq(50),\r\n+ \"check initial tab_upgraded table data on the old subscriber\");\r\n+$result =\r\n+ $old_sub->safe_psql('postgres', \"SELECT count(*) FROM tab_not_upgraded\");\r\n+is($result, qq(0),\r\n+ \"check initial tab_not_upgraded table data on the old subscriber\");\r\n```\r\n\r\nI'm not sure they are really needed. At that time pg_upgrade --check is called,\r\nthis won't change the state of clusters.\r\n\r\n16. 
pg_proc.dat\r\n\r\n```\r\n+{ oid => '8404', descr => 'for use by pg_upgrade (relation for pg_subscription_rel)',\r\n+ proname => 'binary_upgrade_add_sub_rel_state', proisstrict => 'f',\r\n+ provolatile => 'v', proparallel => 'u', prorettype => 'void',\r\n+ proargtypes => 'text oid char pg_lsn',\r\n+ prosrc => 'binary_upgrade_add_sub_rel_state' },\r\n+{ oid => '8405', descr => 'for use by pg_upgrade (remote_lsn for origin)',\r\n+ proname => 'binary_upgrade_replorigin_advance', proisstrict => 'f',\r\n+ provolatile => 'v', proparallel => 'u', prorettype => 'void',\r\n+ proargtypes => 'text pg_lsn',\r\n+ prosrc => 'binary_upgrade_replorigin_advance' },\r\n```\r\n\r\nBased on other function, descr just should be \"for use by pg_upgrade\".\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 16 Nov 2023 12:55:20 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, 10 Nov 2023 at 19:26, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, 9 Nov 2023 at 12:23, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n>\n> > Note: actually, this would be OK if we are able to keep the OIDs of\n> > the subscribers consistent across upgrades? I'm OK to not do nothing\n> > about that in this patch, to keep it simpler. Just asking in passing.\n>\n> I will analyze more on this and post the analysis in the subsequent mail.\n\nI analyzed further and felt that retaining subscription oid would be\ncleaner as subscription/subscription_rel/replication_origin/replication_origin_status\nall of these will be using the same oid as earlier and also probably\nhelp in supporting upgrade of subscription in more scenarios later.\nHere is a patch to handle the same.\n\nRegards,\nVignesh",
"msg_date": "Sun, 19 Nov 2023 06:52:09 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Sun, 19 Nov 2023 at 06:52, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, 10 Nov 2023 at 19:26, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, 9 Nov 2023 at 12:23, Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> >\n> > > Note: actually, this would be OK if we are able to keep the OIDs of\n> > > the subscribers consistent across upgrades? I'm OK to not do nothing\n> > > about that in this patch, to keep it simpler. Just asking in passing.\n> >\n> > I will analyze more on this and post the analysis in the subsequent mail.\n>\n> I analyzed further and felt that retaining subscription oid would be\n> cleaner as subscription/subscription_rel/replication_origin/replication_origin_status\n> all of these will be using the same oid as earlier and also probably\n> help in supporting upgrade of subscription in more scenarios later.\n> Here is a patch to handle the same.\n\nSorry I had attached the older patch, here is the correct updated one.\n\nRegards,\nVignesh",
"msg_date": "Sun, 19 Nov 2023 06:56:05 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, 16 Nov 2023 at 07:45, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch v14-0001\n>\n> ======\n> src/backend/utils/adt/pg_upgrade_support.c\n>\n> 1. binary_upgrade_replorigin_advance\n>\n> + /* lock to prevent the replication origin from vanishing */\n> + LockRelationOid(ReplicationOriginRelationId, RowExclusiveLock);\n> + originid = replorigin_by_name(originname, false);\n>\n> Use uppercase for the lock comment.\n\nModified\n\n> ======\n> src/bin/pg_upgrade/check.c\n>\n> 2. check_for_subscription_state\n>\n> > > + prep_status(\"Checking for subscription state\");\n> > > +\n> > > + snprintf(output_path, sizeof(output_path), \"%s/%s\",\n> > > + log_opts.basedir,\n> > > + \"subscription_state.txt\");\n> > >\n> > > I felt this filename ought to be more like\n> > > 'subscriptions_with_bad_state.txt' because the current name looks like\n> > > a normal logfile with nothing to indicate that it is only for the\n> > > states of the \"bad\" subscriptions.\n> >\n> > I have kept the file name intentionally shorted as we noticed that\n> > when the upgrade of the publisher patch used a longer name there were\n> > some buildfarm failures because of longer names.\n>\n> OK, but how about some other short meaningful name like 'subs_invalid.txt'?\n>\n> I also thought \"state\" in the original name was misleading because\n> this file contains not only subscriptions with bad 'state' but also\n> subscriptions with missing 'origin'.\n\nModified\n\n> ~~~\n>\n> 3. check_new_cluster_logical_replication_slots\n>\n> int nslots_on_old;\n> int nslots_on_new;\n> + int nsubs_on_old = old_cluster.subscription_count;\n>\n> I felt it might be better to make both these quantities 'unsigned' to\n> make it more obvious that there are no special meanings for negative\n> numbers.\n\nI have used int itself as all others also use int like in case of\nlogical slots. 
I tried making the changes, but the code was not\nconsistent, so used int like that is used for others.\n\n\n> ~~~\n>\n> 4. check_new_cluster_logical_replication_slots\n>\n> nslots_on_old = count_old_cluster_logical_slots();\n>\n> ~\n>\n> IMO the 'nsubs_on_old' should be coded the same as above. AFAICT, this\n> is the only code where you are interested in the number of\n> subscribers, and furthermore, it seems you only care about that count\n> in the *old* cluster. This means the current implementation of\n> get_subscription_count() seems more generic than it needs to be and\n> that results in more unnecessary patch code. (I will repeat this same\n> review comment in the other relevant places).\n>\n> SUGGESTION\n> nslots_on_old = count_old_cluster_logical_slots();\n> nsubs_on_old = count_old_cluster_subscriptions();\n\nModified to keep it similar to logical slot implementation.\n\n> ~~~\n>\n> 5.\n> + /*\n> + * Quick return if there are no logical slots and subscriptions to be\n> + * migrated.\n> + */\n> + if (nslots_on_old == 0 && nsubs_on_old == 0)\n> return;\n>\n> /and subscriptions/and no subscriptions/\n\nModified\n\n> ~~~\n>\n> 6.\n> - if (nslots_on_old > max_replication_slots)\n> + if (nslots_on_old && nslots_on_old > max_replication_slots)\n> pg_fatal(\"max_replication_slots (%d) must be greater than or equal\n> to the number of \"\n> \"logical replication slots (%d) on the old cluster\",\n> max_replication_slots, nslots_on_old);\n>\n> Neither nslots_on_old nor max_replication_slots can be < 0, so I don't\n> see why the additional check is needed here.\n> AFAICT \"if (nslots_on_old > max_replication_slots)\" acheives the same\n> thing that you want.\n\nThis part of code is changed now\n\n> ~~~\n>\n> 7.\n> + if (nsubs_on_old && nsubs_on_old > max_replication_slots)\n> + pg_fatal(\"max_replication_slots (%d) must be greater than or equal\n> to the number of \"\n> + \"subscriptions (%d) on the old cluster\",\n> + max_replication_slots, nsubs_on_old);\n>\n> 
Neither nsubs_on_old nor max_replication_slots can be < 0, so I don't\n> see why the additional check is needed here.\n> AFAICT \"if (nsubs_on_old > max_replication_slots)\" achieves the same\n> thing that you want.\n\nThis part of code is changed now\n\n> ======\n> src/bin/pg_upgrade/info.c\n>\n> 8. get_db_rel_and_slot_infos\n>\n> + if (cluster == &old_cluster)\n> + get_subscription_count(cluster);\n> +\n>\n> I felt this is unnecessary because you only want to know the\n> nsubs_on_old in one place and then only for the old cluster, so\n> calling this to set a generic attribute for the cluster is overkill.\n\nWe need to do this here because when we do the validation of new\ncluster the old cluster will not be running. I have made the flow\nsimilar to logical slots now.\n\n> ~~~\n>\n> 9.\n> +/*\n> + * Get the number of subscriptions in the old cluster.\n> + */\n> +static void\n> +get_subscription_count(ClusterInfo *cluster)\n> +{\n> + PGconn *conn;\n> + PGresult *res;\n> +\n> + if (GET_MAJOR_VERSION(cluster->major_version) < 1700)\n> + return;\n> +\n> + conn = connectToServer(cluster, \"template1\");\n> + res = executeQueryOrDie(conn,\n> + \"SELECT oid FROM pg_catalog.pg_subscription\");\n> +\n> + cluster->subscription_count = PQntuples(res);\n> +\n> + PQclear(res);\n> + PQfinish(conn);\n> +}\n>\n> 9a.\n> Currently, this is needed only for the old_cluster (like the function\n> comment implies), so the parameter is not required.\n>\n> Also, AFAICT this number is only needed in one place\n> (check_new_cluster_logical_replication_slots) so IMO it would be\n> better to make lots of changes to simplify this code:\n> - change the function name to be like the other one. 
e.g.\n> count_old_cluster_subscriptions()\n> - function to return unsigned\n>\n> SUGGESTION (something like this...)\n>\n> unsigned\n> count_old_cluster_subscriptions(void)\n> {\n> unsigned nsubs = 0;\n>\n> if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\n> {\n> PGconn *conn = connectToServer(&old_cluster, \"template1\");\n> PGresult *res = executeQueryOrDie(conn,\n> \"SELECT oid FROM pg_catalog.pg_subscription\");\n> nsubs = PQntuples(res);\n> PQclear(res);\n> PQfinish(conn);\n> }\n>\n> return nsubs;\n> }\n\n This function is not needed anymore, making the logic similar to logical slots.\n\n> ~\n>\n> 9b.\n> This function is returning 0 (aka not assigning\n> cluster->subscription_count) for clusters before PG17. IIUC this is\n> effectively the same behaviour as count_old_cluster_logical_slots()\n> but probably it needs to be mentioned more in this function comment\n> why it is like this.\n\nThis function is not needed anymore, making the logic similar to logical slots.\n\n> ======\n> src/bin/pg_upgrade/pg_upgrade.h\n>\n> 10.\n> const char *tablespace_suffix; /* directory specification */\n> + int subscription_count; /* number of subscriptions */\n> } ClusterInfo;\n>\n> I felt this is not needed because you only need to know the\n> nsubs_on_old in one place, so you can just call the counting function\n> from there. Making this a generic attribute for the cluster seems\n> overkill.\n\nWe need to do this here because when we do the validation of a new\ncluster the old cluster will not be running. I have made the flow\nsimilar to logical slots now.\n\n> ======\n> src/bin/pg_upgrade/t/004_subscription.pl\n>\n> 11. 
TEST: Check that pg_upgrade is successful when the table is in init state.\n>\n> +$synced_query =\n> + \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'i'\";\n> +$old_sub1->poll_query_until('postgres', $synced_query)\n> + or die \"Timed out while waiting for subscriber to synchronize data\";\n>\n> But it doesn't get to \"synchronize data\", so should that message say\n> more like \"Timed out while waiting for the table to reach INIT state\"\n\nModified\n\n> ~\n>\n> 12.\n> +command_ok(\n> + [\n> + 'pg_upgrade', '--no-sync', '-d', $old_sub1->data_dir,\n> + '-D', $new_sub1->data_dir, '-b', $bindir,\n> + '-B', $bindir, '-s', $new_sub1->host,\n> + '-p', $old_sub1->port, '-P', $new_sub1->port,\n> + $mode,\n> + ],\n> + 'run of pg_upgrade --check for old instance when the subscription\n> tables are in ready state'\n> +);\n>\n> Should that message say \"init state\" instead of \"ready state\"?\n\nModified\n\n> ~~~\n>\n> 13. TEST: when the subscription's replication origin does not exist.\n>\n> +$old_sub2->safe_psql('postgres',\n> + \"ALTER SUBSCRIPTION regress_sub2 disable\");\n>\n> /disable/DISABLE/\n\nModified\n\n> ~~~\n>\n> 14.\n> +my $subid = $old_sub2->safe_psql('postgres',\n> + \"SELECT oid FROM pg_subscription WHERE subname = 'regress_sub2'\");\n> +my $reporigin = 'pg_'.qq($subid);\n> +$old_sub2->safe_psql('postgres',\n> + \"SELECT pg_replication_origin_drop('$reporigin')\"\n> +);\n>\n> Maybe this part needs a comment to say the reason why the origin does\n> not exist -- it's because you found and explicitly dropped it.\n\nModified\n\nThe attached v15 version patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Sun, 19 Nov 2023 09:08:39 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, 16 Nov 2023 at 18:25, Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Vignesh,\n>\n> Thanks for updating the patch! Here are some comments.\n> They are mainly cosmetic because I have not read yours these days.\n>\n> 01. binary_upgrade_add_sub_rel_state()\n>\n> ```\n> + /* We must check these things before dereferencing the arguments */\n> + if (PG_ARGISNULL(0) || PG_ARGISNULL(1) || PG_ARGISNULL(2))\n> + elog(ERROR, \"null argument to binary_upgrade_add_sub_rel_state is not allowed\")\n> ```\n>\n> But fourth argument can be NULL, right? I know you copied from other functions,\n> but they do not accept for all arguments. One approach is that pg_dump explicitly\n> writes InvalidXLogRecPtr as the fourth argument.\n\nI did not find any problem with this approach, if the lsn is valid\nlike in ready state, we will send a valid lsn, if lsn is not valid\nlike in init state we will pass as NULL. This approach was also\nsuggested at [1].\n\n> 02. binary_upgrade_add_sub_rel_state()\n>\n> ```\n> + if (!OidIsValid(relid))\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"invalid relation identifier used: %u\", relid));\n> +\n> + tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));\n> + if (!HeapTupleIsValid(tup))\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"relation %u does not exist\", relid))\n> ```\n>\n> I'm not sure they should be ereport(). Isn't it that they will be never occurred?\n> Other upgrade funcs do not have ereport(), and I think it does not have to be\n> translated.\n\nI have removed the first check and retained the second one for a sanity check.\n\n> 03. binary_upgrade_replorigin_advance()\n>\n> IIUC this function is very similar to pg_replication_origin_advance(). Can we\n> extract a common part of them? 
I think pg_replication_origin_advance() will be\n> just a wrapper, and binary_upgrade_replorigin_advance() will get the name of\n> origin and pass to it.\n\nWe will be able to reduce hardly 4 lines, I felt the existing is better.\n\n> 04. binary_upgrade_replorigin_advance()\n>\n> Even if you do not accept 03, some variable name can be follow the function.\n\nModified\n\n> 05. getSubscriptions()\n>\n> ```\n> + appendPQExpBufferStr(query, \"o.remote_lsn AS suboriginremotelsn\\n\")\n> ```\n>\n> Hmm, this value is taken anyway, but will be dumed only when the cluster is PG17+.\n> Should we avoid getting the value like subrunasowner and subpasswordrequired?\n> Not sure...\n\nModified\n\n> 06. dumpSubscriptionTable()\n>\n> Can we assert that remote version is PG17+?\n\nModified\n\n> 07. check_for_subscription_state()\n>\n> IIUC, this function is used only for old cluster. Should we follow\n> check_old_cluster_for_valid_slots()?\n\nModified\n\n> 08. check_for_subscription_state()\n>\n> ```\n> + fprintf(script, \"database:%s subscription:%s schema:%s relation:%s state:%s not in required state\\n\",\n> + active_db->db_name,\n> + PQgetvalue(res, i, 0),\n> + PQgetvalue(res, i, 1),\n> + PQgetvalue(res, i, 2),\n> + PQgetvalue(res, i, 3));\n> ```\n>\n> IIRC, format strings should be double-quoted.\n\nModified\n\n> 09. check_new_cluster_logical_replication_slots()\n>\n> Checks for replication origin were added in check_new_cluster_logical_replication_slots(),\n> but I felt it became a super function. Can we devide?\n\nModified\n\n> 10. check_new_cluster_logical_replication_slots()\n>\n> Even if you reject above, it should be renamed.\n\nSince the previous is handled, this is not valid.\n\n> 11. pg_upgrade.h\n>\n> ```\n> + int subscription_count; /* number of subscriptions */\n> ```\n>\n> Based on other struct, it should be \"nsubscriptions\".\n\nModified\n\n> 12. 
004_subscription.pl\n>\n> ```\n> +use File::Path qw(rmtree);\n> ```\n>\n> I think this is not used.\n\nModified\n\n> 13. 004_subscription.pl\n>\n> ```\n> +my $bindir = $new_sub->config_data('--bindir');\n> ```\n> For extensibility, it might be better to separate for old/new bindir.\n\nModified\n\n> 14. 004_subscription.pl\n>\n> ```\n> +my $synced_query =\n> + \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'r'\";\n> +$old_sub->poll_query_until('postgres', $synced_query)\n> + or die \"Timed out while waiting for subscriber to synchronize data\";\n> ```\n>\n> Actually, I'm not sure it is really needed. wait_for_subscription_sync() in line 163\n> ensures that sync are done? Are there any holes around here?\n\nwait_for_subscription_sync will check if the table is in syncdone or in\nready state; since we are allowing the syncdone state, I have removed this\npart.\n\n> 15. 004_subscription.pl\n>\n> ```\n> +# Check the number of rows for each table on each server\n> +my $result =\n> + $publisher->safe_psql('postgres', \"SELECT count(*) FROM tab_upgraded\");\n> +is($result, qq(50), \"check initial tab_upgraded table data on publisher\");\n> +$result =\n> + $publisher->safe_psql('postgres', \"SELECT count(*) FROM tab_not_upgraded\");\n> +is($result, qq(1), \"check initial tab_upgraded table data on publisher\");\n> +$result =\n> + $old_sub->safe_psql('postgres', \"SELECT count(*) FROM tab_upgraded\");\n> +is($result, qq(50),\n> + \"check initial tab_upgraded table data on the old subscriber\");\n> +$result =\n> + $old_sub->safe_psql('postgres', \"SELECT count(*) FROM tab_not_upgraded\");\n> +is($result, qq(0),\n> + \"check initial tab_not_upgraded table data on the old subscriber\");\n> ```\n>\n> I'm not sure they are really needed. At that time pg_upgrade --check is called,\n> this won't change the state of clusters.\n\nIn the newer version, the check has been removed now. So these are required.\n\n> 16. 
pg_proc.dat\n\n> ```\n> +{ oid => '8404', descr => 'for use by pg_upgrade (relation for pg_subscription_rel)',\n> + proname => 'binary_upgrade_add_sub_rel_state', proisstrict => 'f',\n> + provolatile => 'v', proparallel => 'u', prorettype => 'void',\n> + proargtypes => 'text oid char pg_lsn',\n> + prosrc => 'binary_upgrade_add_sub_rel_state' },\n> +{ oid => '8405', descr => 'for use by pg_upgrade (remote_lsn for origin)',\n> + proname => 'binary_upgrade_replorigin_advance', proisstrict => 'f',\n> + provolatile => 'v', proparallel => 'u', prorettype => 'void',\n> + proargtypes => 'text pg_lsn',\n> + prosrc => 'binary_upgrade_replorigin_advance' },\n> ```\n>\n> Based on other function, descr just should be \"for use by pg_upgrade\".\n\nThis was improved based on one of the earlier comments at [1].\nThe v15 version attached at [2] has the changes for the comments.\n\n[1] - https://www.postgresql.org/message-id/ZQvbV2sdzBY6WEBl%40paquier.xyz\n[2] - https://www.postgresql.org/message-id/CALDaNm2ssmSFs4bjpfxbkfUbPE%3DxFSGqxFoip87kF259FG%3DX2g%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 19 Nov 2023 09:12:46 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Sun, Nov 19, 2023 at 06:56:05AM +0530, vignesh C wrote:\n> On Sun, 19 Nov 2023 at 06:52, vignesh C <vignesh21@gmail.com> wrote:\n>> On Fri, 10 Nov 2023 at 19:26, vignesh C <vignesh21@gmail.com> wrote:\n>>> I will analyze more on this and post the analysis in the subsequent mail.\n>>\n>> I analyzed further and felt that retaining subscription oid would be\n>> cleaner as subscription/subscription_rel/replication_origin/replication_origin_status\n>> all of these will be using the same oid as earlier and also probably\n>> help in supporting upgrade of subscription in more scenarios later.\n>> Here is a patch to handle the same.\n> \n> Sorry I had attached the older patch, here is the correct updated one.\n\nThanks for digging into that. I think that we should consider that\nonce the main patch is merged and stable in the tree for v17 to get a\nmore consistent experience. Shouldn't this include a test in the new\nTAP test for the upgrade of subscriptions? It should be as simple as\ncross-checking the OIDs of the subscriptions before and after the\nupgrade.\n--\nMichael",
"msg_date": "Mon, 20 Nov 2023 08:56:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 7:21 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, 13 Nov 2023 at 13:52, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > Anyway, after a closer lookup, I think that your conclusions regarding\n> > the states that are allowed in the patch during the upgrade have some\n> > flaws.\n> >\n> > First, are you sure that SYNCDONE is OK to keep? This catalog state\n> > is set in process_syncing_tables_for_sync(), and just after the code\n> > opens a transaction to clean up the tablesync slot, followed by a\n> > second transaction to clean up the origin. However, imagine that\n> > there is a failure in dropping the slot, the origin, or just in\n> > transaction processing, cannot we finish in a state where the relation\n> > is marked as SYNCDONE in the catalog but still has an origin and/or a\n> > tablesync slot lying around? Assuming that SYNCDONE is an OK state\n> > seems incorrect to me. I am pretty sure that injecting an error in a\n> > code path after the slot is created would equally lead to an\n> > inconsistency.\n>\n> There are couple of things happening here: a) In the first part we\n> take care of setting subscription relation to SYNCDONE and dropping\n> the replication slot at publisher node, only if drop replication slot\n> is successful the relation state will be set to SYNCDONE , if drop\n> replication slot fails the relation state will still be in\n> FINISHEDCOPY. So if there is a failure in the drop replication slot we\n> will not have an issue as the tablesync worker will be in\n> FINISHEDCOPYstate and this state is not allowed for upgrade. When the\n> state is in SYNCDONE the tablesync slot will not be present. b) In the\n> second part we drop the replication origin, even if there is a chance\n> that drop replication origin fails due to some reason, there will be\n> no problem as we do not copy the table sync replication origin to the\n> new cluster while upgrading. 
Since the table sync replication origin\n> is not copied to the new cluster there will be no replication origin\n> leaks.\n>\n\nAnd, this will work because in the SYNCDONE state, while removing the\norigin, we are okay with missing origins. It seems not copying the\norigin for tablesync workers in this state (SYNCDONE) relies on the\nfact that currently, we don't use those origins once the system\nreaches the SYNCDONE state, but I am not sure it is a good idea to have\nsuch a dependency, and an upgrade assuming such things doesn't seem\nideal to me. Personally, I think allowing an upgrade in 'i'\n(initialize) state or 'r' (ready) state seems safe because in those\nstates either slots/origins don't exist or are dropped. What do you\nthink?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 20 Nov 2023 09:49:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Here are some review comments for patch v15-0001\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n1. getSubscriptions\n\n+ if (fout->remoteVersion >= 170000)\n+ appendPQExpBufferStr(query, \"o.remote_lsn AS suboriginremotelsn\\n\");\n+ else\n+ appendPQExpBufferStr(query, \"NULL AS suboriginremotelsn\\n\");\n+\n\nThere should be preceding spaces in those append strings to match the\nother ones.\n\n~~~\n\n2. dumpSubscriptionTable\n\n+/*\n+ * dumpSubscriptionTable\n+ * Dump the definition of the given subscription table mapping. This will be\n+ * used only in binary-upgrade mode.\n+ */\n+static void\n+dumpSubscriptionTable(Archive *fout, const SubRelInfo *subrinfo)\n+{\n+ DumpOptions *dopt = fout->dopt;\n+ SubscriptionInfo *subinfo = subrinfo->subinfo;\n+ PQExpBuffer query;\n+ char *tag;\n+\n+ /* Do nothing in data-only dump */\n+ if (dopt->dataOnly)\n+ return;\n+\n+ Assert(fout->dopt->binary_upgrade || fout->remoteVersion >= 170000);\n\nThe function comment says this is only for binary-upgrade mode, so why\ndoes the Assert use || (OR)?\n\n======\nsrc/bin/pg_upgrade/check.c\n\n3. check_and_dump_old_cluster\n\n+ /*\n+ * Subscription dependencies can be migrated since PG17. See comments atop\n+ * get_old_cluster_subscription_count().\n+ */\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n+ check_old_cluster_subscription_state(&old_cluster);\n+\n\nShould this be combined with the other adjacent check so there is only\none \"if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\"\nneeded?\n\n~~~\n\n4. check_new_cluster\n\n check_new_cluster_logical_replication_slots();\n+\n+ check_new_cluster_subscription_configuration();\n\nWhen checking the old cluster, the subscription was checked before the\nslots, but here for the new cluster, the slots are checked before the\nsubscription. Maybe it makes no difference but it might be tidier to\ndo these old/new checks in the same order.\n\n~~~\n\n5. 
check_new_cluster_logical_replication_slots\n\n- /* Quick return if there are no logical slots to be migrated. */\n+ /* Quick return if there are no logical slots to be migrated */\n\nChange is not relevant for this patch.\n\n~~~\n\n6.\n\n+ res = executeQueryOrDie(conn, \"SELECT setting FROM pg_settings \"\n+ \"WHERE name IN ('max_replication_slots') \"\n+ \"ORDER BY name DESC;\");\n\nUsing IN and ORDER BY in this SQL seems unnecessary when you are only\nsearching for one name.\n\n======\nsrc/bin/pg_upgrade/info.c\n\n7. statics\n\n-\n+static void get_old_cluster_subscription_count(DbInfo *dbinfo);\n\nThis change also removes an existing blank line -- not sure if that\nwas intentional\n\n~~~\n\n8.\n@@ -365,7 +369,6 @@ get_template0_info(ClusterInfo *cluster)\n PQfinish(conn);\n }\n\n-\n /*\n * get_db_infos()\n *\n\nThis blank line change (before get_db_infos) should not be part of this patch.\n\n~~~\n\n9. get_old_cluster_subscription_count\n\nIt seems a slightly misleading function name because this is a PER-DB\ncount, not a cluster count.\n\n~~~\n\n\n10.\n+ /* Subscriptions can be migrated since PG17. */\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n+ return;\n\nIMO it is better to compare < 1700 instead of <= 1600. It keeps the\ncode more aligned with the comment.\n\n~~~\n\n11. count_old_cluster_subscriptions\n\n+/*\n+ * count_old_cluster_subscriptions()\n+ *\n+ * Returns the number of subscription for all databases.\n+ *\n+ * Note: this function always returns 0 if the old_cluster is PG16 and prior\n+ * because we gather subscriptions only for cluster versions greater than or\n+ * equal to PG17. 
See get_old_cluster_subscription_count().\n+ */\n+int\n+count_old_cluster_subscriptions(void)\n+{\n+ int nsubs = 0;\n+\n+ for (int dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n+ nsubs += old_cluster.dbarr.dbs[dbnum].nsubs;\n+\n+ return nsubs;\n+}\n\n11a.\n/subscription/subscriptions/\n\n~\n\n11b.\nThe code is now consistent with the slots code which looks good. OTOH\nI thought that 'pg_catalog.pg_subscription' is shared across all\ndatabases of the cluster, so isn't this code inefficient to be\nquerying again and again for every database (if there are many of\nthem) instead of just querying 1 time only for the whole cluster?\n\n======\nsrc/bin/pg_upgrade/t/004_subscription.pl\n\n12.\nIt is difficult to keep track of all the tables (upgraded and not\nupgraded) at each step of these tests. Maybe the comments can be more\nexplicit along the way. e.g\n\nBEFORE\n+# Add tab_not_upgraded1 to the publication\n\nSUGGESTION\n+# Add tab_not_upgraded1 to the publication. Now publication has <blah blah>\n\nand\n\nBEFORE\n+# Subscription relations should be preserved\n\nSUGGESTION\n+# Subscription relations should be preserved. The upgraded won't know\nabout 'tab_not_upgraded1' because <blah blah>\n\netc.\n\n~~~\n\n13.\n+$result =\n+ $new_sub->safe_psql('postgres', \"SELECT count(*) FROM tab_not_upgraded1\");\n+is($result, qq(0),\n+ \"no change in table tab_not_upgraded1 afer enable subscription which\nis not part of the publication\"\n\n/afer/after/\n\n~~~\n\n14.\n+# ------------------------------------------------------\n+# Check that pg_upgrade refuses to run a) if there's a subscription with tables\n+# in a state different than 'r' (ready), 'i' (init) and 's' (synchronized)\n+# and/or b) if the subscription does not have a replication origin.\n+# ------------------------------------------------------\n\n14a,\n/does not have a/has no/\n\n~\n\n14b.\nMaybe put a) and b) on newlines to be more readable.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 20 Nov 2023 16:13:32 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 09:49:41AM +0530, Amit Kapila wrote:\n> On Tue, Nov 14, 2023 at 7:21 AM vignesh C <vignesh21@gmail.com> wrote:\n>> There are couple of things happening here: a) In the first part we\n>> take care of setting subscription relation to SYNCDONE and dropping\n>> the replication slot at publisher node, only if drop replication slot\n>> is successful the relation state will be set to SYNCDONE , if drop\n>> replication slot fails the relation state will still be in\n>> FINISHEDCOPY. So if there is a failure in the drop replication slot we\n>> will not have an issue as the tablesync worker will be in\n>> FINISHEDCOPYstate and this state is not allowed for upgrade. When the\n>> state is in SYNCDONE the tablesync slot will not be present. b) In the\n>> second part we drop the replication origin, even if there is a chance\n>> that drop replication origin fails due to some reason, there will be\n>> no problem as we do not copy the table sync replication origin to the\n>> new cluster while upgrading. Since the table sync replication origin\n>> is not copied to the new cluster there will be no replication origin\n>> leaks.\n> \n> And, this will work because in the SYNCDONE state, while removing the\n> origin, we are okay with missing origins. It seems not copying the\n> origin for tablesync workers in this state (SYNCDONE) relies on the\n> fact that currently, we don't use those origins once the system\n> reaches the SYNCDONE state but I am not sure it is a good idea to have\n> such a dependency and that upgrade assuming such things doesn't seems\n> ideal to me.\n\nHmm, yeah, you mean the replorigin_drop_by_name() calls in\ntablesync.c. 
I did not pay much attention about that in the code, but\nyour point sounds sensible.\n\n(I have not been able to complete an analysis of the risks behind 's'\nto convince myself that it is entirely safe, but leaks are scary as\nhell if this gets automated across a large fleet of nodes..)\n\n> Personally, I think allowing an upgrade in 'i'\n> (initialize) state or 'r' (ready) state seems safe because in those\n> states either slots/origins don't exist or are dropped. What do you\n> think?\n\nI share a similar impression about 's'. From a design point of view,\nmaking the conditions to reach harder in the first implementation\nmakes the user experience stricter, but that's safer regarding leaks\nand it is still possible to relax these choices in the future\ndepending on the improvement pieces we are able to figure out.\n--\nMichael",
"msg_date": "Tue, 21 Nov 2023 10:41:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 20 Nov 2023 at 10:44, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch v15-0001\n>\n> ======\n> src/bin/pg_dump/pg_dump.c\n>\n> 1. getSubscriptions\n>\n> + if (fout->remoteVersion >= 170000)\n> + appendPQExpBufferStr(query, \"o.remote_lsn AS suboriginremotelsn\\n\");\n> + else\n> + appendPQExpBufferStr(query, \"NULL AS suboriginremotelsn\\n\");\n> +\n>\n> There should be preceding spaces in those append strings to match the\n> other ones.\n\nModified\n\n> ~~~\n>\n> 2. dumpSubscriptionTable\n>\n> +/*\n> + * dumpSubscriptionTable\n> + * Dump the definition of the given subscription table mapping. This will be\n> + * used only in binary-upgrade mode.\n> + */\n> +static void\n> +dumpSubscriptionTable(Archive *fout, const SubRelInfo *subrinfo)\n> +{\n> + DumpOptions *dopt = fout->dopt;\n> + SubscriptionInfo *subinfo = subrinfo->subinfo;\n> + PQExpBuffer query;\n> + char *tag;\n> +\n> + /* Do nothing in data-only dump */\n> + if (dopt->dataOnly)\n> + return;\n> +\n> + Assert(fout->dopt->binary_upgrade || fout->remoteVersion >= 170000);\n>\n> The function comment says this is only for binary-upgrade mode, so why\n> does the Assert use || (OR)?\n\nAdded comments\n\n> ======\n> src/bin/pg_upgrade/check.c\n>\n> 3. check_and_dump_old_cluster\n>\n> + /*\n> + * Subscription dependencies can be migrated since PG17. See comments atop\n> + * get_old_cluster_subscription_count().\n> + */\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n> + check_old_cluster_subscription_state(&old_cluster);\n> +\n>\n> Should this be combined with the other adjacent check so there is only\n> one \"if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\"\n> needed?\n\nModified\n\n> ~~~\n>\n> 4. 
check_new_cluster\n>\n> check_new_cluster_logical_replication_slots();\n> +\n> + check_new_cluster_subscription_configuration();\n>\n> When checking the old cluster, the subscription was checked before the\n> slots, but here for the new cluster, the slots are checked before the\n> subscription. Maybe it makes no difference but it might be tidier to\n> do these old/new checks in the same order.\n\nModified\n\n> ~~~\n>\n> 5. check_new_cluster_logical_replication_slots\n>\n> - /* Quick return if there are no logical slots to be migrated. */\n> + /* Quick return if there are no logical slots to be migrated */\n>\n> Change is not relevant for this patch.\n\nRemoved it\n\n> ~~~\n>\n> 6.\n>\n> + res = executeQueryOrDie(conn, \"SELECT setting FROM pg_settings \"\n> + \"WHERE name IN ('max_replication_slots') \"\n> + \"ORDER BY name DESC;\");\n>\n> Using IN and ORDER BY in this SQL seems unnecessary when you are only\n> searching for one name.\n\nModified\n\n> ======\n> src/bin/pg_upgrade/info.c\n>\n> 7. statics\n>\n> -\n> +static void get_old_cluster_subscription_count(DbInfo *dbinfo);\n>\n> This change also removes an existing blank line -- not sure if that\n> was intentional\n\nModified\n\n> ~~~\n>\n> 8.\n> @@ -365,7 +369,6 @@ get_template0_info(ClusterInfo *cluster)\n> PQfinish(conn);\n> }\n>\n> -\n> /*\n> * get_db_infos()\n> *\n>\n> This blank line change (before get_db_infos) should not be part of this patch.\n\nModified\n\n> ~~~\n>\n> 9. get_old_cluster_subscription_count\n>\n> It seems a slightly misleading function name because this is a PER-DB\n> count, not a cluster count.\n\nModified\n\n> ~~~\n>\n>\n> 10.\n> + /* Subscriptions can be migrated since PG17. */\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n> + return;\n>\n> IMO it is better to compare < 1700 instead of <= 1600. It keeps the\n> code more aligned with the comment.\n\nModified\n\n> ~~~\n>\n> 11. 
count_old_cluster_subscriptions\n>\n> +/*\n> + * count_old_cluster_subscriptions()\n> + *\n> + * Returns the number of subscription for all databases.\n> + *\n> + * Note: this function always returns 0 if the old_cluster is PG16 and prior\n> + * because we gather subscriptions only for cluster versions greater than or\n> + * equal to PG17. See get_old_cluster_subscription_count().\n> + */\n> +int\n> +count_old_cluster_subscriptions(void)\n> +{\n> + int nsubs = 0;\n> +\n> + for (int dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n> + nsubs += old_cluster.dbarr.dbs[dbnum].nsubs;\n> +\n> + return nsubs;\n> +}\n>\n> 11a.\n> /subscription/subscriptions/\n\nModified\n\n> ~\n>\n> 11b.\n> The code is now consistent with the slots code which looks good. OTOH\n> I thought that 'pg_catalog.pg_subscription' is shared across all\n> databases of the cluster, so isn't this code inefficient to be\n> querying again and again for every database (if there are many of\n> them) instead of just querying 1 time only for the whole cluster?\n\nMy earlier version was like that, changed it to keep the code\nconsistent to logical replication slots.\n\n> ======\n> src/bin/pg_upgrade/t/004_subscription.pl\n>\n> 12.\n> It is difficult to keep track of all the tables (upgraded and not\n> upgraded) at each step of these tests. Maybe the comments can be more\n> explicit along the way. e.g\n>\n> BEFORE\n> +# Add tab_not_upgraded1 to the publication\n>\n> SUGGESTION\n> +# Add tab_not_upgraded1 to the publication. Now publication has <blah blah>\n>\n> and\n>\n> BEFORE\n> +# Subscription relations should be preserved\n>\n> SUGGESTION\n> +# Subscription relations should be preserved. 
The upgraded won't know\n> about 'tab_not_upgraded1' because <blah blah>\n>\n> etc.\n\nModified\n\n> ~~~\n>\n> 13.\n> +$result =\n> + $new_sub->safe_psql('postgres', \"SELECT count(*) FROM tab_not_upgraded1\");\n> +is($result, qq(0),\n> + \"no change in table tab_not_upgraded1 afer enable subscription which\n> is not part of the publication\"\n>\n> /afer/after/\n\nModified\n\n> ~~~\n>\n> 14.\n> +# ------------------------------------------------------\n> +# Check that pg_upgrade refuses to run a) if there's a subscription with tables\n> +# in a state different than 'r' (ready), 'i' (init) and 's' (synchronized)\n> +# and/or b) if the subscription does not have a replication origin.\n> +# ------------------------------------------------------\n>\n> 14a,\n> /does not have a/has no/\n\nModified\n\n> ~\n>\n> 14b.\n> Maybe put a) and b) on newlines to be more readable.\n\nModified\n\nThe attached v16 version patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Tue, 21 Nov 2023 23:03:36 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Thanks for addressing my past review comments.\n\nHere are some more review comments for patch v16-0001\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n1.\n+ <para>\n+ Create all the new tables that were created in the publication during\n+ upgrade and refresh the publication by executing\n+ <link linkend=\"sql-altersubscription\"><command>ALTER\nSUBSCRIPTION ... REFRESH PUBLICATION</command></link>.\n+ </para>\n\n\"Create all ... that were created\" sounds a bit strange.\n\nSUGGESTION (maybe like this or similar?)\nCreate equivalent subscriber tables for anything that became newly\npart of the publication during the upgrade and....\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n2. getSubscriptionTables\n\n+/*\n+ * getSubscriptionTables\n+ * Get information about subscription membership for dumpable tables. This\n+ * will be used only in binary-upgrade mode.\n+ */\n+void\n+getSubscriptionTables(Archive *fout)\n+{\n+ DumpOptions *dopt = fout->dopt;\n+ SubscriptionInfo *subinfo = NULL;\n+ SubRelInfo *subrinfo;\n+ PQExpBuffer query;\n+ PGresult *res;\n+ int i_srsubid;\n+ int i_srrelid;\n+ int i_srsubstate;\n+ int i_srsublsn;\n+ int ntups;\n+ Oid last_srsubid = InvalidOid;\n+\n+ if (dopt->no_subscriptions || !dopt->binary_upgrade ||\n+ fout->remoteVersion < 170000)\n+ return;\n\nThis function comment says \"used only in binary-upgrade mode.\" and the\nAssert says the same. But, is this compatible with the other function\ndumpSubscriptionTable() where it says \"used only in binary-upgrade\nmode and for PG17 or later versions\"?\n\n======\nsrc/bin/pg_upgrade/check.c\n\n3. check_new_cluster_subscription_configuration\n\n+static void\n+check_new_cluster_subscription_configuration(void)\n+{\n+ PGresult *res;\n+ PGconn *conn;\n+ int nsubs_on_old;\n+ int max_replication_slots;\n+\n+ /* Logical slots can be migrated since PG17. 
*/\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n+ return;\n\nIMO it is better to say < 1700 in this check, instead of <= 1600.\n\n~~~\n\n4.\n+ /* Quick return if there are no subscriptions to be migrated */\n+ if (nsubs_on_old == 0)\n+ return;\n\nMissing period in comment.\n\n~~~\n\n5.\n+/*\n+ * check_old_cluster_subscription_state()\n+ *\n+ * Verify that each of the subscriptions has all their corresponding tables in\n+ * i (initialize), r (ready) or s (synchronized) state.\n+ */\n+static void\n+check_old_cluster_subscription_state(ClusterInfo *cluster)\n\nThis function is only for the old cluster (hint: the function name) so\nthere is no need to pass the 'cluster' parameter here. Just directly\nuse old_cluster in the function body.\n\n======\nsrc/bin/pg_upgrade/t/004_subscription.pl\n\n6.\n+# Add tab_not_upgraded1 to the publication. Now publication has tab_upgraded1\n+# and tab_upgraded2 tables.\n+$publisher->safe_psql('postgres',\n+ \"ALTER PUBLICATION regress_pub ADD TABLE tab_upgraded2\");\n\nTypo in comment. You added tab_not_upgraded2, not tab_not_upgraded1\n\n~~\n\n7.\n+# Subscription relations should be preserved. The upgraded won't know\n+# about 'tab_not_upgraded1' because the subscription is not yet refreshed.\n\nTypo or missing word in comment?\n\n\"The upgraded\" ??\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 22 Nov 2023 12:17:30 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, 22 Nov 2023 at 06:48, Peter Smith <smithpb2250@gmail.com> wrote:\n> ======\n> doc/src/sgml/ref/pgupgrade.sgml\n>\n> 1.\n> + <para>\n> + Create all the new tables that were created in the publication during\n> + upgrade and refresh the publication by executing\n> + <link linkend=\"sql-altersubscription\"><command>ALTER\n> SUBSCRIPTION ... REFRESH PUBLICATION</command></link>.\n> + </para>\n>\n> \"Create all ... that were created\" sounds a bit strange.\n>\n> SUGGESTION (maybe like this or similar?)\n> Create equivalent subscriber tables for anything that became newly\n> part of the publication during the upgrade and....\n\nModified\n\n> ======\n> src/bin/pg_dump/pg_dump.c\n>\n> 2. getSubscriptionTables\n>\n> +/*\n> + * getSubscriptionTables\n> + * Get information about subscription membership for dumpable tables. This\n> + * will be used only in binary-upgrade mode.\n> + */\n> +void\n> +getSubscriptionTables(Archive *fout)\n> +{\n> + DumpOptions *dopt = fout->dopt;\n> + SubscriptionInfo *subinfo = NULL;\n> + SubRelInfo *subrinfo;\n> + PQExpBuffer query;\n> + PGresult *res;\n> + int i_srsubid;\n> + int i_srrelid;\n> + int i_srsubstate;\n> + int i_srsublsn;\n> + int ntups;\n> + Oid last_srsubid = InvalidOid;\n> +\n> + if (dopt->no_subscriptions || !dopt->binary_upgrade ||\n> + fout->remoteVersion < 170000)\n> + return;\n>\n> This function comment says \"used only in binary-upgrade mode.\" and the\n> Assert says the same. But, is this compatible with the other function\n> dumpSubscriptionTable() where it says \"used only in binary-upgrade\n> mode and for PG17 or later versions\"?\n>\nModified\n\n> ======\n> src/bin/pg_upgrade/check.c\n>\n> 3. check_new_cluster_subscription_configuration\n>\n> +static void\n> +check_new_cluster_subscription_configuration(void)\n> +{\n> + PGresult *res;\n> + PGconn *conn;\n> + int nsubs_on_old;\n> + int max_replication_slots;\n> +\n> + /* Logical slots can be migrated since PG17. 
*/\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n> + return;\n>\n> IMO it is better to say < 1700 in this check, instead of <= 1600.\n>\nModified\n\n> ~~~\n>\n> 4.\n> + /* Quick return if there are no subscriptions to be migrated */\n> + if (nsubs_on_old == 0)\n> + return;\n>\n> Missing period in comment.\n>\nModified\n\n> ~~~\n>\n> 5.\n> +/*\n> + * check_old_cluster_subscription_state()\n> + *\n> + * Verify that each of the subscriptions has all their corresponding tables in\n> + * i (initialize), r (ready) or s (synchronized) state.\n> + */\n> +static void\n> +check_old_cluster_subscription_state(ClusterInfo *cluster)\n>\n> This function is only for the old cluster (hint: the function name) so\n> there is no need to pass the 'cluster' parameter here. Just directly\n> use old_cluster in the function body.\n>\nModified\n\n> ======\n> src/bin/pg_upgrade/t/004_subscription.pl\n>\n> 6.\n> +# Add tab_not_upgraded1 to the publication. Now publication has tab_upgraded1\n> +# and tab_upgraded2 tables.\n> +$publisher->safe_psql('postgres',\n> + \"ALTER PUBLICATION regress_pub ADD TABLE tab_upgraded2\");\n>\n> Typo in comment. You added tab_not_upgraded2, not tab_not_upgraded1\n>\nModified\n\n> ~~\n>\n> 7.\n> +# Subscription relations should be preserved. The upgraded won't know\n> +# about 'tab_not_upgraded1' because the subscription is not yet refreshed.\n>\n> Typo or missing word in comment?\n>\n> \"The upgraded\" ??\n>\nModified\n\nAttached the v17 patch which have the same changes\n\nThanks,\nShlok Kumar Kyal",
"msg_date": "Wed, 22 Nov 2023 17:25:16 +0530",
"msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Here are some review comments for patch v17-0001\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n1. getSubscriptionTables\n\n+/*\n+ * getSubscriptionTables\n+ * Get information about subscription membership for dumpable tables. This\n+ * will be used only in binary-upgrade mode and for PG17 or later versions.\n+ */\n+void\n+getSubscriptionTables(Archive *fout)\n+{\n+ DumpOptions *dopt = fout->dopt;\n+ SubscriptionInfo *subinfo = NULL;\n+ SubRelInfo *subrinfo;\n+ PQExpBuffer query;\n+ PGresult *res;\n+ int i_srsubid;\n+ int i_srrelid;\n+ int i_srsubstate;\n+ int i_srsublsn;\n+ int ntups;\n+ Oid last_srsubid = InvalidOid;\n+\n+ if (dopt->no_subscriptions || !dopt->binary_upgrade ||\n+ fout->remoteVersion < 170000)\n+ return;\n\nI still felt that the function comment (\"used only in binary-upgrade\nmode and for PG17 or later\") was misleading. IMO that sounds like it\nwould be OK for PG17 regardless of the binary mode, but the code says\notherwise.\n\nAssuming the code is correct, perhaps the comment should say:\n\"... used only in binary-upgrade mode for PG17 or later versions.\"\n\n~~~\n\n2. dumpSubscriptionTable\n\n+/*\n+ * dumpSubscriptionTable\n+ * Dump the definition of the given subscription table mapping. This will be\n+ * used only in binary-upgrade mode and for PG17 or later versions.\n+ */\n+static void\n+dumpSubscriptionTable(Archive *fout, const SubRelInfo *subrinfo)\n\n(this is the same as the previous review comment #1)\n\nAssuming the code is correct, perhaps the comment should say:\n\"... 
used only in binary-upgrade mode for PG17 or later versions.\"\n\n======\nsrc/bin/pg_upgrade/check.c\n\n3.\n+static void\n+check_old_cluster_subscription_state()\n+{\n+ FILE *script = NULL;\n+ char output_path[MAXPGPATH];\n+ int ntup;\n+ ClusterInfo *cluster = &old_cluster;\n+\n+ prep_status(\"Checking for subscription state\");\n+\n+ snprintf(output_path, sizeof(output_path), \"%s/%s\",\n+ log_opts.basedir,\n+ \"subs_invalid.txt\");\n+ for (int dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\n+ {\n+ PGresult *res;\n+ DbInfo *active_db = &cluster->dbarr.dbs[dbnum];\n+ PGconn *conn = connectToServer(cluster, active_db->db_name);\n\nThere seems no need for an extra variable ('cluster') here when you\ncan just reference 'old_cluster' directly in the code, the same as\nother functions in this file do all the time.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 23 Nov 2023 11:25:44 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, 21 Nov 2023 at 07:11, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Nov 20, 2023 at 09:49:41AM +0530, Amit Kapila wrote:\n> > On Tue, Nov 14, 2023 at 7:21 AM vignesh C <vignesh21@gmail.com> wrote:\n> >> There are couple of things happening here: a) In the first part we\n> >> take care of setting subscription relation to SYNCDONE and dropping\n> >> the replication slot at publisher node, only if drop replication slot\n> >> is successful the relation state will be set to SYNCDONE , if drop\n> >> replication slot fails the relation state will still be in\n> >> FINISHEDCOPY. So if there is a failure in the drop replication slot we\n> >> will not have an issue as the tablesync worker will be in\n> >> FINISHEDCOPYstate and this state is not allowed for upgrade. When the\n> >> state is in SYNCDONE the tablesync slot will not be present. b) In the\n> >> second part we drop the replication origin, even if there is a chance\n> >> that drop replication origin fails due to some reason, there will be\n> >> no problem as we do not copy the table sync replication origin to the\n> >> new cluster while upgrading. Since the table sync replication origin\n> >> is not copied to the new cluster there will be no replication origin\n> >> leaks.\n> >\n> > And, this will work because in the SYNCDONE state, while removing the\n> > origin, we are okay with missing origins. It seems not copying the\n> > origin for tablesync workers in this state (SYNCDONE) relies on the\n> > fact that currently, we don't use those origins once the system\n> > reaches the SYNCDONE state but I am not sure it is a good idea to have\n> > such a dependency and that upgrade assuming such things doesn't seems\n> > ideal to me.\n>\n> Hmm, yeah, you mean the replorigin_drop_by_name() calls in\n> tablesync.c. 
I did not pay much attention about that in the code, but\n> your point sounds sensible.\n>\n> (I have not been able to complete an analysis of the risks behind 's'\n> to convince myself that it is entirely safe, but leaks are scary as\n> hell if this gets automated across a large fleet of nodes..)\n>\n> > Personally, I think allowing an upgrade in 'i'\n> > (initialize) state or 'r' (ready) state seems safe because in those\n> > states either slots/origins don't exist or are dropped. What do you\n> > think?\n>\n> I share a similar impression about 's'. From a design point of view,\n> making the conditions to reach harder in the first implementation\n> makes the user experience stricter, but that's safer regarding leaks\n> and it is still possible to relax these choices in the future\n> depending on the improvement pieces we are able to figure out.\n\nBased on the suggestions just to have safe init and ready state, I\nhave made the changes to handle the same in v18 version patch\nattached.\n\nRegards,\nVignesh",
"msg_date": "Thu, 23 Nov 2023 14:13:45 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, 23 Nov 2023 at 05:56, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch v17-0001\n>\n> ======\n> src/bin/pg_dump/pg_dump.c\n>\n> 1. getSubscriptionTables\n>\n> +/*\n> + * getSubscriptionTables\n> + * Get information about subscription membership for dumpable tables. This\n> + * will be used only in binary-upgrade mode and for PG17 or later versions.\n> + */\n> +void\n> +getSubscriptionTables(Archive *fout)\n> +{\n> + DumpOptions *dopt = fout->dopt;\n> + SubscriptionInfo *subinfo = NULL;\n> + SubRelInfo *subrinfo;\n> + PQExpBuffer query;\n> + PGresult *res;\n> + int i_srsubid;\n> + int i_srrelid;\n> + int i_srsubstate;\n> + int i_srsublsn;\n> + int ntups;\n> + Oid last_srsubid = InvalidOid;\n> +\n> + if (dopt->no_subscriptions || !dopt->binary_upgrade ||\n> + fout->remoteVersion < 170000)\n> + return;\n>\n> I still felt that the function comment (\"used only in binary-upgrade\n> mode and for PG17 or later\") was misleading. IMO that sounds like it\n> would be OK for PG17 regardless of the binary mode, but the code says\n> otherwise.\n>\n> Assuming the code is correct, perhaps the comment should say:\n> \"... used only in binary-upgrade mode for PG17 or later versions.\"\n\nModified\n\n> ~~~\n>\n> 2. dumpSubscriptionTable\n>\n> +/*\n> + * dumpSubscriptionTable\n> + * Dump the definition of the given subscription table mapping. This will be\n> + * used only in binary-upgrade mode and for PG17 or later versions.\n> + */\n> +static void\n> +dumpSubscriptionTable(Archive *fout, const SubRelInfo *subrinfo)\n>\n> (this is the same as the previous review comment #1)\n>\n> Assuming the code is correct, perhaps the comment should say:\n> \"... 
used only in binary-upgrade mode for PG17 or later versions.\"\n\nModified\n\n> ======\n> src/bin/pg_upgrade/check.c\n>\n> 3.\n> +static void\n> +check_old_cluster_subscription_state()\n> +{\n> + FILE *script = NULL;\n> + char output_path[MAXPGPATH];\n> + int ntup;\n> + ClusterInfo *cluster = &old_cluster;\n> +\n> + prep_status(\"Checking for subscription state\");\n> +\n> + snprintf(output_path, sizeof(output_path), \"%s/%s\",\n> + log_opts.basedir,\n> + \"subs_invalid.txt\");\n> + for (int dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\n> + {\n> + PGresult *res;\n> + DbInfo *active_db = &cluster->dbarr.dbs[dbnum];\n> + PGconn *conn = connectToServer(cluster, active_db->db_name);\n>\n> There seems no need for an extra variable ('cluster') here when you\n> can just reference 'old_cluster' directly in the code, the same as\n> other functions in this file do all the time.\n\nModified\n\nThe v18 version patch attached at [1] has the changes for the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm3wyYY5ywFpCwUVW1_Di1af3WxeZggGEDQEu8qa58a7FQ%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 23 Nov 2023 14:18:52 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "I have only trivial review comments for patch v18-0001\n\n======\nsrc/bin/pg_upgrade/check.c\n\n1. check_new_cluster_subscription_configuration\n\n+ /*\n+ * A slot not created yet refers to the 'i' (initialize) state, while\n+ * 'r' (ready) state refer to a slot created previously but already\n+ * dropped. These states are supported states for upgrade. The other\n+ * states listed below are not ok:\n+ *\n+ * a) SUBREL_STATE_DATASYNC: A relation upgraded while in this state\n+ * would retain a replication slot, which could not be dropped by the\n+ * sync worker spawned after the upgrade because the subscription ID\n+ * tracked by the publisher does not match anymore.\n+ *\n+ * b) SUBREL_STATE_SYNCDONE: A relation upgraded while in this state\n+ * would retain the replication origin in certain cases.\n+ *\n+ * c) SUBREL_STATE_FINISHEDCOPY: A tablesync worker spawned to work on\n+ * a relation upgraded while in this state would expect an origin ID\n+ * with the OID of the subscription used before the upgrade, causing\n+ * it to fail.\n+ *\n+ * d) SUBREL_STATE_SYNCWAIT, SUBREL_STATE_CATCHUP and\n+ * SUBREL_STATE_UNKNOWN: These states are not stored in the catalog,\n+ * so we need not allow these states.\n+ */\n\n1a.\n/while 'r' (ready) state refer to a slot/while 'r' (ready) state\nrefers to a slot/\n\n1b.\n/These states are supported states for upgrade./These states are\nsupported for pg_upgrade./\n\n1c\n/The other states listed below are not ok./The other states listed\nbelow are not supported./\n\n======\nsrc/bin/pg_upgrade/t/004_subscription.pl\n\n2.\n+# ------------------------------------------------------\n+# Check that pg_upgrade refuses to run in:\n+# a) if there's a subscription with tables in a state different than\n+# 'r' (ready) or 'i' (init) state and/or\n+# b) if the subscription has no replication origin.\n+# ------------------------------------------------------\n\n/if there's a subscription with tables in a state different than 
'r'\n(ready) or 'i' (init) state and/if there's a subscription with tables\nin a state other than 'r' (ready) or 'i' (init) and/\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 24 Nov 2023 12:29:45 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, 24 Nov 2023 at 07:00, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I have only trivial review comments for patch v18-0001\n>\n> ======\n> src/bin/pg_upgrade/check.c\n>\n> 1. check_new_cluster_subscription_configuration\n>\n> + /*\n> + * A slot not created yet refers to the 'i' (initialize) state, while\n> + * 'r' (ready) state refer to a slot created previously but already\n> + * dropped. These states are supported states for upgrade. The other\n> + * states listed below are not ok:\n> + *\n> + * a) SUBREL_STATE_DATASYNC: A relation upgraded while in this state\n> + * would retain a replication slot, which could not be dropped by the\n> + * sync worker spawned after the upgrade because the subscription ID\n> + * tracked by the publisher does not match anymore.\n> + *\n> + * b) SUBREL_STATE_SYNCDONE: A relation upgraded while in this state\n> + * would retain the replication origin in certain cases.\n> + *\n> + * c) SUBREL_STATE_FINISHEDCOPY: A tablesync worker spawned to work on\n> + * a relation upgraded while in this state would expect an origin ID\n> + * with the OID of the subscription used before the upgrade, causing\n> + * it to fail.\n> + *\n> + * d) SUBREL_STATE_SYNCWAIT, SUBREL_STATE_CATCHUP and\n> + * SUBREL_STATE_UNKNOWN: These states are not stored in the catalog,\n> + * so we need not allow these states.\n> + */\n>\n> 1a.\n> /while 'r' (ready) state refer to a slot/while 'r' (ready) state\n> refers to a slot/\n\nModified\n\n> 1b.\n> /These states are supported states for upgrade./These states are\n> supported for pg_upgrade./\n\nModified\n\n> 1c\n> /The other states listed below are not ok./The other states listed\n> below are not supported./\n\nModified\n\n> ======\n> src/bin/pg_upgrade/t/004_subscription.pl\n>\n> 2.\n> +# ------------------------------------------------------\n> +# Check that pg_upgrade refuses to run in:\n> +# a) if there's a subscription with tables in a state different than\n> +# 'r' (ready) or 'i' 
(init) state and/or\n> +# b) if the subscription has no replication origin.\n> +# ------------------------------------------------------\n>\n> /if there's a subscription with tables in a state different than 'r'\n> (ready) or 'i' (init) state and/if there's a subscription with tables\n> in a state other than 'r' (ready) or 'i' (init) and/\n\nModified\n\nThe attached v19 version patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Fri, 24 Nov 2023 21:05:26 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 20 Nov 2023 at 05:27, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Nov 19, 2023 at 06:56:05AM +0530, vignesh C wrote:\n> > On Sun, 19 Nov 2023 at 06:52, vignesh C <vignesh21@gmail.com> wrote:\n> >> On Fri, 10 Nov 2023 at 19:26, vignesh C <vignesh21@gmail.com> wrote:\n> >>> I will analyze more on this and post the analysis in the subsequent mail.\n> >>\n> >> I analyzed further and felt that retaining subscription oid would be\n> >> cleaner as subscription/subscription_rel/replication_origin/replication_origin_status\n> >> all of these will be using the same oid as earlier and also probably\n> >> help in supporting upgrade of subscription in more scenarios later.\n> >> Here is a patch to handle the same.\n> >\n> > Sorry I had attached the older patch, here is the correct updated one.\n>\n> Thanks for digging into that. I think that we should consider that\n> once the main patch is merged and stable in the tree for v17 to get a\n> more consistent experience.\n\nYes, that approach makes sense.\n\n> Shouldn't this include a test in the new\n> TAP test for the upgrade of subscriptions? It should be as simple as\n> cross-checking the OIDs of the subscriptions before and after the\n> upgrade.\n\nAdded a test for the same.\n\nThe changes for the same are present in v19-0002 patch.\n\nRegards,\nVignesh",
"msg_date": "Sat, 25 Nov 2023 07:21:04 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Sat, Nov 25, 2023 at 7:21 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n\nFew comments on v19:\n==================\n1.\n+ <para>\n+ The subscriptions will be migrated to the new cluster in a disabled state.\n+ After migration, do this:\n+ </para>\n+\n+ <itemizedlist>\n+ <listitem>\n+ <para>\n+ Enable the subscriptions by executing\n+ <link linkend=\"sql-altersubscription\"><command>ALTER\nSUBSCRIPTION ... ENABLE</command></link>.\n\nThe reason for this restriction is not very clear to me. Is it because\nwe are using pg_dump for subscription and the existing functionality\nis doing it? If so, I think currently even connect is false.\n\n2.\n+ * b) SUBREL_STATE_SYNCDONE: A relation upgraded while in this state\n+ * would retain the replication origin in certain cases.\n\nI think this is vague. Can we briefly describe cases where the origins\nwould be retained?\n\n3. I think the cases where the publisher is also upgraded restoring\nthe origin's LSN is of no use. Currently, I can't see a problem with\nrestoring stale originLSN in such cases as we won't be able to\ndistinguish during the upgrade but I think we should document it in\nthe comments somewhere in the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 25 Nov 2023 17:50:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Here are some review comments for patch set v19*\n\n//////\n\nv19-0001.\n\nNo comments\n\n///////\n\nv19-0002.\n\n(I saw that both changes below seemed cut/paste from similar\nfunctions, but I will ask the questions anyway).\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n1.\n+/* Potentially set by pg_upgrade_support functions */\n+Oid binary_upgrade_next_pg_subscription_oid = InvalidOid;\n+\n\nThe comment \"by pg_upgrade_support functions\" seemed a bit vague. IMO\nyou might as well tell the name of the function that sets this.\n\nSUGGESTION\nPotentially set by the pg_upgrade_support function --\nbinary_upgrade_set_next_pg_subscription_oid().\n\n~~~\n\n2. CreateSubscription\n\n+ if (!OidIsValid(binary_upgrade_next_pg_subscription_oid))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"pg_subscription OID value not set when in binary upgrade mode\")));\n\nDoesn't this condition mean some kind of impossible internal error\noccurred -- i.e. should this be elog instead of ereport?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 27 Nov 2023 12:22:38 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Sat, 25 Nov 2023 at 17:50, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Nov 25, 2023 at 7:21 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> Few comments on v19:\n> ==================\n> 1.\n> + <para>\n> + The subscriptions will be migrated to the new cluster in a disabled state.\n> + After migration, do this:\n> + </para>\n> +\n> + <itemizedlist>\n> + <listitem>\n> + <para>\n> + Enable the subscriptions by executing\n> + <link linkend=\"sql-altersubscription\"><command>ALTER\n> SUBSCRIPTION ... ENABLE</command></link>.\n>\n> The reason for this restriction is not very clear to me. Is it because\n> we are using pg_dump for subscription and the existing functionality\n> is doing it? If so, I think currently even connect is false.\n\nThis was done this way so that the apply worker doesn't get started\nwhile the upgrade is happening. Now that we have set\nmax_logical_replication_workers to 0, the apply workers will not get\nstarted during the upgrade process. I think now we can create the\nsubscriptions with the same options as the old cluster in case of\nupgrade.\n\n> 2.\n> + * b) SUBREL_STATE_SYNCDONE: A relation upgraded while in this state\n> + * would retain the replication origin in certain cases.\n>\n> I think this is vague. Can we briefly describe cases where the origins\n> would be retained?\n\nI will modify this in the next version\n\n> 3. I think the cases where the publisher is also upgraded restoring\n> the origin's LSN is of no use. Currently, I can't see a problem with\n> restoring stale originLSN in such cases as we won't be able to\n> distinguish during the upgrade but I think we should document it in\n> the comments somewhere in the patch.\n\nI will add a comment for this in the next version\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 27 Nov 2023 15:18:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 3:18 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sat, 25 Nov 2023 at 17:50, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Nov 25, 2023 at 7:21 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> >\n> > Few comments on v19:\n> > ==================\n> > 1.\n> > + <para>\n> > + The subscriptions will be migrated to the new cluster in a disabled state.\n> > + After migration, do this:\n> > + </para>\n> > +\n> > + <itemizedlist>\n> > + <listitem>\n> > + <para>\n> > + Enable the subscriptions by executing\n> > + <link linkend=\"sql-altersubscription\"><command>ALTER\n> > SUBSCRIPTION ... ENABLE</command></link>.\n> >\n> > The reason for this restriction is not very clear to me. Is it because\n> > we are using pg_dump for subscription and the existing functionality\n> > is doing it? If so, I think currently even connect is false.\n>\n> This was done this way so that the apply worker doesn't get started\n> while the upgrade is happening. Now that we have set\n> max_logical_replication_workers to 0, the apply workers will not get\n> started during the upgrade process. I think now we can create the\n> subscriptions with the same options as the old cluster in case of\n> upgrade.\n>\n\nOkay, but what is your plan to change it. Currently, we are relying on\nexisting pg_dump code to dump subscriptions data, do you want to\nchange that? There is a reason for the current behavior of pg_dump\nwhich as mentioned in docs is: \"When dumping logical replication\nsubscriptions, pg_dump will generate CREATE SUBSCRIPTION commands that\nuse the connect = false option, so that restoring the subscription\ndoes not make remote connections for creating a replication slot or\nfor initial table copy. That way, the dump can be restored without\nrequiring network access to the remote servers. It is then up to the\nuser to reactivate the subscriptions in a suitable way. 
If the\ninvolved hosts have changed, the connection information might have to\nbe changed. It might also be appropriate to truncate the target tables\nbefore initiating a new full table copy.\"\n\nI guess one reason to not enable subscription after restore was that\nit can't work without origins, and also one can restore the dump in a\ntotally different environment, and one may choose not to dump all the\ncorresponding tables which I don't think is true for an upgrade. So,\nthat could be one reason to do differently for upgrades. Do we see\nreasons similar to pg_dump/restore due to which after upgrade\nsubscriptions may not work?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Nov 2023 17:12:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 27 Nov 2023 at 17:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 27, 2023 at 3:18 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Sat, 25 Nov 2023 at 17:50, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sat, Nov 25, 2023 at 7:21 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > >\n> > > Few comments on v19:\n> > > ==================\n> > > 1.\n> > > + <para>\n> > > + The subscriptions will be migrated to the new cluster in a disabled state.\n> > > + After migration, do this:\n> > > + </para>\n> > > +\n> > > + <itemizedlist>\n> > > + <listitem>\n> > > + <para>\n> > > + Enable the subscriptions by executing\n> > > + <link linkend=\"sql-altersubscription\"><command>ALTER\n> > > SUBSCRIPTION ... ENABLE</command></link>.\n> > >\n> > > The reason for this restriction is not very clear to me. Is it because\n> > > we are using pg_dump for subscription and the existing functionality\n> > > is doing it? If so, I think currently even connect is false.\n> >\n> > This was done this way so that the apply worker doesn't get started\n> > while the upgrade is happening. Now that we have set\n> > max_logical_replication_workers to 0, the apply workers will not get\n> > started during the upgrade process. I think now we can create the\n> > subscriptions with the same options as the old cluster in case of\n> > upgrade.\n> >\n>\n> Okay, but what is your plan to change it. Currently, we are relying on\n> existing pg_dump code to dump subscriptions data, do you want to\n> change that? There is a reason for the current behavior of pg_dump\n> which as mentioned in docs is: \"When dumping logical replication\n> subscriptions, pg_dump will generate CREATE SUBSCRIPTION commands that\n> use the connect = false option, so that restoring the subscription\n> does not make remote connections for creating a replication slot or\n> for initial table copy. 
That way, the dump can be restored without\n> requiring network access to the remote servers. It is then up to the\n> user to reactivate the subscriptions in a suitable way. If the\n> involved hosts have changed, the connection information might have to\n> be changed. It might also be appropriate to truncate the target tables\n> before initiating a new full table copy.\"\n>\n> I guess one reason to not enable subscription after restore was that\n> it can't work without origins, and also one can restore the dump in a\n> totally different environment, and one may choose not to dump all the\n> corresponding tables which I don't think is true for an upgrade. So,\n> that could be one reason to do differently for upgrades. Do we see\n> reasons similar to pg_dump/restore due to which after upgrade\n> subscriptions may not work?\n\nI felt that the behavior for upgrade can be slightly different than\nthe dump as the subscription relations and the replication origin will\nbe updated when the subscriber is upgraded. And as the logical\nreplication workers will not be started during the upgrade we can\npreserve the subscription enabled status too. I felt just adding an\n\"ALTER SUBSCRIPTION sub-name ENABLE\" for the subscriptions that were\nenabled in the old cluster in case of upgrade like in the attached\npatch should be fine. The behavior of dump is not changed it is\nretained as it is.\n\nRegards,\nVignesh",
"msg_date": "Tue, 28 Nov 2023 16:12:23 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Sat, 25 Nov 2023 at 17:50, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> 2.\n> + * b) SUBREL_STATE_SYNCDONE: A relation upgraded while in this state\n> + * would retain the replication origin in certain cases.\n>\n> I think this is vague. Can we briefly describe cases where the origins\n> would be retained?\n\nModified\n\n> 3. I think the cases where the publisher is also upgraded restoring\n> the origin's LSN is of no use. Currently, I can't see a problem with\n> restoring stale originLSN in such cases as we won't be able to\n> distinguish during the upgrade but I think we should document it in\n> the comments somewhere in the patch.\n\nAdded comments\n\nThese are handled in the v20 version patch attached at:\nhttps://www.postgresql.org/message-id/CALDaNm0ST1iSrJLD_CV6hQs%3Dw4GZRCRdftQvQA3cO8Hq3QUvYw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 28 Nov 2023 21:51:55 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 27 Nov 2023 at 06:53, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch set v19*\n>\n> //////\n>\n> v19-0001.\n>\n> No comments\n>\n> ///////\n>\n> v19-0002.\n>\n> (I saw that both changes below seemed cut/paste from similar\n> functions, but I will ask the questions anyway).\n>\n> ======\n> src/backend/commands/subscriptioncmds.c\n>\n> 1.\n> +/* Potentially set by pg_upgrade_support functions */\n> +Oid binary_upgrade_next_pg_subscription_oid = InvalidOid;\n> +\n>\n> The comment \"by pg_upgrade_support functions\" seemed a bit vague. IMO\n> you might as well tell the name of the function that sets this.\n>\n> SUGGESTION\n> Potentially set by the pg_upgrade_support function --\n> binary_upgrade_set_next_pg_subscription_oid().\n\nModified\n\n> ~~~\n>\n> 2. CreateSubscription\n>\n> + if (!OidIsValid(binary_upgrade_next_pg_subscription_oid))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"pg_subscription OID value not set when in binary upgrade mode\")));\n>\n> Doesn't this condition mean some kind of impossible internal error\n> occurred -- i.e. should this be elog instead of ereport?\n\nThis is kind of a sanity check to prevent setting the subscription id\nwith an invalid oid. This can happen if the server is started in\nbinary upgrade mode and create subscription is called without calling\nbinary_upgrade_set_next_pg_subscription_oid.\n\nThe comment is handled in the v20 version patch attached at:\nhttps://www.postgresql.org/message-id/CALDaNm0ST1iSrJLD_CV6hQs%3Dw4GZRCRdftQvQA3cO8Hq3QUvYw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 28 Nov 2023 21:53:28 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Nov 28, 2023 at 4:12 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n\nFew comments on the latest patch:\n===========================\n1.\n+ if (fout->remoteVersion >= 170000)\n+ appendPQExpBufferStr(query, \" o.remote_lsn AS suboriginremotelsn,\\n\");\n+ else\n+ appendPQExpBufferStr(query, \" NULL AS suboriginremotelsn,\\n\");\n+\n+ if (dopt->binary_upgrade && fout->remoteVersion >= 170000)\n+ appendPQExpBufferStr(query, \" s.subenabled\\n\");\n+ else\n+ appendPQExpBufferStr(query, \" false AS subenabled\\n\");\n+\n+ appendPQExpBufferStr(query,\n+ \"FROM pg_subscription s\\n\");\n+\n+ if (fout->remoteVersion >= 170000)\n+ appendPQExpBufferStr(query,\n+ \"LEFT JOIN pg_catalog.pg_replication_origin_status o \\n\"\n+ \" ON o.external_id = 'pg_' || s.oid::text \\n\");\n\nWhy 'subenabled' have a check for binary_upgrade but\n'suboriginremotelsn' doesn't?\n\n2.\n+Datum\n+binary_upgrade_add_sub_rel_state(PG_FUNCTION_ARGS)\n+{\n+ Relation rel;\n+ HeapTuple tup;\n+ Oid subid;\n+ Form_pg_subscription form;\n+ char *subname;\n+ Oid relid;\n+ char relstate;\n+ XLogRecPtr sublsn;\n+\n+ CHECK_IS_BINARY_UPGRADE;\n+\n+ /* We must check these things before dereferencing the arguments */\n+ if (PG_ARGISNULL(0) || PG_ARGISNULL(1) || PG_ARGISNULL(2))\n+ elog(ERROR, \"null argument to binary_upgrade_add_sub_rel_state is\nnot allowed\");\n+\n+ subname = text_to_cstring(PG_GETARG_TEXT_PP(0));\n+ relid = PG_GETARG_OID(1);\n+ relstate = PG_GETARG_CHAR(2);\n+ sublsn = PG_ARGISNULL(3) ? InvalidXLogRecPtr : PG_GETARG_LSN(3);\n+\n+ tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));\n+ if (!HeapTupleIsValid(tup))\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"relation %u does not exist\", relid));\n+ ReleaseSysCache(tup);\n+\n+ rel = table_open(SubscriptionRelationId, RowExclusiveLock);\n\nWhy there is no locking for relation? 
I see that during subscription\noperation, we do acquire AccessShareLock on the relation before adding\na corresponding entry in pg_subscription_rel. See the following code:\n\nCreateSubscription()\n{\n...\nforeach(lc, tables)\n{\nRangeVar *rv = (RangeVar *) lfirst(lc);\nOid relid;\n\nrelid = RangeVarGetRelid(rv, AccessShareLock, false);\n\n/* Check for supported relkind. */\nCheckSubscriptionRelkind(get_rel_relkind(relid),\nrv->schemaname, rv->relname);\n\nAddSubscriptionRelState(subid, relid, table_state,\nInvalidXLogRecPtr);\n...\n}\n\n3.\n+Datum\n+binary_upgrade_add_sub_rel_state(PG_FUNCTION_ARGS)\n{\n...\n...\n+ AddSubscriptionRelState(subid, relid, relstate, sublsn);\n...\n}\n\nI see a problem with directly using this function which is that it\ndoesn't release locks which means it expects either the caller to\nrelease those locks or postpone to release them at the transaction\nend. However, all the other binary_upgrade support functions don't\npostpone releasing locks till the transaction ends. I think we should\nadd an additional parameter to indicate whether we want to release\nlocks and then pass it true from the binary upgrade support function.\n\n4.\nextern void getPublicationTables(Archive *fout, TableInfo tblinfo[],\n int numTables);\n extern void getSubscriptions(Archive *fout);\n+extern void getSubscriptionTables(Archive *fout);\n\ngetSubscriptions() and getSubscriptionTables() are defined in the\nopposite order in .c file. I think it is better to change the order in\n.c file unless there is a reason for not doing so.\n\n5. At this stage, no need to update/send the 0002 patch, we can look\nat it after the main patch is committed. That is anyway not directly\nrelated to the main patch.\n\nApart from the above, I have modified a few comments and messages in\nthe attached. Kindly review and include the changes if you are fine\nwith those.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 29 Nov 2023 15:02:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Here are some review comments for patch v20-0001\n\n======\n\n1. getSubscriptions\n\n+ if (dopt->binary_upgrade && fout->remoteVersion >= 170000)\n+ appendPQExpBufferStr(query, \" s.subenabled\\n\");\n+ else\n+ appendPQExpBufferStr(query, \" false AS subenabled\\n\");\n\nProbably I misunderstood this logic... AFAIK the CREATE SUBSCRIPTION\nis normally default *enabled*, so why does this code set default\ndifferently as 'false'. OTOH, if this is some special case default\nneeded because the subscription upgrade is not supported before PG17\nthen maybe it needs a comment to explain.\n\n~~~\n\n2. dumpSubscription\n\n+ if (strcmp(subinfo->subenabled, \"t\") == 0)\n+ {\n+ appendPQExpBufferStr(query,\n+ \"\\n-- For binary upgrade, must preserve the subscriber's running state.\\n\");\n+ appendPQExpBuffer(query, \"ALTER SUBSCRIPTION %s ENABLE;\\n\", qsubname);\n+ }\n\n(this is a bit similar to previous comment)\n\nProbably I misunderstood this logic... but AFAIK the CREATE\nSUBSCRIPTION is normally default *enabled*. In the CREATE SUBSCRIPTION\ntop of this function I did not see any \"enabled=xxx\" code, so won't\nthis just default to enabled=true per normal. 
In other words, what\nhappens if the subscription being upgraded was already DISABLED -- How\ndoes it remain disabled still after upgrade?\n\nBut I saw there is a test case for this so perhaps the code is fine?\nMaybe it just needs more explanatory comments for this area?\n\n======\nsrc/bin/pg_upgrade/t/004_subscription.pl\n\n3.\n+# The subscription's running status should be preserved\n+my $result =\n+ $new_sub->safe_psql('postgres',\n+ \"SELECT subenabled FROM pg_subscription WHERE subname = 'regress_sub'\");\n+is($result, qq(f),\n+ \"check that the subscriber that was disable on the old subscriber\nshould be disabled in the new subscriber\"\n+);\n+$result =\n+ $new_sub->safe_psql('postgres',\n+ \"SELECT subenabled FROM pg_subscription WHERE subname = 'regress_sub1'\");\n+is($result, qq(t),\n+ \"check that the subscriber that was enabled on the old subscriber\nshould be enabled in the new subscriber\"\n+);\n+$new_sub->safe_psql('postgres', \"DROP SUBSCRIPTION regress_sub1\");\n+\n\nBEFORE\ncheck that the subscriber that was disable on the old subscriber\nshould be disabled in the new subscriber\n\nSUGGESTION\ncheck that a subscriber that was disabled on the old subscriber is\ndisabled on the new subscriber\n\n~\n\nBEFORE\ncheck that the subscriber that was enabled on the old subscriber\nshould be enabled in the new subscriber\n\nSUGGESTION\ncheck that a subscriber that was enabled on the old subscriber is\nenabled on the new subscriber\n\n~~~\n\n4.\n+is($result, qq($remote_lsn), \"remote_lsn should have been preserved\");\n+\n+\n+# Check the number of rows for each table on each server\n\n\nDouble blank lines.\n\n~~~\n\n5.\n+$old_sub->safe_psql('postgres', \"ALTER SUBSCRIPTION regress_sub1 DISABLE\");\n+$old_sub->safe_psql('postgres',\n+ \"ALTER SUBSCRIPTION regress_sub1 SET (slot_name = none)\");\n+$old_sub->safe_psql('postgres', \"DROP SUBSCRIPTION regress_sub1\");\n+\n\nProbably it would be tidier to combine all of those.\n\n======\nKind Regards,\nPeter 
Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 30 Nov 2023 12:06:48 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Nov 30, 2023 at 12:06 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch v20-0001\n>\n> 3.\n> +# The subscription's running status should be preserved\n> +my $result =\n> + $new_sub->safe_psql('postgres',\n> + \"SELECT subenabled FROM pg_subscription WHERE subname = 'regress_sub'\");\n> +is($result, qq(f),\n> + \"check that the subscriber that was disable on the old subscriber\n> should be disabled in the new subscriber\"\n> +);\n> +$result =\n> + $new_sub->safe_psql('postgres',\n> + \"SELECT subenabled FROM pg_subscription WHERE subname = 'regress_sub1'\");\n> +is($result, qq(t),\n> + \"check that the subscriber that was enabled on the old subscriber\n> should be enabled in the new subscriber\"\n> +);\n> +$new_sub->safe_psql('postgres', \"DROP SUBSCRIPTION regress_sub1\");\n> +\n>\n> BEFORE\n> check that the subscriber that was disable on the old subscriber\n> should be disabled in the new subscriber\n>\n> SUGGESTION\n> check that a subscriber that was disabled on the old subscriber is\n> disabled on the new subscriber\n>\n> ~\n>\n> BEFORE\n> check that the subscriber that was enabled on the old subscriber\n> should be enabled in the new subscriber\n>\n> SUGGESTION\n> check that a subscriber that was enabled on the old subscriber is\n> enabled on the new subscriber\n>\n\nOops. I think that should have been \"subscription\", not \"subscriber\". i.e.\n\nSUGGESTION\ncheck that a subscription that was disabled on the old subscriber is\ndisabled on the new subscriber\n\nand\n\nSUGGESTION\ncheck that a subscription that was enabled on the old subscriber is\nenabled on the new subscriber\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 30 Nov 2023 12:13:46 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Nov 30, 2023 at 6:37 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch v20-0001\n>\n> ======\n>\n> 1. getSubscriptions\n>\n> + if (dopt->binary_upgrade && fout->remoteVersion >= 170000)\n> + appendPQExpBufferStr(query, \" s.subenabled\\n\");\n> + else\n> + appendPQExpBufferStr(query, \" false AS subenabled\\n\");\n>\n> Probably I misunderstood this logic... AFAIK the CREATE SUBSCRIPTION\n> is normally default *enabled*, so why does this code set default\n> differently as 'false'. OTOH, if this is some special case default\n> needed because the subscription upgrade is not supported before PG17\n> then maybe it needs a comment to explain.\n>\n\nYes, it is for prior versions. By default subscriptions are restored\ndisabled even if they are enabled before dump. See docs [1] for\nreasons (When dumping logical replication subscriptions, ..). I don't\nthink we need a comment here as that is a norm we use at other similar\nplaces where we do version checking. We can argue that there could be\nmore comments as to why the 'connect' is false and if those are really\nrequired, we should do that as a separate patch.\n\n[1] - https://www.postgresql.org/docs/devel/app-pgdump.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Nov 2023 08:28:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Nov 29, 2023 at 3:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n\nIn general, the test cases are a bit complex to understand, so, it\nwill be difficult to enhance these later. The complexity comes from\nthe fact that one upgrade test is trying to test multiple things (a)\nEnabled/Disabled subscriptions; (b) relation states 'i' and 'r' are\npreserved after the upgrade. (c) rows from non-refreshed tables are\nnot copied, etc. I understand that you may want to cover as many\nthings possible in one test to have fewer upgrade tests which could\nsave some time but I think it makes the test somewhat difficult to\nunderstand and enhance. Can we try to split it such that (a) and (b)\nare tested in one test and others could be separated out?\n\nFew other comments:\n===================\n1.\n+$old_sub->safe_psql('postgres',\n+ \"CREATE SUBSCRIPTION regress_sub CONNECTION '$connstr' PUBLICATION\nregress_pub\"\n+);\n+\n+$old_sub->wait_for_subscription_sync($publisher, 'regress_sub');\n+\n+# After the above wait_for_subscription_sync call the table can be either in\n+# 'syncdone' or in 'ready' state. Now wait till the table reaches\n'ready' state.\n+my $synced_query =\n+ \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'r'\";\n+$old_sub->poll_query_until('postgres', $synced_query)\n+ or die \"Timed out while waiting for the table to reach ready state\";\n\nCan the table be in 'i' state after above test? 
If not, then above\ncomment is misleading.\n\n2.\n+# ------------------------------------------------------\n+# Check that pg_upgrade is successful when all tables are in ready or in\n+# init state.\n+# ------------------------------------------------------\n+$publisher->safe_psql('postgres',\n+ \"INSERT INTO tab_upgraded1 VALUES (generate_series(2,50), 'before\ninitial sync')\"\n+);\n+$publisher->wait_for_catchup('regress_sub');\n\nThe previous comment applies to this one as well.\n\n3.\n+$publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\n+$old_sub->safe_psql('postgres',\n+ \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr' PUBLICATION\nregress_pub1\"\n+);\n+$old_sub->wait_for_subscription_sync($publisher, 'regress_sub1');\n+\n+# Change configuration to prepare a subscription table in init state\n+$old_sub->append_conf('postgresql.conf',\n+ \"max_logical_replication_workers = 0\");\n+$old_sub->restart;\n+\n+# Add tab_upgraded2 to the publication. Now publication has tab_upgraded1\n+# and tab_upgraded2 tables.\n+$publisher->safe_psql('postgres',\n+ \"ALTER PUBLICATION regress_pub ADD TABLE tab_upgraded2\");\n+\n+$old_sub->safe_psql('postgres',\n+ \"ALTER SUBSCRIPTION regress_sub REFRESH PUBLICATION\");\n\nThese two cases for Create and Alter look confusing. I think it would\nbe better if Alter's case is moved before the comment: \"Check that\npg_upgrade is successful when all tables are in ready or in init\nstate.\".\n\n4.\n+# Insert a row in tab_upgraded1 and tab_not_upgraded1 publisher table while\n+# it's down.\n+insert_line_at_pub('while old_sub is down');\n\nIsn't sub routine insert_line_at_pub() inserts in all three tables? 
If\nso, then the above comment seems to be wrong and I think it is better\nto explain the intention of this insert.\n\n5.\n+my $result =\n+ $new_sub->safe_psql('postgres',\n+ \"SELECT subenabled FROM pg_subscription WHERE subname = 'regress_sub'\");\n+is($result, qq(f),\n+ \"check that the subscriber that was disable on the old subscriber\nshould be disabled in the new subscriber\"\n+);\n+$result =\n+ $new_sub->safe_psql('postgres',\n+ \"SELECT subenabled FROM pg_subscription WHERE subname = 'regress_sub1'\");\n+is($result, qq(t),\n+ \"check that the subscriber that was enabled on the old subscriber\nshould be enabled in the new subscriber\"\n+);\n\nCan't the above be tested with a single query?\n\n6.\n+$new_sub->safe_psql('postgres', \"DROP SUBSCRIPTION regress_sub1\");\n+\n+# Subscription relations should be preserved. The upgraded subscriber\nwon't know\n+# about 'tab_not_upgraded1' because the subscription is not yet refreshed.\n+$result =\n+ $new_sub->safe_psql('postgres', \"SELECT count(*) FROM pg_subscription_rel\");\n+is($result, qq(2),\n+ \"there should be 2 rows in pg_subscription_rel(representing\ntab_upgraded1 and tab_upgraded2)\"\n+);\n\nHere the DROP SUBSCRIPTION looks confusing. Let's try to move it after\nthe verification of objects after the upgrade.\n\n7.\n1.\n+sub insert_line_at_pub\n+{\n+ my $payload = shift;\n+\n+ foreach (\"tab_upgraded1\", \"tab_upgraded2\", \"tab_not_upgraded1\")\n+ {\n+ $publisher->safe_psql('postgres',\n+ \"INSERT INTO \" . $_ . \" (val) VALUES('$payload')\");\n+ }\n+}\n+\n+# Initial setup\n+foreach (\"tab_upgraded1\", \"tab_upgraded2\", \"tab_not_upgraded1\")\n+{\n+ $publisher->safe_psql('postgres',\n+ \"CREATE TABLE \" . $_ . \" (id serial, val text)\");\n+ $old_sub->safe_psql('postgres',\n+ \"CREATE TABLE \" . $_ . 
\" (id serial, val text)\");\n+}\n+insert_line_at_pub('before initial sync');\n\nThis makes the test slightly difficult to understand and we don't seem\nto achieve much by writing sub routines.\n\n--\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Nov 2023 13:35:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, 29 Nov 2023 at 15:02, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 28, 2023 at 4:12 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> Few comments on the latest patch:\n> ===========================\n> 1.\n> + if (fout->remoteVersion >= 170000)\n> + appendPQExpBufferStr(query, \" o.remote_lsn AS suboriginremotelsn,\\n\");\n> + else\n> + appendPQExpBufferStr(query, \" NULL AS suboriginremotelsn,\\n\");\n> +\n> + if (dopt->binary_upgrade && fout->remoteVersion >= 170000)\n> + appendPQExpBufferStr(query, \" s.subenabled\\n\");\n> + else\n> + appendPQExpBufferStr(query, \" false AS subenabled\\n\");\n> +\n> + appendPQExpBufferStr(query,\n> + \"FROM pg_subscription s\\n\");\n> +\n> + if (fout->remoteVersion >= 170000)\n> + appendPQExpBufferStr(query,\n> + \"LEFT JOIN pg_catalog.pg_replication_origin_status o \\n\"\n> + \" ON o.external_id = 'pg_' || s.oid::text \\n\");\n>\n> Why 'subenabled' have a check for binary_upgrade but\n> 'suboriginremotelsn' doesn't?\n\nCombined these two now.\n\n> 2.\n> +Datum\n> +binary_upgrade_add_sub_rel_state(PG_FUNCTION_ARGS)\n> +{\n> + Relation rel;\n> + HeapTuple tup;\n> + Oid subid;\n> + Form_pg_subscription form;\n> + char *subname;\n> + Oid relid;\n> + char relstate;\n> + XLogRecPtr sublsn;\n> +\n> + CHECK_IS_BINARY_UPGRADE;\n> +\n> + /* We must check these things before dereferencing the arguments */\n> + if (PG_ARGISNULL(0) || PG_ARGISNULL(1) || PG_ARGISNULL(2))\n> + elog(ERROR, \"null argument to binary_upgrade_add_sub_rel_state is\n> not allowed\");\n> +\n> + subname = text_to_cstring(PG_GETARG_TEXT_PP(0));\n> + relid = PG_GETARG_OID(1);\n> + relstate = PG_GETARG_CHAR(2);\n> + sublsn = PG_ARGISNULL(3) ? 
InvalidXLogRecPtr : PG_GETARG_LSN(3);\n> +\n> + tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));\n> + if (!HeapTupleIsValid(tup))\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"relation %u does not exist\", relid));\n> + ReleaseSysCache(tup);\n> +\n> + rel = table_open(SubscriptionRelationId, RowExclusiveLock);\n>\n> Why there is no locking for relation? I see that during subscription\n> operation, we do acquire AccessShareLock on the relation before adding\n> a corresponding entry in pg_subscription_rel. See the following code:\n>\n> CreateSubscription()\n> {\n> ...\n> foreach(lc, tables)\n> {\n> RangeVar *rv = (RangeVar *) lfirst(lc);\n> Oid relid;\n>\n> relid = RangeVarGetRelid(rv, AccessShareLock, false);\n>\n> /* Check for supported relkind. */\n> CheckSubscriptionRelkind(get_rel_relkind(relid),\n> rv->schemaname, rv->relname);\n>\n> AddSubscriptionRelState(subid, relid, table_state,\n> InvalidXLogRecPtr);\n> ...\n> }\n\nModified\n\n> 3.\n> +Datum\n> +binary_upgrade_add_sub_rel_state(PG_FUNCTION_ARGS)\n> {\n> ...\n> ...\n> + AddSubscriptionRelState(subid, relid, relstate, sublsn);\n> ...\n> }\n>\n> I see a problem with directly using this function which is that it\n> doesn't release locks which means it expects either the caller to\n> release those locks or postpone to release them at the transaction\n> end. However, all the other binary_upgrade support functions don't\n> postpone releasing locks till the transaction ends. I think we should\n> add an additional parameter to indicate whether we want to release\n> locks and then pass it true from the binary upgrade support function.\n\nModified\n\n> 4.\n> extern void getPublicationTables(Archive *fout, TableInfo tblinfo[],\n> int numTables);\n> extern void getSubscriptions(Archive *fout);\n> +extern void getSubscriptionTables(Archive *fout);\n>\n> getSubscriptions() and getSubscriptionTables() are defined in the\n> opposite order in .c file. 
I think it is better to change the order in\n> .c file unless there is a reason for not doing so.\n\nModified\n\n> 5. At this stage, no need to update/send the 0002 patch, we can look\n> at it after the main patch is committed. That is anyway not directly\n> related to the main patch.\n\nRemoved it from this version.\n\n> Apart from the above, I have modified a few comments and messages in\n> the attached. Kindly review and include the changes if you are fine\n> with those.\n\nMerged them.\n\nThe attached v21 version patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Thu, 30 Nov 2023 21:55:21 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, 30 Nov 2023 at 06:37, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch v20-0001\n>\n> ======\n>\n> 1. getSubscriptions\n>\n> + if (dopt->binary_upgrade && fout->remoteVersion >= 170000)\n> + appendPQExpBufferStr(query, \" s.subenabled\\n\");\n> + else\n> + appendPQExpBufferStr(query, \" false AS subenabled\\n\");\n>\n> Probably I misunderstood this logic... AFAIK the CREATE SUBSCRIPTION\n> is normally default *enabled*, so why does this code set default\n> differently as 'false'. OTOH, if this is some special case default\n> needed because the subscription upgrade is not supported before PG17\n> then maybe it needs a comment to explain.\n\nNo changes needed to be done in this case, explanation for the same is\ngiven at [1]\n\n> ~~~\n>\n> 2. dumpSubscription\n>\n> + if (strcmp(subinfo->subenabled, \"t\") == 0)\n> + {\n> + appendPQExpBufferStr(query,\n> + \"\\n-- For binary upgrade, must preserve the subscriber's running state.\\n\");\n> + appendPQExpBuffer(query, \"ALTER SUBSCRIPTION %s ENABLE;\\n\", qsubname);\n> + }\n>\n> (this is a bit similar to previous comment)\n>\n> Probably I misunderstood this logic... but AFAIK the CREATE\n> SUBSCRIPTION is normally default *enabled*. In the CREATE SUBSCRIPTION\n> top of this function I did not see any \"enabled=xxx\" code, so won't\n> this just default to enabled=true per normal. 
In other words, what\n> happens if the subscription being upgraded was already DISABLED -- How\n> does it remain disabled still after upgrade?\n>\n> But I saw there is a test case for this so perhaps the code is fine?\n> Maybe it just needs more explanatory comments for this area?\n\nNo changes needed to be done in this case, explanation for the same is\ngiven at [1]\n\n> ======\n> src/bin/pg_upgrade/t/004_subscription.pl\n>\n> 3.\n> +# The subscription's running status should be preserved\n> +my $result =\n> + $new_sub->safe_psql('postgres',\n> + \"SELECT subenabled FROM pg_subscription WHERE subname = 'regress_sub'\");\n> +is($result, qq(f),\n> + \"check that the subscriber that was disable on the old subscriber\n> should be disabled in the new subscriber\"\n> +);\n> +$result =\n> + $new_sub->safe_psql('postgres',\n> + \"SELECT subenabled FROM pg_subscription WHERE subname = 'regress_sub1'\");\n> +is($result, qq(t),\n> + \"check that the subscriber that was enabled on the old subscriber\n> should be enabled in the new subscriber\"\n> +);\n> +$new_sub->safe_psql('postgres', \"DROP SUBSCRIPTION regress_sub1\");\n> +\n>\n> BEFORE\n> check that the subscriber that was disable on the old subscriber\n> should be disabled in the new subscriber\n>\n> SUGGESTION\n> check that a subscriber that was disabled on the old subscriber is\n> disabled on the new subscriber\n> ~\n>\n> BEFORE\n> check that the subscriber that was enabled on the old subscriber\n> should be enabled in the new subscriber\n>\n> SUGGESTION\n> check that a subscriber that was enabled on the old subscriber is\n> enabled on the new subscriber\n\nThese statements are combined now\n\n> ~~~\n>\n> 4.\n> +is($result, qq($remote_lsn), \"remote_lsn should have been preserved\");\n> +\n> +\n> +# Check the number of rows for each table on each server\n>\n>\n> Double blank lines.\n\nModified\n\n> ~~~\n>\n> 5.\n> +$old_sub->safe_psql('postgres', \"ALTER SUBSCRIPTION regress_sub1 DISABLE\");\n> 
+$old_sub->safe_psql('postgres',\n> + \"ALTER SUBSCRIPTION regress_sub1 SET (slot_name = none)\");\n> +$old_sub->safe_psql('postgres', \"DROP SUBSCRIPTION regress_sub1\");\n> +\n>\n> Probably it would be tidier to combine all of those.\n\nModified\n\nThe changes for the same are present in the v21 version patch attached at [2]\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JpWkRBFMDC3wOCK%3DHzCXg8XT1jH-tWb%3Db%2B%2B_8YS2%3DQSQ%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CALDaNm37E4tmSZd%2Bk1ixtKevX3eucmhdOnw4pGmykZk4C1Nm4Q%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 30 Nov 2023 22:16:04 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, 30 Nov 2023 at 13:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 29, 2023 at 3:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> In general, the test cases are a bit complex to understand, so, it\n> will be difficult to enhance these later. The complexity comes from\n> the fact that one upgrade test is trying to test multiple things (a)\n> Enabled/Disabled subscriptions; (b) relation states 'i' and 'r' are\n> preserved after the upgrade. (c) rows from non-refreshed tables are\n> not copied, etc. I understand that you may want to cover as many\n> things possible in one test to have fewer upgrade tests which could\n> save some time but I think it makes the test somewhat difficult to\n> understand and enhance. Can we try to split it such that (a) and (b)\n> are tested in one test and others could be separated out?\n\nYes, I had tried to combine a few tests as it was taking more time to\nrun. I have refactored the tests by removing tab_not_upgraded1 related\ntest which is more of a logical replication test, adding more\ncomments, removing intermediate select count checks. So now we have\ntest1) which checks for upgrade with subscriber having table in\ninit/ready state, test2) Check that the data inserted to the publisher\nwhen the subscriber is down will be replicated to the new subscriber\nonce the new subscriber is started (these are done as continuation of\nthe previous test). test3) Check that pg_upgrade fails when\nmax_replication_slots configured in the new cluster is less than the\nnumber of subscriptions in the old cluster. test4) Check upgrade fails\nwith old instance with relation in 'd' datasync(invalid) state and\nmissing replication origin.\nIn test4 I have combined both datasync relation state and missing\nreplication origin as the validation for both is in the same file. 
I\nfelt the readability is better now, do let me know if any of the tests\nare still difficult to understand.\n\n> Few other comments:\n> ===================\n> 1.\n> +$old_sub->safe_psql('postgres',\n> + \"CREATE SUBSCRIPTION regress_sub CONNECTION '$connstr' PUBLICATION\n> regress_pub\"\n> +);\n> +\n> +$old_sub->wait_for_subscription_sync($publisher, 'regress_sub');\n> +\n> +# After the above wait_for_subscription_sync call the table can be either in\n> +# 'syncdone' or in 'ready' state. Now wait till the table reaches\n> 'ready' state.\n> +my $synced_query =\n> + \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'r'\";\n> +$old_sub->poll_query_until('postgres', $synced_query)\n> + or die \"Timed out while waiting for the table to reach ready state\";\n>\n> Can the table be in 'i' state after above test? If not, then above\n> comment is misleading.\n\nThis part of the test is to get the table in ready state. Modified the\ncomments appropriately.\n\n> 2.\n> +# ------------------------------------------------------\n> +# Check that pg_upgrade is successful when all tables are in ready or in\n> +# init state.\n> +# ------------------------------------------------------\n> +$publisher->safe_psql('postgres',\n> + \"INSERT INTO tab_upgraded1 VALUES (generate_series(2,50), 'before\n> initial sync')\"\n> +);\n> +$publisher->wait_for_catchup('regress_sub');\n>\n> The previous comment applies to this one as well.\n\nI have removed this comment and moved it before the upgrade command as\nit is more appropriate there.\n\n> 3.\n> +$publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\n> +$old_sub->safe_psql('postgres',\n> + \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr' PUBLICATION\n> regress_pub1\"\n> +);\n> +$old_sub->wait_for_subscription_sync($publisher, 'regress_sub1');\n> +\n> +# Change configuration to prepare a subscription table in init state\n> +$old_sub->append_conf('postgresql.conf',\n> + \"max_logical_replication_workers = 
0\");\n> +$old_sub->restart;\n> +\n> +# Add tab_upgraded2 to the publication. Now publication has tab_upgraded1\n> +# and tab_upgraded2 tables.\n> +$publisher->safe_psql('postgres',\n> + \"ALTER PUBLICATION regress_pub ADD TABLE tab_upgraded2\");\n> +\n> +$old_sub->safe_psql('postgres',\n> + \"ALTER SUBSCRIPTION regress_sub REFRESH PUBLICATION\");\n>\n> These two cases for Create and Alter look confusing. I think it would\n> be better if Alter's case is moved before the comment: \"Check that\n> pg_upgrade is successful when all tables are in ready or in init\n> state.\".\n\nI have added more comments to make it clear now. I have moved the\n\"check that pg_upgrade is successful when all tables ...\" before the\nupgrade command to be more clearer. Added comment \"Pre-setup for\npreparing subscription table in init state. Add tab_upgraded2 to the\npublication.\" and \"# The table tab_upgraded2 will be in init state as\nthe subscriber configuration for max_logical_replication_workers is\nset to 0.\"\n\n> 4.\n> +# Insert a row in tab_upgraded1 and tab_not_upgraded1 publisher table while\n> +# it's down.\n> +insert_line_at_pub('while old_sub is down');\n>\n> Isn't sub routine insert_line_at_pub() inserts in all three tables? 
If\n> so, then the above comment seems to be wrong and I think it is better\n> to explain the intention of this insert.\n\nModified\n\n> 5.\n> +my $result =\n> + $new_sub->safe_psql('postgres',\n> + \"SELECT subenabled FROM pg_subscription WHERE subname = 'regress_sub'\");\n> +is($result, qq(f),\n> + \"check that the subscriber that was disable on the old subscriber\n> should be disabled in the new subscriber\"\n> +);\n> +$result =\n> + $new_sub->safe_psql('postgres',\n> + \"SELECT subenabled FROM pg_subscription WHERE subname = 'regress_sub1'\");\n> +is($result, qq(t),\n> + \"check that the subscriber that was enabled on the old subscriber\n> should be enabled in the new subscriber\"\n> +);\n>\n> Can't the above be tested with a single query?\n\nModified\n\n> 6.\n> +$new_sub->safe_psql('postgres', \"DROP SUBSCRIPTION regress_sub1\");\n> +\n> +# Subscription relations should be preserved. The upgraded subscriber\n> won't know\n> +# about 'tab_not_upgraded1' because the subscription is not yet refreshed.\n> +$result =\n> + $new_sub->safe_psql('postgres', \"SELECT count(*) FROM pg_subscription_rel\");\n> +is($result, qq(2),\n> + \"there should be 2 rows in pg_subscription_rel(representing\n> tab_upgraded1 and tab_upgraded2)\"\n> +);\n>\n> Here the DROP SUBSCRIPTION looks confusing. Let's try to move it after\n> the verification of objects after the upgrade.\n\nI have removed this now, no need to move it to down as we will be\nstopping the newsub server at the end of this test and this newsub\nwill not be used later.\n\n> 7.\n> 1.\n> +sub insert_line_at_pub\n> +{\n> + my $payload = shift;\n> +\n> + foreach (\"tab_upgraded1\", \"tab_upgraded2\", \"tab_not_upgraded1\")\n> + {\n> + $publisher->safe_psql('postgres',\n> + \"INSERT INTO \" . $_ . \" (val) VALUES('$payload')\");\n> + }\n> +}\n> +\n> +# Initial setup\n> +foreach (\"tab_upgraded1\", \"tab_upgraded2\", \"tab_not_upgraded1\")\n> +{\n> + $publisher->safe_psql('postgres',\n> + \"CREATE TABLE \" . $_ . 
\" (id serial, val text)\");\n> + $old_sub->safe_psql('postgres',\n> + \"CREATE TABLE \" . $_ . \" (id serial, val text)\");\n> +}\n> +insert_line_at_pub('before initial sync');\n>\n> This makes the test slightly difficult to understand and we don't seem\n> to achieve much by writing sub routines.\n\nRemoved the subroutines.\n\nThe changes for the same is available at:\nhttps://www.postgresql.org/message-id/CALDaNm37E4tmSZd%2Bk1ixtKevX3eucmhdOnw4pGmykZk4C1Nm4Q%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 30 Nov 2023 22:35:21 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Here are review comments for patch v21-0001\n\n======\nsrc/bin/pg_upgrade/check.c\n\n1. check_old_cluster_subscription_state\n\n+/*\n+ * check_old_cluster_subscription_state()\n+ *\n+ * Verify that each of the subscriptions has all their corresponding tables in\n+ * i (initialize) or r (ready).\n+ */\n+static void\n+check_old_cluster_subscription_state(void)\n\nFunction comment should also mention it also validates the origin.\n\n~~~\n\n2.\nIn this function there are a couple of errors written to the\n\"subs_invalid.txt\" file:\n\n+ fprintf(script, \"replication origin is missing for database:\\\"%s\\\"\nsubscription:\\\"%s\\\"\\n\",\n+ PQgetvalue(res, i, 0),\n+ PQgetvalue(res, i, 1));\n\nand\n\n+ fprintf(script, \"database:\\\"%s\\\" subscription:\\\"%s\\\" schema:\\\"%s\\\"\nrelation:\\\"%s\\\" state:\\\"%s\\\" not in required state\\n\",\n+ active_db->db_name,\n+ PQgetvalue(res, i, 0),\n+ PQgetvalue(res, i, 1),\n+ PQgetvalue(res, i, 2),\n+ PQgetvalue(res, i, 3));\n\nThe format of those messages is not consistent. It could be improved\nin a number of ways to make them more similar. e.g. 
below.\n\nSUGGESTION #1\nthe replication origin is missing for database:\\\"%s\\\" subscription:\\\"%s\\\"\\n\nthe table sync state \\\"%s\\\" is not allowed for database:\\\"%s\\\"\nsubscription:\\\"%s\\\" schema:\\\"%s\\\" relation:\\\"%s\\\"\\n\n\nSUGGESTION #2\ndatabase:\\\"%s\\\" subscription:\\\"%s\\\" -- replication origin is missing\\n\ndatabase:\\\"%s\\\" subscription:\\\"%s\\\" schema:\\\"%s\\\" relation:\\\"%s\\\" --\nupgrade when table sync state is \\\"%s\\\" is not supported\\n\n\netc.\n\n======\nsrc/bin/pg_upgrade/t/004_subscription.pl\n\n3.\n+# Initial setup\n+$publisher->safe_psql('postgres', \"CREATE TABLE tab_upgraded1(id int)\");\n+$publisher->safe_psql('postgres', \"CREATE TABLE tab_upgraded2(id int)\");\n+$old_sub->safe_psql('postgres', \"CREATE TABLE tab_upgraded1(id int)\");\n+$old_sub->safe_psql('postgres', \"CREATE TABLE tab_upgraded2(id int)\");\n\nIMO it is tidier to combine multiple DDLS whenever you can.\n\n~~~\n\n4.\n+# Create a subscription in enabled state before upgrade\n+$publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\n+$old_sub->safe_psql('postgres',\n+ \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr' PUBLICATION\nregress_pub1\"\n+);\n+$old_sub->wait_for_subscription_sync($publisher, 'regress_sub1');\n\nThat publication has an empty set of tables. 
Should there be some\ncomment to explain why it is OK like this?\n\n~~~\n\n5.\n+# Wait till the table tab_upgraded1 reaches 'ready' state\n+my $synced_query =\n+ \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'r'\";\n+$old_sub->poll_query_until('postgres', $synced_query)\n+ or die \"Timed out while waiting for the table to reach ready state\";\n+\n+$publisher->safe_psql('postgres',\n+ \"INSERT INTO tab_upgraded1 VALUES (generate_series(1,50))\"\n+);\n+$publisher->wait_for_catchup('regress_sub2');\n\nIMO better without the blank line, so then everything more clearly\nbelongs to this same comment.\n\n~~~\n\n6.\n+# Pre-setup for preparing subscription table in init state. Add tab_upgraded2\n+# to the publication.\n+$publisher->safe_psql('postgres',\n+ \"ALTER PUBLICATION regress_pub2 ADD TABLE tab_upgraded2\");\n+\n+$old_sub->safe_psql('postgres',\n+ \"ALTER SUBSCRIPTION regress_sub2 REFRESH PUBLICATION\");\n\nDitto. IMO better without the blank line, so then everything more\nclearly belongs to this same comment.\n\n~~~\n\n7.\n+command_ok(\n+ [\n+ 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n+ '-D', $new_sub->data_dir, '-b', $oldbindir,\n+ '-B', $newbindir, '-s', $new_sub->host,\n+ '-p', $old_sub->port, '-P', $new_sub->port,\n+ $mode\n+ ],\n+ 'run of pg_upgrade for old instance when the subscription tables are\nin init/ready state'\n+);\n\nMaybe those 'command_ok' args can be formatted neatly (like you've\ndone later for the 'command_checks_all').\n\n~~~\n\n8.\n+# ------------------------------------------------------\n+# Check that the data inserted to the publisher when the subscriber\nis down will\n+# be replicated to the new subscriber once the new subscriber is started.\n+# ------------------------------------------------------\n\n8a.\nSUGGESTION\n...when the new subscriber is down will be replicated once it is started.\n\n~\n\n8b.\nI thought this main comment should also say something like \"Also check\nthat the old subscription 
states and relation origins are all\npreserved.\"\n\n~~~\n\n9.\n+$publisher->safe_psql('postgres', \"INSERT INTO tab_upgraded1 VALUES(51)\");\n+$publisher->safe_psql('postgres', \"INSERT INTO tab_upgraded2 VALUES(1)\");\n\nIMO it is tidier to combine multiple DDLS whenever you can.\n\n~~~\n\n10.\n+# The subscription's running status should be preserved\n+$result =\n+ $new_sub->safe_psql('postgres',\n+ \"SELECT subenabled FROM pg_subscription ORDER BY subname\");\n+is($result, qq(t\n+f),\n+ \"check that the subscription's running status are preserved\"\n+);\n\nI felt this was a bit too tricky. It might be more readable to do 2\nseparate SELECTs with explicit subnames. Alternatively, leave the code\nas-is but improve the comment to explicitly say something like:\n\n# old subscription regress_sub was enabled\n# old subscription regress_sub1 was disabled\n\n~~~\n\n11.\n+# ------------------------------------------------------\n+# Check that pg_upgrade fails when max_replication_slots configured in the new\n+# cluster is less than number of subscriptions in the old cluster.\n+# ------------------------------------------------------\n+my $new_sub1 = PostgreSQL::Test::Cluster->new('new_sub1');\n+$new_sub1->init;\n+$new_sub1->append_conf('postgresql.conf', \"max_replication_slots = 0\");\n+\n+$old_sub->stop;\n\n/than number/than the number/\n\nShould that old_sub->stop have been part of the previous cleanup steps?\n\n~~~\n\n12.\n+$old_sub->start;\n+\n+# Drop the subscription\n+$old_sub->safe_psql('postgres', \"DROP SUBSCRIPTION regress_sub2\");\n\nMaybe it is tidier putting that 'start' below the comment.\n\n~~~\n\n13.\n+# ------------------------------------------------------\n+# Check that pg_upgrade refuses to run in:\n+# a) if there's a subscription with tables in a state other than 'r' (ready) or\n+# 'i' (init) and/or\n+# b) if the subscription has no replication origin.\n+# ------------------------------------------------------\n\n13a.\n/refuses to run in:/refuses to 
run if:/\n\n~\n\n13b.\n/a) if/a)/\n\n~\n\n13c.\n/b) if/b)/\n\n~~~\n\n14.\n+# Create another subscription and drop the subscription's replication origin\n+$old_sub->safe_psql('postgres',\n+ \"CREATE SUBSCRIPTION regress_sub4 CONNECTION '$connstr' PUBLICATION\nregress_pub3 WITH (enabled=false)\"\n+);\n+\n+my $subid = $old_sub->safe_psql('postgres',\n+ \"SELECT oid FROM pg_subscription WHERE subname = 'regress_sub4'\");\n+my $reporigin = 'pg_' . qq($subid);\n+\n+# Drop the subscription's replication origin\n+$old_sub->safe_psql('postgres',\n+ \"SELECT pg_replication_origin_drop('$reporigin')\");\n+\n+$old_sub->stop;\n\n14a.\nIMO better to have all this without blank lines, because it all\nbelongs to the first comment.\n\n~\n\n14b.\nThat 2nd comment \"# Drop the...\" is not required because the first\ncomment already says the same.\n\n======\nsrc/include/catalog/pg_subscription_rel.h\n\n15.\n extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,\n- XLogRecPtr sublsn);\n+ XLogRecPtr sublsn, bool upgrade);\n\nShouldn't this 'upgrade' really be 'binary_upgrade' so it better\nmatches the comment you added in that function?\n\nIf you agree, then change it here and also in the function definition.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 1 Dec 2023 16:26:43 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Dec 1, 2023 at 10:57 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are review comments for patch v21-0001\n>\n>\n> 2.\n> In this function there are a couple of errors written to the\n> \"subs_invalid.txt\" file:\n>\n> + fprintf(script, \"replication origin is missing for database:\\\"%s\\\"\n> subscription:\\\"%s\\\"\\n\",\n> + PQgetvalue(res, i, 0),\n> + PQgetvalue(res, i, 1));\n>\n> and\n>\n> + fprintf(script, \"database:\\\"%s\\\" subscription:\\\"%s\\\" schema:\\\"%s\\\"\n> relation:\\\"%s\\\" state:\\\"%s\\\" not in required state\\n\",\n> + active_db->db_name,\n> + PQgetvalue(res, i, 0),\n> + PQgetvalue(res, i, 1),\n> + PQgetvalue(res, i, 2),\n> + PQgetvalue(res, i, 3));\n>\n> The format of those messages is not consistent. It could be improved\n> in a number of ways to make them more similar. e.g. below.\n>\n> SUGGESTION #1\n> the replication origin is missing for database:\\\"%s\\\" subscription:\\\"%s\\\"\\n\n> the table sync state \\\"%s\\\" is not allowed for database:\\\"%s\\\"\n> subscription:\\\"%s\\\" schema:\\\"%s\\\" relation:\\\"%s\\\"\\n\n>\n\n+1. Shall we keep 'the' as 'The' in the message? Few other messages in\nthe same file start with capital letter.\n\n>\n> 4.\n> +# Create a subscription in enabled state before upgrade\n> +$publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\n> +$old_sub->safe_psql('postgres',\n> + \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr' PUBLICATION\n> regress_pub1\"\n> +);\n> +$old_sub->wait_for_subscription_sync($publisher, 'regress_sub1');\n>\n> That publication has an empty set of tables. 
Should there be some\n> comment to explain why it is OK like this?\n>\n\nI think we can add a comment to state the intention of the overall test\nwhich this is part of.\n\n>\n> 10.\n> +# The subscription's running status should be preserved\n> +$result =\n> + $new_sub->safe_psql('postgres',\n> + \"SELECT subenabled FROM pg_subscription ORDER BY subname\");\n> +is($result, qq(t\n> +f),\n> + \"check that the subscription's running status are preserved\"\n> +);\n>\n> I felt this was a bit too tricky. It might be more readable to do 2\n> separate SELECTs with explicit subnames. Alternatively, leave the code\n> as-is but improve the comment to explicitly say something like:\n>\n> # old subscription regress_sub was enabled\n> # old subscription regress_sub1 was disabled\n>\n\nI don't see the need to have separate queries though adding comments\nis a good idea.\n\n>\n> 15.\n> extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,\n> - XLogRecPtr sublsn);\n> + XLogRecPtr sublsn, bool upgrade);\n>\n> Shouldn't this 'upgrade' really be 'binary_upgrade' so it better\n> matches the comment you added in that function?\n>\n\nIt is better to name this parameter as retain_lock and then explain it\nin the function header. The bigger problem with this change is that we\nshould release the other lock\n(LockSharedObject(SubscriptionRelationId, subid, 0, AccessShareLock);)\ntaken in the function as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Dec 2023 14:45:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, 1 Dec 2023 at 10:57, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are review comments for patch v21-0001\n>\n> ======\n> src/bin/pg_upgrade/check.c\n>\n> 1. check_old_cluster_subscription_state\n>\n> +/*\n> + * check_old_cluster_subscription_state()\n> + *\n> + * Verify that each of the subscriptions has all their corresponding tables in\n> + * i (initialize) or r (ready).\n> + */\n> +static void\n> +check_old_cluster_subscription_state(void)\n>\n> Function comment should also mention it also validates the origin.\n\nModified\n\n> ~~~\n>\n> 2.\n> In this function there are a couple of errors written to the\n> \"subs_invalid.txt\" file:\n>\n> + fprintf(script, \"replication origin is missing for database:\\\"%s\\\"\n> subscription:\\\"%s\\\"\\n\",\n> + PQgetvalue(res, i, 0),\n> + PQgetvalue(res, i, 1));\n>\n> and\n>\n> + fprintf(script, \"database:\\\"%s\\\" subscription:\\\"%s\\\" schema:\\\"%s\\\"\n> relation:\\\"%s\\\" state:\\\"%s\\\" not in required state\\n\",\n> + active_db->db_name,\n> + PQgetvalue(res, i, 0),\n> + PQgetvalue(res, i, 1),\n> + PQgetvalue(res, i, 2),\n> + PQgetvalue(res, i, 3));\n>\n> The format of those messages is not consistent. It could be improved\n> in a number of ways to make them more similar. e.g. 
below.\n>\n> SUGGESTION #1\n> the replication origin is missing for database:\\\"%s\\\" subscription:\\\"%s\\\"\\n\n> the table sync state \\\"%s\\\" is not allowed for database:\\\"%s\\\"\n> subscription:\\\"%s\\\" schema:\\\"%s\\\" relation:\\\"%s\\\"\\n\n>\n> SUGGESTION #2\n> database:\\\"%s\\\" subscription:\\\"%s\\\" -- replication origin is missing\\n\n> database:\\\"%s\\\" subscription:\\\"%s\\\" schema:\\\"%s\\\" relation:\\\"%s\\\" --\n> upgrade when table sync state is \\\"%s\\\" is not supported\\n\n>\n> etc.\n\nModified based on SUGGESTION#1\n\n> ======\n> src/bin/pg_upgrade/t/004_subscription.pl\n>\n> 3.\n> +# Initial setup\n> +$publisher->safe_psql('postgres', \"CREATE TABLE tab_upgraded1(id int)\");\n> +$publisher->safe_psql('postgres', \"CREATE TABLE tab_upgraded2(id int)\");\n> +$old_sub->safe_psql('postgres', \"CREATE TABLE tab_upgraded1(id int)\");\n> +$old_sub->safe_psql('postgres', \"CREATE TABLE tab_upgraded2(id int)\");\n>\n> IMO it is tidier to combine multiple DDLS whenever you can.\n\nModified\n\n> ~~~\n>\n> 4.\n> +# Create a subscription in enabled state before upgrade\n> +$publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\n> +$old_sub->safe_psql('postgres',\n> + \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr' PUBLICATION\n> regress_pub1\"\n> +);\n> +$old_sub->wait_for_subscription_sync($publisher, 'regress_sub1');\n>\n> That publication has an empty set of tables. Should there be some\n> comment to explain why it is OK like this?\n\nThis test is just to verify that the enabled subscriptions will be\nenabled after upgrade; we don't need data for this. Data validation\nhappens with a different subscription. 
Modified comments\n\n> ~~~\n>\n> 5.\n> +# Wait till the table tab_upgraded1 reaches 'ready' state\n> +my $synced_query =\n> + \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'r'\";\n> +$old_sub->poll_query_until('postgres', $synced_query)\n> + or die \"Timed out while waiting for the table to reach ready state\";\n> +\n> +$publisher->safe_psql('postgres',\n> + \"INSERT INTO tab_upgraded1 VALUES (generate_series(1,50))\"\n> +);\n> +$publisher->wait_for_catchup('regress_sub2');\n>\n> IMO better without the blank line, so then everything more clearly\n> belongs to this same comment.\n\nModified\n\n> ~~~\n>\n> 6.\n> +# Pre-setup for preparing subscription table in init state. Add tab_upgraded2\n> +# to the publication.\n> +$publisher->safe_psql('postgres',\n> + \"ALTER PUBLICATION regress_pub2 ADD TABLE tab_upgraded2\");\n> +\n> +$old_sub->safe_psql('postgres',\n> + \"ALTER SUBSCRIPTION regress_sub2 REFRESH PUBLICATION\");\n>\n> Ditto. IMO better without the blank line, so then everything more\n> clearly belongs to this same comment.\n\nModified\n\n> ~~~\n>\n> 7.\n> +command_ok(\n> + [\n> + 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n> + '-D', $new_sub->data_dir, '-b', $oldbindir,\n> + '-B', $newbindir, '-s', $new_sub->host,\n> + '-p', $old_sub->port, '-P', $new_sub->port,\n> + $mode\n> + ],\n> + 'run of pg_upgrade for old instance when the subscription tables are\n> in init/ready state'\n> +);\n>\n> Maybe those 'command_ok' args can be formatted neatly (like you've\n> done later for the 'command_checks_all').\n\nThis is based on the run from pgperltidy. Even if I format it,\npgperltidy reverts the formatting that I have done. I have seen the\nsame is the case with other upgrade commands in a few places. 
So not\nmaking any changes for this.\n\n> ~~~\n>\n> 8.\n> +# ------------------------------------------------------\n> +# Check that the data inserted to the publisher when the subscriber\n> is down will\n> +# be replicated to the new subscriber once the new subscriber is started.\n> +# ------------------------------------------------------\n>\n> 8a.\n> SUGGESTION\n> ...when the new subscriber is down will be replicated once it is started.\n>\n\nModified\n\n> ~\n>\n> 8b.\n> I thought this main comment should also say something like \"Also check\n> that the old subscription states and relations origins are all\n> preserved.\"\n\nModified\n\n> ~~~\n>\n> 9.\n> +$publisher->safe_psql('postgres', \"INSERT INTO tab_upgraded1 VALUES(51)\");\n> +$publisher->safe_psql('postgres', \"INSERT INTO tab_upgraded2 VALUES(1)\");\n>\n> IMO it is tidier to combine multiple DDLS whenever you can.\n\nModified\n\n> ~~~\n>\n> 10.\n> +# The subscription's running status should be preserved\n> +$result =\n> + $new_sub->safe_psql('postgres',\n> + \"SELECT subenabled FROM pg_subscription ORDER BY subname\");\n> +is($result, qq(t\n> +f),\n> + \"check that the subscription's running status are preserved\"\n> +);\n>\n> I felt this was a bit too tricky. It might be more readable to do 2\n> separate SELECTs with explicit subnames. 
Alternatively, leave the code\n> as-is but improve the comment to explicitly say something like:\n>\n> # old subscription regress_sub was enabled\n> # old subscription regress_sub1 was disabled\n\nModified to add comments.\n\n> ~~~\n>\n> 11.\n> +# ------------------------------------------------------\n> +# Check that pg_upgrade fails when max_replication_slots configured in the new\n> +# cluster is less than number of subscriptions in the old cluster.\n> +# ------------------------------------------------------\n> +my $new_sub1 = PostgreSQL::Test::Cluster->new('new_sub1');\n> +$new_sub1->init;\n> +$new_sub1->append_conf('postgresql.conf', \"max_replication_slots = 0\");\n> +\n> +$old_sub->stop;\n>\n> /than number/than the number/\n>\n> Should that old_sub->stop have been part of the previous cleanup steps?\n\nModified\n\n> ~~~\n>\n> 12.\n> +$old_sub->start;\n> +\n> +# Drop the subscription\n> +$old_sub->safe_psql('postgres', \"DROP SUBSCRIPTION regress_sub2\");\n>\n> Maybe it is tidier puttin that 'start' below the comment.\n\nModified\n\n> ~~~\n>\n> 13.\n> +# ------------------------------------------------------\n> +# Check that pg_upgrade refuses to run in:\n> +# a) if there's a subscription with tables in a state other than 'r' (ready) or\n> +# 'i' (init) and/or\n> +# b) if the subscription has no replication origin.\n> +# ------------------------------------------------------\n>\n> 13a.\n> /refuses to run in:/refuses to run if:/\n\nModified\n\n> ~\n>\n> 13b.\n> /a) if/a)/\n\nModified\n\n> ~\n>\n> 13c.\n> /b) if/b)/\n\nModified\n\n> ~~~\n>\n> 14.\n> +# Create another subscription and drop the subscription's replication origin\n> +$old_sub->safe_psql('postgres',\n> + \"CREATE SUBSCRIPTION regress_sub4 CONNECTION '$connstr' PUBLICATION\n> regress_pub3 WITH (enabled=false)\"\n> +);\n> +\n> +my $subid = $old_sub->safe_psql('postgres',\n> + \"SELECT oid FROM pg_subscription WHERE subname = 'regress_sub4'\");\n> +my $reporigin = 'pg_' . 
qq($subid);\n> +\n> +# Drop the subscription's replication origin\n> +$old_sub->safe_psql('postgres',\n> + \"SELECT pg_replication_origin_drop('$reporigin')\");\n> +\n> +$old_sub->stop;\n>\n> 14a.\n> IMO better to have all this without blank lines, because it all\n> belongs to the first comment.\n\nModified\n\n>\n> 14b.\n> That 2nd comment \"# Drop the...\" is not required because the first\n> comment already says the same.\n\nModified\n\n> ======\n> src/include/catalog/pg_subscription_rel.h\n>\n> 15.\n> extern void AddSubscriptionRelState(Oid subid, Oid relid, char state,\n> - XLogRecPtr sublsn);\n> + XLogRecPtr sublsn, bool upgrade);\n>\n> Shouldn't this 'upgrade' really be 'binary_upgrade' so it better\n> matches the comment you added in that function?\n>\n> If you agree, then change it here and also in the function definition.\n\nModified it to retain_lock based on suggestions from [1]\n\nThe attached v22 version patch has the changes for the same.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1KFEHhJEo43k_qUpC0Eod34zVq%3DKae34koEDrPFXzeeJg%40mail.gmail.com\n\nRegards,\nVignesh",
"msg_date": "Fri, 1 Dec 2023 23:24:41 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Dec 1, 2023 at 11:24 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> The attached v22 version patch has the changes for the same.\n>\n\nI have made minor changes in the comments and code at various places.\nSee and let me know if you are not happy with the changes. I think\nunless there are more suggestions or comments, we can proceed with\ncommitting it.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 4 Dec 2023 16:30:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Dec 04, 2023 at 04:30:49PM +0530, Amit Kapila wrote:\n> I have made minor changes in the comments and code at various places.\n> See and let me know if you are not happy with the changes. I think\n> unless there are more suggestions or comments, we can proceed with\n> committing it.\n\nYeah. I am planning to look more closely at what you have here, and\nit is going to take me a bit more time though (some more stuff planned\nfor next CF, an upcoming conference and end/beginning-of-year\nvacations), but I think that targetting the beginning of next CF in\nJanuary would be OK.\n\nOverall, I have the impression that the patch looks pretty solid, with\na restriction in place for \"init\" and \"ready\" relations, while there\nare tests to check all the states that we expect. Seeing coverage\nabout all that makes me a happy hacker.\n\n+ * If retain_lock is true, then don't release the locks taken in this function.\n+ * We normally release the locks at the end of transaction but in binary-upgrade\n+ * mode, we expect to release those immediately.\n\nI think that this should be documented in pg_upgrade_support.c where\nthe caller expects the locks to be released, and why these should be\nreleased. There is a risk that this comment becomes obsolete if\nAddSubscriptionRelState() with locks released is called in a different\ncode path. Anyway, I am not sure to get why this is OK, or even\nnecessary. It seems like a good practice to keep the locks on the\nsubscription until the transaction that updates its state. 
If there's\na specific reason explaining why that's better, the patch should tell\nwhy.\n\n+ * However, this shouldn't be a problem as the upgrade ensures\n+ * that all the transactions were replicated before upgrading the\n+ * publisher.\n\nThis wording looks a bit confusing to me, as \"the upgrade\" could refer\nto the upgrade of a subscriber, but what we want to tell is that the\nreplay of the transactions is enforced when doing a publisher upgrade.\nI'd suggest something like \"the upgrade of the publisher ensures that\nall the transactions were replicated before upgrading it\".\n\n+my $result = $old_sub->safe_psql('postgres',\n+ \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'i'\");\n+is($result, qq(t), \"Check that the table is in init state\");\n\nHmm. Not sure that this is safe. Shouldn't this be a\npoll_query_until(), polling that the state of the relation is what we\nwant it to be after requesting a refresh of the publication on the\nsubscriber?\n--\nMichael",
"msg_date": "Tue, 5 Dec 2023 14:26:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Dec 5, 2023 at 10:56 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 04, 2023 at 04:30:49PM +0530, Amit Kapila wrote:\n> > I have made minor changes in the comments and code at various places.\n> > See and let me know if you are not happy with the changes. I think\n> > unless there are more suggestions or comments, we can proceed with\n> > committing it.\n>\n> Yeah. I am planning to look more closely at what you have here, and\n> it is going to take me a bit more time though (some more stuff planned\n> for next CF, an upcoming conference and end/beginning-of-year\n> vacations), but I think that targetting the beginning of next CF in\n> January would be OK.\n>\n> Overall, I have the impression that the patch looks pretty solid, with\n> a restriction in place for \"init\" and \"ready\" relations, while there\n> are tests to check all the states that we expect. Seeing coverage\n> about all that makes me a happy hacker.\n>\n> + * If retain_lock is true, then don't release the locks taken in this function.\n> + * We normally release the locks at the end of transaction but in binary-upgrade\n> + * mode, we expect to release those immediately.\n>\n> I think that this should be documented in pg_upgrade_support.c where\n> the caller expects the locks to be released, and why these should be\n> released. There is a risk that this comment becomes obsolete if\n> AddSubscriptionRelState() with locks released is called in a different\n> code path. Anyway, I am not sure to get why this is OK, or even\n> necessary. It seems like a good practice to keep the locks on the\n> subscription until the transaction that updates its state. If there's\n> a specific reason explaining why that's better, the patch should tell\n> why.\n>\n\nIt is to be consistent with other code paths in the upgrade. We\nfollowed existing coding rules like what we do in\nbinary_upgrade_set_missing_value->SetAttrMissing(). 
The probable\ntheory is that during the upgrade we are not worried about concurrent\noperations being blocked till the transaction ends. As in this\nparticular case, we know that the apply worker won't try to sync any\nof those relations or a concurrent DDL won't try to remove it from the\npg_subscription_rel. This point is not being explicitly commented\nbecause of its similarity with the existing code.\n\n>\n> +my $result = $old_sub->safe_psql('postgres',\n> + \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'i'\");\n> +is($result, qq(t), \"Check that the table is in init state\");\n>\n> Hmm. Not sure that this is safe. Shouldn't this be a\n> poll_query_until(), polling that the state of the relation is what we\n> want it to be after requesting a refresh of the publication on the\n> subscriber?\n>\n\nThis is safe because the init state should be marked by the \"Alter\nSubscription ... Refresh ..\" command itself. What exactly makes you\nthink that such a poll would be required?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 5 Dec 2023 15:07:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Dec 4, 2023 at 8:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 1, 2023 at 11:24 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > The attached v22 version patch has the changes for the same.\n> >\n>\n> I have made minor changes in the comments and code at various places.\n> See and let me know if you are not happy with the changes. I think\n> unless there are more suggestions or comments, we can proceed with\n> committing it.\n>\n\nIt seems the patch is already close to ready-to-commit state but I've\nhad a look at the v23 patch with fresh eyes. It looks mostly good to\nme and there are some minor comments:\n\n---\n+ tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));\n+ if (!HeapTupleIsValid(tup))\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"relation %u does not exist\", relid));\n+ ReleaseSysCache(tup);\n\nGiven what we want to do here is just an existence check, isn't it\nclearer if we use SearchSysCacheExists1() instead?\n\n---\n+ query = createPQExpBuffer();\n+ appendPQExpBuffer(query, \"SELECT srsubid, srrelid,\nsrsubstate, srsublsn\"\n+ \" FROM\npg_catalog.pg_subscription_rel\"\n+ \" ORDER BY srsubid\");\n+ res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);\n+\n\nProbably we don't need to use PQExpBuffer here since the query to\nexecute is a static string.\n\n---\n+# The subscription's running status should be preserved. 
Old subscription\n+# regress_sub1 should be enabled and old subscription regress_sub2 should be\n+# disabled.\n+$result =\n+ $new_sub->safe_psql('postgres',\n+ \"SELECT subenabled FROM pg_subscription ORDER BY subname\");\n+is( $result, qq(t\n+f),\n+ \"check that the subscription's running status are preserved\");\n+\n\nHow about showing the subname along with the subenabled so that we can\ncheck if each subscription is in an expected state in case an\nerror happens?\n\n---\n+# Subscription relations should be preserved\n+$result =\n+ $new_sub->safe_psql('postgres',\n+ \"SELECT count(*) FROM pg_subscription_rel WHERE srsubid = $sub_oid\");\n+is($result, qq(2),\n+ \"there should be 2 rows in pg_subscription_rel(representing\ntab_upgraded1 and tab_upgraded2)\"\n+);\n\nIs there any reason why we check only the number of rows in\npg_subscription_rel? I guess it might be a good idea to check if table\nOIDs there are also preserved.\n\n---\n+# Enable the subscription\n+$new_sub->safe_psql('postgres', \"ALTER SUBSCRIPTION regress_sub2 ENABLE\");\n+$publisher->wait_for_catchup('regress_sub2');\n+\n\nIIUC after making the subscription regress_sub2 enabled, we will start\nthe initial table sync for the table tab_upgraded2. If so, shouldn't\nwe use wait_for_subscription_sync() instead?\n\n---\n+# Create another subscription and drop the subscription's replication origin\n+$old_sub->safe_psql('postgres',\n+ \"CREATE SUBSCRIPTION regress_sub4 CONNECTION '$connstr'\nPUBLICATION regress_pub3 WITH (enabled=false)\"\n\nIt's better to put spaces before and after '='.\n\n---\n+my $subid = $old_sub->safe_psql('postgres',\n+ \"SELECT oid FROM pg_subscription WHERE subname = 'regress_sub4'\");\n\nI think we can reuse $sub_oid.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 7 Dec 2023 10:49:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Dec 5, 2023 at 6:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 5, 2023 at 10:56 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, Dec 04, 2023 at 04:30:49PM +0530, Amit Kapila wrote:\n> > > I have made minor changes in the comments and code at various places.\n> > > See and let me know if you are not happy with the changes. I think\n> > > unless there are more suggestions or comments, we can proceed with\n> > > committing it.\n> >\n> > Yeah. I am planning to look more closely at what you have here, and\n> > it is going to take me a bit more time though (some more stuff planned\n> > for next CF, an upcoming conference and end/beginning-of-year\n> > vacations), but I think that targetting the beginning of next CF in\n> > January would be OK.\n> >\n> > Overall, I have the impression that the patch looks pretty solid, with\n> > a restriction in place for \"init\" and \"ready\" relations, while there\n> > are tests to check all the states that we expect. Seeing coverage\n> > about all that makes me a happy hacker.\n> >\n> > + * If retain_lock is true, then don't release the locks taken in this function.\n> > + * We normally release the locks at the end of transaction but in binary-upgrade\n> > + * mode, we expect to release those immediately.\n> >\n> > I think that this should be documented in pg_upgrade_support.c where\n> > the caller expects the locks to be released, and why these should be\n> > released. There is a risk that this comment becomes obsolete if\n> > AddSubscriptionRelState() with locks released is called in a different\n> > code path. Anyway, I am not sure to get why this is OK, or even\n> > necessary. It seems like a good practice to keep the locks on the\n> > subscription until the transaction that updates its state. If there's\n> > a specific reason explaining why that's better, the patch should tell\n> > why.\n> >\n>\n> It is to be consistent with other code paths in the upgrade. 
We\n> followed existing coding rules like what we do in\n> binary_upgrade_set_missing_value->SetAttrMissing(). The probable\n> theory is that during the upgrade we are not worried about concurrent\n> operations being blocked till the transaction ends. As in this\n> particular case, we know that the apply worker won't try to sync any\n> of those relations or a concurrent DDL won't try to remove it from the\n> pg_subscription_rel. This point is not being explicitly commented\n> because of its similarity with the existing code.\n\nIt seems no problem to me with releasing locks early, I'm not sure how\nmuch it helps in better concurrency as it acquires lower level locks\nsuch as AccessShareLock and RowExclusiveLock though (SetAttrMissing()\nacquires AccessExclusiveLock on the table on the other hand).\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Dec 2023 10:55:42 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Dec 7, 2023 at 7:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Dec 5, 2023 at 6:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Dec 5, 2023 at 10:56 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Mon, Dec 04, 2023 at 04:30:49PM +0530, Amit Kapila wrote:\n> > > > I have made minor changes in the comments and code at various places.\n> > > > See and let me know if you are not happy with the changes. I think\n> > > > unless there are more suggestions or comments, we can proceed with\n> > > > committing it.\n> > >\n> > > Yeah. I am planning to look more closely at what you have here, and\n> > > it is going to take me a bit more time though (some more stuff planned\n> > > for next CF, an upcoming conference and end/beginning-of-year\n> > > vacations), but I think that targetting the beginning of next CF in\n> > > January would be OK.\n> > >\n> > > Overall, I have the impression that the patch looks pretty solid, with\n> > > a restriction in place for \"init\" and \"ready\" relations, while there\n> > > are tests to check all the states that we expect. Seeing coverage\n> > > about all that makes me a happy hacker.\n> > >\n> > > + * If retain_lock is true, then don't release the locks taken in this function.\n> > > + * We normally release the locks at the end of transaction but in binary-upgrade\n> > > + * mode, we expect to release those immediately.\n> > >\n> > > I think that this should be documented in pg_upgrade_support.c where\n> > > the caller expects the locks to be released, and why these should be\n> > > released. There is a risk that this comment becomes obsolete if\n> > > AddSubscriptionRelState() with locks released is called in a different\n> > > code path. Anyway, I am not sure to get why this is OK, or even\n> > > necessary. It seems like a good practice to keep the locks on the\n> > > subscription until the transaction that updates its state. 
If there's\n> > > a specific reason explaining why that's better, the patch should tell\n> > > why.\n> > >\n> >\n> > It is to be consistent with other code paths in the upgrade. We\n> > followed existing coding rules like what we do in\n> > binary_upgrade_set_missing_value->SetAttrMissing(). The probable\n> > theory is that during the upgrade we are not worried about concurrent\n> > operations being blocked till the transaction ends. As in this\n> > particular case, we know that the apply worker won't try to sync any\n> > of those relations or a concurrent DDL won't try to remove it from the\n> > pg_subscription_rel. This point is not being explicitly commented\n> > because of its similarity with the existing code.\n>\n> It seems no problem to me with releasing locks early, I'm not sure how\n> much it helps in better concurrency as it acquires lower level locks\n> such as AccessShareLock and RowExclusiveLock though (SetAttrMissing()\n> acquires AccessExclusiveLock on the table on the other hand).\n>\n\nTrue, but we have kept it that way from the consistency point of view\nas well. We can change it if you think otherwise.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Dec 2023 07:53:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thursday, December 7, 2023 10:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Dec 7, 2023 at 7:26 AM Masahiko Sawada <sawada.mshk@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Dec 5, 2023 at 6:37 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Tue, Dec 5, 2023 at 10:56 AM Michael Paquier <michael@paquier.xyz>\r\n> wrote:\r\n> > > >\r\n> > > > On Mon, Dec 04, 2023 at 04:30:49PM +0530, Amit Kapila wrote:\r\n> > > > > I have made minor changes in the comments and code at various\r\n> places.\r\n> > > > > See and let me know if you are not happy with the changes. I\r\n> > > > > think unless there are more suggestions or comments, we can\r\n> > > > > proceed with committing it.\r\n> > > >\r\n> > > > Yeah. I am planning to look more closely at what you have here,\r\n> > > > and it is going to take me a bit more time though (some more stuff\r\n> > > > planned for next CF, an upcoming conference and\r\n> > > > end/beginning-of-year vacations), but I think that targetting the\r\n> > > > beginning of next CF in January would be OK.\r\n> > > >\r\n> > > > Overall, I have the impression that the patch looks pretty solid,\r\n> > > > with a restriction in place for \"init\" and \"ready\" relations,\r\n> > > > while there are tests to check all the states that we expect.\r\n> > > > Seeing coverage about all that makes me a happy hacker.\r\n> > > >\r\n> > > > + * If retain_lock is true, then don't release the locks taken in this function.\r\n> > > > + * We normally release the locks at the end of transaction but in\r\n> > > > + binary-upgrade\r\n> > > > + * mode, we expect to release those immediately.\r\n> > > >\r\n> > > > I think that this should be documented in pg_upgrade_support.c\r\n> > > > where the caller expects the locks to be released, and why these\r\n> > > > should be released. 
There is a risk that this comment becomes\r\n> > > > obsolete if\r\n> > > > AddSubscriptionRelState() with locks released is called in a\r\n> > > > different code path. Anyway, I am not sure to get why this is OK,\r\n> > > > or even necessary. It seems like a good practice to keep the\r\n> > > > locks on the subscription until the transaction that updates its\r\n> > > > state. If there's a specific reason explaining why that's better,\r\n> > > > the patch should tell why.\r\n> > > >\r\n> > >\r\n> > > It is to be consistent with other code paths in the upgrade. We\r\n> > > followed existing coding rules like what we do in\r\n> > > binary_upgrade_set_missing_value->SetAttrMissing(). The probable\r\n> > > theory is that during the upgrade we are not worried about\r\n> > > concurrent operations being blocked till the transaction ends. As in\r\n> > > this particular case, we know that the apply worker won't try to\r\n> > > sync any of those relations or a concurrent DDL won't try to remove\r\n> > > it from the pg_subscription_rel. 
This point is not being explicitly\r\n> > > commented because of its similarity with the existing code.\r\n> >\r\n> > It seems no problem to me with releasing locks early, I'm not sure how\r\n> > much it helps in better concurrency as it acquires lower level locks\r\n> > such as AccessShareLock and RowExclusiveLock though (SetAttrMissing()\r\n> > acquires AccessExclusiveLock on the table on the other hand).\r\n> >\r\n> \r\n> True, but we have kept it that way from the consistency point of view as well.\r\n> We can change it if you think otherwise.\r\n\r\nI also looked into the patch and didn't find problems with the locking in\r\nAddSubscriptionRelState.\r\n\r\nAbout the concurrency stuff, the lock on the subscription object and\r\npg_subscription_rel only conflicts with ALTER/DROP SUBSCRIPTION which holds\r\nan AccessExclusiveLock, but since there are no concurrent ALTER\r\nSUBSCRIPTION cmds during upgrade, I think it's OK to release it earlier.\r\n\r\nI also thought about the cache invalidation stuff as we modified the catalog\r\nwhich will generate catcache invalidation. But the apply worker, which builds\r\nits cache based on pg_subscription_rel, is not running, and no concurrent\r\nALTER/DROP SUBSCRIPTION cmds will be executed, so it looks OK as well.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Thu, 7 Dec 2023 04:24:08 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, 5 Dec 2023 at 10:56, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 04, 2023 at 04:30:49PM +0530, Amit Kapila wrote:\n> > I have made minor changes in the comments and code at various places.\n> > See and let me know if you are not happy with the changes. I think\n> > unless there are more suggestions or comments, we can proceed with\n> > committing it.\n>\n> Yeah. I am planning to look more closely at what you have here, and\n> it is going to take me a bit more time though (some more stuff planned\n> for next CF, an upcoming conference and end/beginning-of-year\n> vacations), but I think that targetting the beginning of next CF in\n> January would be OK.\n>\n> Overall, I have the impression that the patch looks pretty solid, with\n> a restriction in place for \"init\" and \"ready\" relations, while there\n> are tests to check all the states that we expect. Seeing coverage\n> about all that makes me a happy hacker.\n>\n> + * If retain_lock is true, then don't release the locks taken in this function.\n> + * We normally release the locks at the end of transaction but in binary-upgrade\n> + * mode, we expect to release those immediately.\n>\n> I think that this should be documented in pg_upgrade_support.c where\n> the caller expects the locks to be released, and why these should be\n> released. There is a risk that this comment becomes obsolete if\n> AddSubscriptionRelState() with locks released is called in a different\n> code path. Anyway, I am not sure to get why this is OK, or even\n> necessary. It seems like a good practice to keep the locks on the\n> subscription until the transaction that updates its state. 
If there's\n> a specific reason explaining why that's better, the patch should tell\n> why.\n\nAdded comments for this.\n\n> + * However, this shouldn't be a problem as the upgrade ensures\n> + * that all the transactions were replicated before upgrading the\n> + * publisher.\n> This wording looks a bit confusing to me, as \"the upgrade\" could refer\n> to the upgrade of a subscriber, but what we want to tell is that the\n> replay of the transactions is enforced when doing a publisher upgrade.\n> I'd suggest something like \"the upgrade of the publisher ensures that\n> all the transactions were replicated before upgrading it\".\n\nModified\n\n> +my $result = $old_sub->safe_psql('postgres',\n> + \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'i'\");\n> +is($result, qq(t), \"Check that the table is in init state\");\n>\n> Hmm. Not sure that this is safe. Shouldn't this be a\n> poll_query_until(), polling that the state of the relation is what we\n> want it to be after requesting a refresh of the publication on the\n> subscriber?\n\nThis is not required as the table will be added in init state after\n\"Alter Subscription ... Refresh ..\" command itself.\n\nThanks for the comments, the attached v24 version patch has the\nchanges for the same.\n\nRegards,\nVignesh",
"msg_date": "Thu, 7 Dec 2023 16:44:36 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, 7 Dec 2023 at 07:20, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Dec 4, 2023 at 8:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Dec 1, 2023 at 11:24 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > The attached v22 version patch has the changes for the same.\n> > >\n> >\n> > I have made minor changes in the comments and code at various places.\n> > See and let me know if you are not happy with the changes. I think\n> > unless there are more suggestions or comments, we can proceed with\n> > committing it.\n> >\n>\n> It seems the patch is already close to ready-to-commit state but I've\n> had a look at the v23 patch with fresh eyes. It looks mostly good to\n> me and there are some minor comments:\n>\n> ---\n> + tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));\n> + if (!HeapTupleIsValid(tup))\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"relation %u does not exist\", relid));\n> + ReleaseSysCache(tup);\n>\n> Given what we want to do here is just an existence check, isn't it\n> clearer if we use SearchSysCacheExists1() instead?\n\nModified\n\n> ---\n> + query = createPQExpBuffer();\n> + appendPQExpBuffer(query, \"SELECT srsubid, srrelid,\n> srsubstate, srsublsn\"\n> + \" FROM\n> pg_catalog.pg_subscription_rel\"\n> + \" ORDER BY srsubid\");\n> + res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);\n> +\n>\n> Probably we don't need to use PQExpBuffer here since the query to\n> execute is a static string.\n\nModified\n\n> ---\n> +# The subscription's running status should be preserved. 
Old subscription\n> +# regress_sub1 should be enabled and old subscription regress_sub2 should be\n> +# disabled.\n> +$result =\n> + $new_sub->safe_psql('postgres',\n> + \"SELECT subenabled FROM pg_subscription ORDER BY subname\");\n> +is( $result, qq(t\n> +f),\n> + \"check that the subscription's running status are preserved\");\n> +\n>\n> How about showing the subname along with the subenabled so that we can\n> check if each subscription is in an expected state in case an\n> error happens?\n\nModified\n\n> ---\n> +# Subscription relations should be preserved\n> +$result =\n> + $new_sub->safe_psql('postgres',\n> + \"SELECT count(*) FROM pg_subscription_rel WHERE srsubid = $sub_oid\");\n> +is($result, qq(2),\n> + \"there should be 2 rows in pg_subscription_rel(representing\n> tab_upgraded1 and tab_upgraded2)\"\n> +);\n>\n> Is there any reason why we check only the number of rows in\n> pg_subscription_rel? I guess it might be a good idea to check if table\n> OIDs there are also preserved.\n\nModified\n\n> ---\n> +# Enable the subscription\n> +$new_sub->safe_psql('postgres', \"ALTER SUBSCRIPTION regress_sub2 ENABLE\");\n> +$publisher->wait_for_catchup('regress_sub2');\n> +\n>\n> IIUC after making the subscription regress_sub2 enabled, we will start\n> the initial table sync for the table tab_upgraded2. 
If so, shouldn't\n> we use wait_for_subscription_sync() instead?\n\nModified\n\n> ---\n> +# Create another subscription and drop the subscription's replication origin\n> +$old_sub->safe_psql('postgres',\n> + \"CREATE SUBSCRIPTION regress_sub4 CONNECTION '$connstr'\n> PUBLICATION regress_pub3 WITH (enabled=false)\"\n>\n> It's better to put spaces before and after '='.\n\nModified\n\n> ---\n> +my $subid = $old_sub->safe_psql('postgres',\n> + \"SELECT oid FROM pg_subscription WHERE subname = 'regress_sub4'\");\n>\n> I think we can reuse $sub_oid.\n\nModified\n\nThanks for the comments, the v24 version patch attached at [1] has the\nchanges for the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm27%2BB6hiCS3g3nUDpfwmTaj6YopSY5ovo2%3D__iOSpkPbA%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 7 Dec 2023 16:50:38 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Dec 7, 2023 at 8:15 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, 5 Dec 2023 at 10:56, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, Dec 04, 2023 at 04:30:49PM +0530, Amit Kapila wrote:\n> > > I have made minor changes in the comments and code at various places.\n> > > See and let me know if you are not happy with the changes. I think\n> > > unless there are more suggestions or comments, we can proceed with\n> > > committing it.\n> >\n> > Yeah. I am planning to look more closely at what you have here, and\n> > it is going to take me a bit more time though (some more stuff planned\n> > for next CF, an upcoming conference and end/beginning-of-year\n> > vacations), but I think that targetting the beginning of next CF in\n> > January would be OK.\n> >\n> > Overall, I have the impression that the patch looks pretty solid, with\n> > a restriction in place for \"init\" and \"ready\" relations, while there\n> > are tests to check all the states that we expect. Seeing coverage\n> > about all that makes me a happy hacker.\n> >\n> > + * If retain_lock is true, then don't release the locks taken in this function.\n> > + * We normally release the locks at the end of transaction but in binary-upgrade\n> > + * mode, we expect to release those immediately.\n> >\n> > I think that this should be documented in pg_upgrade_support.c where\n> > the caller expects the locks to be released, and why these should be\n> > released. There is a risk that this comment becomes obsolete if\n> > AddSubscriptionRelState() with locks released is called in a different\n> > code path. Anyway, I am not sure to get why this is OK, or even\n> > necessary. It seems like a good practice to keep the locks on the\n> > subscription until the transaction that updates its state. 
If there's\n> > a specific reason explaining why that's better, the patch should tell\n> > why.\n>\n> Added comments for this.\n>\n> > + * However, this shouldn't be a problem as the upgrade ensures\n> > + * that all the transactions were replicated before upgrading the\n> > + * publisher.\n> > This wording looks a bit confusing to me, as \"the upgrade\" could refer\n> > to the upgrade of a subscriber, but what we want to tell is that the\n> > replay of the transactions is enforced when doing a publisher upgrade.\n> > I'd suggest something like \"the upgrade of the publisher ensures that\n> > all the transactions were replicated before upgrading it\".\n>\n> Modified\n>\n> > +my $result = $old_sub->safe_psql('postgres',\n> > + \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'i'\");\n> > +is($result, qq(t), \"Check that the table is in init state\");\n> >\n> > Hmm. Not sure that this is safe. Shouldn't this be a\n> > poll_query_until(), polling that the state of the relation is what we\n> > want it to be after requesting a refresh of the publication on the\n> > subscriber?\n>\n> This is not required as the table will be added in init state after\n> \"Alter Subscription ... Refresh ..\" command itself.\n>\n> Thanks for the comments, the attached v24 version patch has the\n> changes for the same.\n\nThank you for updating the patch.\n\nHere are some minor comments:\n\n+ if (!SearchSysCacheExists1(RELOID, ObjectIdGetDatum(relid)))\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"relation %u does not exist\", relid));\n+\n\nI think the error code should be ERRCODE_UNDEFINED_TABLE, and the\nerror message should be something like \"relation with OID %u does not\nexist\". Or we might not need such checks since an undefined-object\nerror is caught by relation_open()?\n\n---\n+ /* Fetch the existing tuple. 
*/\n+ tup = SearchSysCache2(SUBSCRIPTIONNAME, MyDatabaseId,\n+ CStringGetDatum(subname));\n+ if (!HeapTupleIsValid(tup))\n+ ereport(ERROR,\n+ errcode(ERRCODE_UNDEFINED_OBJECT),\n+ errmsg(\"subscription \\\"%s\\\" does not\nexist\", subname));\n+\n+ form = (Form_pg_subscription) GETSTRUCT(tup);\n+ subid = form->oid;\n\nThe above code can be replaced with \"get_subscription_oid(subname,\nfalse)\". binary_upgrade_replorigin_advance() has the same code.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 13 Dec 2023 05:25:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, 13 Dec 2023 at 01:56, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Dec 7, 2023 at 8:15 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, 5 Dec 2023 at 10:56, Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Mon, Dec 04, 2023 at 04:30:49PM +0530, Amit Kapila wrote:\n> > > > I have made minor changes in the comments and code at various places.\n> > > > See and let me know if you are not happy with the changes. I think\n> > > > unless there are more suggestions or comments, we can proceed with\n> > > > committing it.\n> > >\n> > > Yeah. I am planning to look more closely at what you have here, and\n> > > it is going to take me a bit more time though (some more stuff planned\n> > > for next CF, an upcoming conference and end/beginning-of-year\n> > > vacations), but I think that targetting the beginning of next CF in\n> > > January would be OK.\n> > >\n> > > Overall, I have the impression that the patch looks pretty solid, with\n> > > a restriction in place for \"init\" and \"ready\" relations, while there\n> > > are tests to check all the states that we expect. Seeing coverage\n> > > about all that makes me a happy hacker.\n> > >\n> > > + * If retain_lock is true, then don't release the locks taken in this function.\n> > > + * We normally release the locks at the end of transaction but in binary-upgrade\n> > > + * mode, we expect to release those immediately.\n> > >\n> > > I think that this should be documented in pg_upgrade_support.c where\n> > > the caller expects the locks to be released, and why these should be\n> > > released. There is a risk that this comment becomes obsolete if\n> > > AddSubscriptionRelState() with locks released is called in a different\n> > > code path. Anyway, I am not sure to get why this is OK, or even\n> > > necessary. It seems like a good practice to keep the locks on the\n> > > subscription until the transaction that updates its state. 
If there's\n> > > a specific reason explaining why that's better, the patch should tell\n> > > why.\n> >\n> > Added comments for this.\n> >\n> > > + * However, this shouldn't be a problem as the upgrade ensures\n> > > + * that all the transactions were replicated before upgrading the\n> > > + * publisher.\n> > > This wording looks a bit confusing to me, as \"the upgrade\" could refer\n> > > to the upgrade of a subscriber, but what we want to tell is that the\n> > > replay of the transactions is enforced when doing a publisher upgrade.\n> > > I'd suggest something like \"the upgrade of the publisher ensures that\n> > > all the transactions were replicated before upgrading it\".\n> >\n> > Modified\n> >\n> > > +my $result = $old_sub->safe_psql('postgres',\n> > > + \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'i'\");\n> > > +is($result, qq(t), \"Check that the table is in init state\");\n> > >\n> > > Hmm. Not sure that this is safe. Shouldn't this be a\n> > > poll_query_until(), polling that the state of the relation is what we\n> > > want it to be after requesting a fresh of the publication on the\n> > > subscriber?\n> >\n> > This is not required as the table will be added in init state after\n> > \"Alter Subscription ... Refresh ..\" command itself.\n> >\n> > Thanks for the comments, the attached v24 version patch has the\n> > changes for the same.\n>\n> Thank you for updating the patch.\n>\n> Here are some minor comments:\n>\n> + if (!SearchSysCacheExists1(RELOID, ObjectIdGetDatum(relid)))\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"relation %u does not exist\", relid));\n> +\n>\n> I think the error code should be ERRCODE_UNDEFINED_TABLE, and the\n> error message should be something like \"relation with OID %u does not\n> exist\". 
Or we might not need such checks since an undefined-object\n> error is caught by relation_open()?\n\nI have removed this as it will be caught by relation_open.\n\n> ---\n> + /* Fetch the existing tuple. */\n> + tup = SearchSysCache2(SUBSCRIPTIONNAME, MyDatabaseId,\n> + CStringGetDatum(subname));\n> + if (!HeapTupleIsValid(tup))\n> + ereport(ERROR,\n> + errcode(ERRCODE_UNDEFINED_OBJECT),\n> + errmsg(\"subscription \\\"%s\\\" does not\n> exist\", subname));\n> +\n> + form = (Form_pg_subscription) GETSTRUCT(tup);\n> + subid = form->oid;\n\nModified\n\n> The above code can be replaced with \"get_subscription_oid(subname,\n> false)\". binary_upgrade_replorigin_advance() has the same code.\n\nModified\n\nThanks for the comments, the attached v25 version patch has the\nchanges for the same.\n\nRegards,\nVignesh",
"msg_date": "Wed, 13 Dec 2023 12:09:25 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Dec 13, 2023 at 12:09 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for the comments, the attached v25 version patch has the\n> changes for the same.\n>\n\nI have looked at it again and made some cosmetic changes like changing\nsome comments and a minor change in one of the error messages. See, if\nthe changes look okay to you.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 28 Dec 2023 15:59:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, 28 Dec 2023 at 15:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 13, 2023 at 12:09 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks for the comments, the attached v25 version patch has the\n> > changes for the same.\n> >\n>\n> I have looked at it again and made some cosmetic changes like changing\n> some comments and a minor change in one of the error messages. See, if\n> the changes look okay to you.\n\nThanks, the changes look good.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 29 Dec 2023 14:26:30 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Dec 29, 2023 at 2:26 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, 28 Dec 2023 at 15:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Dec 13, 2023 at 12:09 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Thanks for the comments, the attached v25 version patch has the\n> > > changes for the same.\n> > >\n> >\n> > I have looked at it again and made some cosmetic changes like changing\n> > some comments and a minor change in one of the error messages. See, if\n> > the changes look okay to you.\n>\n> Thanks, the changes look good.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 2 Jan 2024 15:58:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Jan 02, 2024 at 03:58:25PM +0530, Amit Kapila wrote:\n> On Fri, Dec 29, 2023 at 2:26 PM vignesh C <vignesh21@gmail.com> wrote:\n>> Thanks, the changes look good.\n> \n> Pushed.\n\nYeah! Thanks Amit and everybody involved here! Thanks also to Julien\nfor raising the thread and the problem, to start with.\n--\nMichael",
"msg_date": "Wed, 3 Jan 2024 09:51:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Jan 3, 2024 at 6:21 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jan 02, 2024 at 03:58:25PM +0530, Amit Kapila wrote:\n> > On Fri, Dec 29, 2023 at 2:26 PM vignesh C <vignesh21@gmail.com> wrote:\n> >> Thanks, the changes look good.\n> >\n> > Pushed.\n>\n> Yeah! Thanks Amit and everybody involved here! Thanks also to Julien\n> for raising the thread and the problem, to start with.\n>\n\nI think the next possible step here is to document how to upgrade the\nlogical replication nodes as previously discussed in this thread [1].\nIIRC, there were a few issues with the steps mentioned but if we want\nto document those we can start a separate thread for it as that\ninvolves both publishers and subscribers.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm2pe7SoOGtRkrTNsnZPnaaY%2B2iHC40HBYCSLYmyRg0wSw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 3 Jan 2024 11:24:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Jan 03, 2024 at 11:24:50AM +0530, Amit Kapila wrote:\n> I think the next possible step here is to document how to upgrade the\n> logical replication nodes as previously discussed in this thread [1].\n> IIRC, there were a few issues with the steps mentioned but if we want\n> to document those we can start a separate thread for it as that\n> involves both publishers and subscribers.\n> \n> [1] - https://www.postgresql.org/message-id/CALDaNm2pe7SoOGtRkrTNsnZPnaaY%2B2iHC40HBYCSLYmyRg0wSw%40mail.gmail.com\n\nYep. A second thing is whether it makes sense to have more automated\ntest coverage when it comes to the interferences between subscribers\nand publishers with more complex node structures.\n--\nMichael",
"msg_date": "Wed, 3 Jan 2024 15:03:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Jan 3, 2024 at 11:33 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jan 03, 2024 at 11:24:50AM +0530, Amit Kapila wrote:\n> > I think the next possible step here is to document how to upgrade the\n> > logical replication nodes as previously discussed in this thread [1].\n> > IIRC, there were a few issues with the steps mentioned but if we want\n> > to document those we can start a separate thread for it as that\n> > involves both publishers and subscribers.\n> >\n> > [1] - https://www.postgresql.org/message-id/CALDaNm2pe7SoOGtRkrTNsnZPnaaY%2B2iHC40HBYCSLYmyRg0wSw%40mail.gmail.com\n>\n> Yep. A second thing is whether it makes sense to have more automated\n> test coverage when it comes to the interferences between subscribers\n> and publishers with more complex node structures.\n>\n\nI think it would be good to finish the pending patch to improve the\nIsBinaryUpgrade check [1] which we decided to do once this patch is\nready. Would you like to take that up or do you want me to finish it?\n\n[1] - https://www.postgresql.org/message-id/ZU2TeVkUg5qEi7Oy%40paquier.xyz\n[2] - https://www.postgresql.org/message-id/ZVQtUTdJACnsbbpd%40paquier.xyz\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 3 Jan 2024 15:18:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Jan 03, 2024 at 03:18:50PM +0530, Amit Kapila wrote:\n> I think it would be good to finish the pending patch to improve the\n> IsBinaryUpgrade check [1] which we decided to do once this patch is\n> ready. Would you like to take that up or do you want me to finish it?\n>\n> [1] - https://www.postgresql.org/message-id/ZU2TeVkUg5qEi7Oy%40paquier.xyz\n> [2] - https://www.postgresql.org/message-id/ZVQtUTdJACnsbbpd%40paquier.xyz\n\nYep, that's on my TODO. I can send a new version at the beginning of\nnext week. No problem.\n--\nMichael",
"msg_date": "Thu, 4 Jan 2024 08:54:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, 3 Jan 2024 at 11:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 3, 2024 at 6:21 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, Jan 02, 2024 at 03:58:25PM +0530, Amit Kapila wrote:\n> > > On Fri, Dec 29, 2023 at 2:26 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >> Thanks, the changes look good.\n> > >\n> > > Pushed.\n> >\n> > Yeah! Thanks Amit and everybody involved here! Thanks also to Julien\n> > for raising the thread and the problem, to start with.\n> >\n>\n> I think the next possible step here is to document how to upgrade the\n> logical replication nodes as previously discussed in this thread [1].\n> IIRC, there were a few issues with the steps mentioned but if we want\n> to document those we can start a separate thread for it as that\n> involves both publishers and subscribers.\n\nI have posted a patch for this at:\nhttps://www.postgresql.org/message-id/CALDaNm1_iDO6srWzntqTr0ZDVkk2whVhNKEWAvtgZBfSmuBeZQ%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 4 Jan 2024 15:09:01 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, 2 Jan 2024 at 15:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 29, 2023 at 2:26 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, 28 Dec 2023 at 15:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Dec 13, 2023 at 12:09 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > Thanks for the comments, the attached v25 version patch has the\n> > > > changes for the same.\n> > > >\n> > >\n> > > I have looked at it again and made some cosmetic changes like changing\n> > > some comments and a minor change in one of the error messages. See, if\n> > > the changes look okay to you.\n> >\n> > Thanks, the changes look good.\n> >\n>\n> Pushed.\n\nThanks for pushing this patch, I have updated the commitfest entry to\nCommitted for the same.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 4 Jan 2024 15:39:38 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Jan 03, 2024 at 03:18:50PM +0530, Amit Kapila wrote:\n> I think it would be good to finish the pending patch to improve the\n> IsBinaryUpgrade check [1] which we decided to do once this patch is\n> ready. Would you like to take that up or do you want me to finish it?\n> \n> [1] - https://www.postgresql.org/message-id/ZU2TeVkUg5qEi7Oy%40paquier.xyz\n> [2] - https://www.postgresql.org/message-id/ZVQtUTdJACnsbbpd%40paquier.xyz\n\nMy apologies for the delay, again. I have sent an update here:\nhttps://www.postgresql.org/message-id/ZZ4f3zKu0YyFndHi@paquier.xyz\n--\nMichael",
"msg_date": "Wed, 10 Jan 2024 13:42:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, 14 Feb 2024 at 09:07, Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Justin,\n>\n> > pg_upgrade/t/004_subscription.pl says\n> >\n> > |my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';\n> >\n> > ..but I think maybe it should not.\n> >\n> > When you try to use --link, it fails:\n> > https://cirrus-ci.com/task/4669494061170688\n> >\n> > |Adding \".old\" suffix to old global/pg_control ok\n> > |\n> > |If you want to start the old cluster, you will need to remove\n> > |the \".old\" suffix from\n> > /tmp/cirrus-ci-build/build/testrun/pg_upgrade/004_subscription/data/t_004_su\n> > bscription_old_sub_data/pgdata/global/pg_control.old.\n> > |Because \"link\" mode was used, the old cluster cannot be safely\n> > |started once the new cluster has been started.\n> > |...\n> > |\n> > |postgres: could not find the database system\n> > |Expected to find it in the directory\n> > \"/tmp/cirrus-ci-build/build/testrun/pg_upgrade/004_subscription/data/t_004_s\n> > ubscription_old_sub_data/pgdata\",\n> > |but could not open file\n> > \"/tmp/cirrus-ci-build/build/testrun/pg_upgrade/004_subscription/data/t_004_s\n> > ubscription_old_sub_data/pgdata/global/pg_control\": No such file or directory\n> > |# No postmaster PID for node \"old_sub\"\n> > |[19:36:01.396](0.250s) Bail out! pg_ctl start failed\n> >\n>\n> Good catch! The primal reason of the failure is to reuse the old cluster, even after\n> the successful upgrade. 
The documentation said [1]:\n>\n> >\n> If you use link mode, the upgrade will be much faster (no file copying) and use less\n> disk space, but you will not be able to access your old cluster once you start the new\n> cluster after the upgrade.\n> >\n>\n> > You could rename pg_control.old to avoid that immediate error, but that doesn't\n> > address the essential issue that \"the old cluster cannot be safely started once\n> > the new cluster has been started.\"\n>\n> Yeah, I agreed that it should be avoided to access to the old cluster after the upgrade.\n> IIUC, pg_upgrade would be run third times in 004_subscription.\n>\n> 1. successful upgrade\n> 2. failure due to the insufficient max_replication_slot\n> 3. failure because the pg_subscription_rel has 'd' state\n>\n> And old instance is reused in all of runs. Therefore, the most reasonable fix is to\n> change the ordering of tests, i.e., \"successful upgrade\" should be done at last.\n>\n> Attached patch modified the test accordingly. Also, it contains some optimizations.\n\nYour proposal to change the tests in the following order: a) failure\ndue to the insufficient max_replication_slot b) failure because the\npg_subscription_rel has 'd' state c) successful upgrade. looks good to\nme.\nI have also verified that your changes fixes the issue as the\nsuccessful upgrade is moved to the end and the old cluster is no\nlonger used after upgrade.\n\nOne minor suggestion:\nThere is an extra line break here, this can be removed:\n@@ -181,139 +310,5 @@ is($result, qq(1),\n \"check the data is synced after enabling the subscription for\nthe table that was in init state\"\n );\n\n-# cleanup\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 14 Feb 2024 01:50:08 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Jan 02, 2024 at 03:58:25PM +0530, Amit Kapila wrote:\n> Pushed.\n\npg_upgrade/t/004_subscription.pl says\n\n|my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';\n\n..but I think maybe it should not.\n\nWhen you try to use --link, it fails:\nhttps://cirrus-ci.com/task/4669494061170688\n\n|Adding \".old\" suffix to old global/pg_control ok\n|\n|If you want to start the old cluster, you will need to remove\n|the \".old\" suffix from /tmp/cirrus-ci-build/build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_old_sub_data/pgdata/global/pg_control.old.\n|Because \"link\" mode was used, the old cluster cannot be safely\n|started once the new cluster has been started.\n|...\n|\n|postgres: could not find the database system\n|Expected to find it in the directory \"/tmp/cirrus-ci-build/build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_old_sub_data/pgdata\",\n|but could not open file \"/tmp/cirrus-ci-build/build/testrun/pg_upgrade/004_subscription/data/t_004_subscription_old_sub_data/pgdata/global/pg_control\": No such file or directory\n|# No postmaster PID for node \"old_sub\"\n|[19:36:01.396](0.250s) Bail out! pg_ctl start failed\n\nYou could rename pg_control.old to avoid that immediate error, but that doesn't\naddress the essential issue that \"the old cluster cannot be safely started once\nthe new cluster has been started.\"\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 13 Feb 2024 15:05:14 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Feb 13, 2024 at 03:05:14PM -0600, Justin Pryzby wrote:\n> On Tue, Jan 02, 2024 at 03:58:25PM +0530, Amit Kapila wrote:\n> > Pushed.\n> \n> pg_upgrade/t/004_subscription.pl says\n> \n> |my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';\n> \n> ..but I think maybe it should not.\n> \n> When you try to use --link, it fails:\n> https://cirrus-ci.com/task/4669494061170688\n\nThanks. It is the kind of things we don't want to lose sight on, so I\nhave taken this occasion to create a wiki page for the open items of\n17, and added this one to it:\nhttps://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items\n--\nMichael",
"msg_date": "Wed, 14 Feb 2024 06:34:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Justin,\n\n> pg_upgrade/t/004_subscription.pl says\n> \n> |my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';\n> \n> ..but I think maybe it should not.\n> \n> When you try to use --link, it fails:\n> https://cirrus-ci.com/task/4669494061170688\n> \n> |Adding \".old\" suffix to old global/pg_control ok\n> |\n> |If you want to start the old cluster, you will need to remove\n> |the \".old\" suffix from\n> /tmp/cirrus-ci-build/build/testrun/pg_upgrade/004_subscription/data/t_004_su\n> bscription_old_sub_data/pgdata/global/pg_control.old.\n> |Because \"link\" mode was used, the old cluster cannot be safely\n> |started once the new cluster has been started.\n> |...\n> |\n> |postgres: could not find the database system\n> |Expected to find it in the directory\n> \"/tmp/cirrus-ci-build/build/testrun/pg_upgrade/004_subscription/data/t_004_s\n> ubscription_old_sub_data/pgdata\",\n> |but could not open file\n> \"/tmp/cirrus-ci-build/build/testrun/pg_upgrade/004_subscription/data/t_004_s\n> ubscription_old_sub_data/pgdata/global/pg_control\": No such file or directory\n> |# No postmaster PID for node \"old_sub\"\n> |[19:36:01.396](0.250s) Bail out! pg_ctl start failed\n> \n\nGood catch! The primal reason of the failure is to reuse the old cluster, even after\nthe successful upgrade. The documentation said [1]:\n\n>\nIf you use link mode, the upgrade will be much faster (no file copying) and use less\ndisk space, but you will not be able to access your old cluster once you start the new\ncluster after the upgrade.\n>\n\n> You could rename pg_control.old to avoid that immediate error, but that doesn't\n> address the essential issue that \"the old cluster cannot be safely started once\n> the new cluster has been started.\"\n\nYeah, I agreed that it should be avoided to access to the old cluster after the upgrade.\nIIUC, pg_upgrade would be run third times in 004_subscription.\n\n1. successful upgrade\n2. 
failure due to the insufficient max_replication_slot\n3. failure because the pg_subscription_rel has 'd' state\n\nAnd old instance is reused in all of runs. Therefore, the most reasonable fix is to \nchange the ordering of tests, i.e., \"successful upgrade\" should be done at last.\n\nAttached patch modified the test accordingly. Also, it contains some optimizations.\nThis can pass the test on my env:\n\n```\npg_upgrade]$ PG_TEST_PG_UPGRADE_MODE='--link' PG_TEST_TIMEOUT_DEFAULT=10 make check PROVE_TESTS='t/004_subscription.pl'\n...\n# +++ tap check in src/bin/pg_upgrade +++\nt/004_subscription.pl .. ok \nAll tests successful.\nFiles=1, Tests=14, 9 wallclock secs ( 0.03 usr 0.00 sys + 0.55 cusr 1.08 csys = 1.66 CPU)\nResult: PASS\n```\n\nHow do you think?\n\n[1]: https://www.postgresql.org/docs/devel/pgupgrade.html\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\nhttps://www.fujitsu.com/",
"msg_date": "Wed, 14 Feb 2024 03:37:03 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThanks for verifying the fix!\r\n\r\n> Your proposal to change the tests in the following order: a) failure\r\n> due to the insufficient max_replication_slot b) failure because the\r\n> pg_subscription_rel has 'd' state c) successful upgrade. looks good to\r\n> me.\r\n\r\nRight.\r\n\r\n> I have also verified that your changes fixes the issue as the\r\n> successful upgrade is moved to the end and the old cluster is no\r\n> longer used after upgrade.\r\n\r\nYeah, it is same as my expectation.\r\n\r\n> One minor suggestion:\r\n> There is an extra line break here, this can be removed:\r\n> @@ -181,139 +310,5 @@ is($result, qq(1),\r\n> \"check the data is synced after enabling the subscription for\r\n> the table that was in init state\"\r\n> );\r\n> \r\n> -# cleanup\r\n>\r\n\r\nRemoved.\r\n\r\nPSA a new version patch.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Wed, 14 Feb 2024 05:21:54 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 9:07 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> > pg_upgrade/t/004_subscription.pl says\n> >\n> > |my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';\n> >\n> > ..but I think maybe it should not.\n> >\n> > When you try to use --link, it fails:\n> > https://cirrus-ci.com/task/4669494061170688\n> >\n> > |Adding \".old\" suffix to old global/pg_control ok\n> > |\n> > |If you want to start the old cluster, you will need to remove\n> > |the \".old\" suffix from\n> > /tmp/cirrus-ci-build/build/testrun/pg_upgrade/004_subscription/data/t_004_su\n> > bscription_old_sub_data/pgdata/global/pg_control.old.\n> > |Because \"link\" mode was used, the old cluster cannot be safely\n> > |started once the new cluster has been started.\n> > |...\n> > |\n> > |postgres: could not find the database system\n> > |Expected to find it in the directory\n> > \"/tmp/cirrus-ci-build/build/testrun/pg_upgrade/004_subscription/data/t_004_s\n> > ubscription_old_sub_data/pgdata\",\n> > |but could not open file\n> > \"/tmp/cirrus-ci-build/build/testrun/pg_upgrade/004_subscription/data/t_004_s\n> > ubscription_old_sub_data/pgdata/global/pg_control\": No such file or directory\n> > |# No postmaster PID for node \"old_sub\"\n> > |[19:36:01.396](0.250s) Bail out! pg_ctl start failed\n> >\n>\n> Good catch! The primal reason of the failure is to reuse the old cluster, even after\n> the successful upgrade. 
The documentation said [1]:\n>\n> >\n> If you use link mode, the upgrade will be much faster (no file copying) and use less\n> disk space, but you will not be able to access your old cluster once you start the new\n> cluster after the upgrade.\n> >\n>\n> > You could rename pg_control.old to avoid that immediate error, but that doesn't\n> > address the essential issue that \"the old cluster cannot be safely started once\n> > the new cluster has been started.\"\n>\n> Yeah, I agreed that it should be avoided to access to the old cluster after the upgrade.\n> IIUC, pg_upgrade would be run third times in 004_subscription.\n>\n> 1. successful upgrade\n> 2. failure due to the insufficient max_replication_slot\n> 3. failure because the pg_subscription_rel has 'd' state\n>\n> And old instance is reused in all of runs. Therefore, the most reasonable fix is to\n> change the ordering of tests, i.e., \"successful upgrade\" should be done at last.\n>\n\nThis sounds like a reasonable way to address the reported problem.\nJustin, do let me know if you think otherwise?\n\nComment:\n===========\n*\n-# Setup an enabled subscription to verify that the running status and failover\n-# option are retained after the upgrade.\n+# Setup a subscription to verify that the failover option are retained after\n+# the upgrade.\n $publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\n $old_sub->safe_psql('postgres',\n- \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr' PUBLICATION\nregress_pub1 WITH (failover = true)\"\n+ \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr' PUBLICATION\nregress_pub1 WITH (failover = true, enabled = false)\"\n );\n\nI think it is better not to create a subscription in the early stage\nwhich we wanted to use for the success case. Let's have separate\nsubscriptions for failure and success cases. I think that will avoid\nthe newly added ALTER statements in the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 15 Feb 2024 12:49:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 03:37:03AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> Attached patch modified the test accordingly. Also, it contains some optimizations.\n> This can pass the test on my env:\n\nWhat optimizations? I can't see them, and since the patch is described\nas rearranging test cases (and therefore already difficult to read), I\nguess they should be a separate patch, or the optimizations described.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 15 Feb 2024 03:46:57 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> This sounds like a reasonable way to address the reported problem.\r\n\r\nOK, thanks!\r\n\r\n> Justin, do let me know if you think otherwise?\r\n> \r\n> Comment:\r\n> ===========\r\n> *\r\n> -# Setup an enabled subscription to verify that the running status and failover\r\n> -# option are retained after the upgrade.\r\n> +# Setup a subscription to verify that the failover option are retained after\r\n> +# the upgrade.\r\n> $publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\r\n> $old_sub->safe_psql('postgres',\r\n> - \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr' PUBLICATION\r\n> regress_pub1 WITH (failover = true)\"\r\n> + \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr'\r\n> PUBLICATION\r\n> regress_pub1 WITH (failover = true, enabled = false)\"\r\n> );\r\n> \r\n> I think it is better not to create a subscription in the early stage\r\n> which we wanted to use for the success case. Let's have separate\r\n> subscriptions for failure and success cases. I think that will avoid\r\n> the newly added ALTER statements in the patch.\r\n\r\nI made a patch to avoid creating objects as much as possible, but it\r\nmay lead some confusion. I recreated a patch for creating pub/sub\r\nand dropping them at cleanup for every test cases.\r\n\r\nPSA a new version.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Fri, 16 Feb 2024 02:52:01 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Justin,\n\nThanks for replying!\n\n> What optimizations? I can't see them, and since the patch is described\n> as rearranging test cases (and therefore already difficult to read), I\n> guess they should be a separate patch, or the optimizations described.\n\nThe basic idea was to reduce number of CREATE/DROP statement,\nbut it was changed for now - publications and subscriptions were created and\ndropped per testcases. \n\nE.g., In case of successful upgrade, below steps were done:\n\n1. create two publications\n2. create a subscription with failover = true\n3. avoid further initial sync by setting max_logical_replication_workers = 0\n4. create another subscription\n5. confirm statuses of tables are either of 'i' or 'r'\n6. run pg_upgrade\n7. confirm table statuses are preserved\n8. confirm replication origins are preserved.\n\nNew patch is available in [1].\n\n[1]: https://www.postgresql.org/message-id/TYCPR01MB12077B16EEDA360BA645B96F8F54C2%40TYCPR01MB12077.jpnprd01.prod.outlook.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\nhttps://www.fujitsu.com/\n\n\n\n",
"msg_date": "Fri, 16 Feb 2024 03:26:06 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, 16 Feb 2024 at 08:22, Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Amit,\n>\n> > This sounds like a reasonable way to address the reported problem.\n>\n> OK, thanks!\n>\n> > Justin, do let me know if you think otherwise?\n> >\n> > Comment:\n> > ===========\n> > *\n> > -# Setup an enabled subscription to verify that the running status and failover\n> > -# option are retained after the upgrade.\n> > +# Setup a subscription to verify that the failover option are retained after\n> > +# the upgrade.\n> > $publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\n> > $old_sub->safe_psql('postgres',\n> > - \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr' PUBLICATION\n> > regress_pub1 WITH (failover = true)\"\n> > + \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr'\n> > PUBLICATION\n> > regress_pub1 WITH (failover = true, enabled = false)\"\n> > );\n> >\n> > I think it is better not to create a subscription in the early stage\n> > which we wanted to use for the success case. Let's have separate\n> > subscriptions for failure and success cases. I think that will avoid\n> > the newly added ALTER statements in the patch.\n>\n> I made a patch to avoid creating objects as much as possible, but it\n> may lead some confusion. 
I recreated a patch for creating pub/sub\n> and dropping them at cleanup for every test cases.\n>\n> PSA a new version.\n\nThanks for the updated patch, few suggestions:\n1) Can we use a new publication for this subscription too so that the\npublication and subscription naming will become consistent throughout\nthe test case:\n+# Table will be in 'd' (data is being copied) state as table sync will fail\n+# because of primary key constraint error.\n+my $started_query =\n+ \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'd'\";\n+$old_sub->poll_query_until('postgres', $started_query)\n+ or die\n+ \"Timed out while waiting for the table state to become 'd' (datasync)\";\n+\n+# Create another subscription and drop the subscription's replication origin\n+$old_sub->safe_psql('postgres',\n+ \"CREATE SUBSCRIPTION regress_sub3 CONNECTION '$connstr'\nPUBLICATION regress_pub2 WITH (enabled = false)\"\n+);\n\nSo after the change it will become like subscription regress_sub3 for\npublication regress_pub3, subscription regress_sub4 for publication\nregress_pub4 and subscription regress_sub5 for publication\nregress_pub5.\n\n2) The tab_upgraded1 table can be created along with create\npublication and create subscription itself:\n$publisher->safe_psql('postgres',\n\"CREATE PUBLICATION regress_pub3 FOR TABLE tab_upgraded1\");\n$old_sub->safe_psql('postgres',\n\"CREATE SUBSCRIPTION regress_sub4 CONNECTION '$connstr' PUBLICATION\nregress_pub3 WITH (failover = true)\"\n);\n\n3) The tab_upgraded2 table can be created along with create\npublication and create subscription itself to keep it consistent:\n $publisher->safe_psql('postgres',\n- \"ALTER PUBLICATION regress_pub2 ADD TABLE tab_upgraded2\");\n+ \"CREATE PUBLICATION regress_pub4 FOR TABLE tab_upgraded2\");\n $old_sub->safe_psql('postgres',\n- \"ALTER SUBSCRIPTION regress_sub2 REFRESH PUBLICATION\");\n+ \"CREATE SUBSCRIPTION regress_sub5 CONNECTION '$connstr'\nPUBLICATION regress_pub4\"\n+);\n\nWith above fixes, 
the following can be removed:\n# Initial setup\n$publisher->safe_psql(\n'postgres', qq[\nCREATE TABLE tab_upgraded1(id int);\nCREATE TABLE tab_upgraded2(id int);\n]);\n$old_sub->safe_psql(\n'postgres', qq[\nCREATE TABLE tab_upgraded1(id int);\nCREATE TABLE tab_upgraded2(id int);\n]);\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 16 Feb 2024 09:56:43 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThanks for reviewing! PSA new version.\r\n\r\n> \r\n> Thanks for the updated patch, few suggestions:\r\n> 1) Can we use a new publication for this subscription too so that the\r\n> publication and subscription naming will become consistent throughout\r\n> the test case:\r\n> +# Table will be in 'd' (data is being copied) state as table sync will fail\r\n> +# because of primary key constraint error.\r\n> +my $started_query =\r\n> + \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'd'\";\r\n> +$old_sub->poll_query_until('postgres', $started_query)\r\n> + or die\r\n> + \"Timed out while waiting for the table state to become 'd' (datasync)\";\r\n> +\r\n> +# Create another subscription and drop the subscription's replication origin\r\n> +$old_sub->safe_psql('postgres',\r\n> + \"CREATE SUBSCRIPTION regress_sub3 CONNECTION '$connstr'\r\n> PUBLICATION regress_pub2 WITH (enabled = false)\"\r\n> +);\r\n>\r\n> So after the change it will become like subscription regress_sub3 for\r\n> publication regress_pub3, subscription regress_sub4 for publication\r\n> regress_pub4 and subscription regress_sub5 for publication\r\n> regress_pub5.\r\n\r\nA new publication was defined.\r\n\r\n> 2) The tab_upgraded1 table can be created along with create\r\n> publication and create subscription itself:\r\n> $publisher->safe_psql('postgres',\r\n> \"CREATE PUBLICATION regress_pub3 FOR TABLE tab_upgraded1\");\r\n> $old_sub->safe_psql('postgres',\r\n> \"CREATE SUBSCRIPTION regress_sub4 CONNECTION '$connstr' PUBLICATION\r\n> regress_pub3 WITH (failover = true)\"\r\n> );\r\n\r\nThe definition of tab_upgraded1 was moved to the place you pointed.\r\n\r\n> 3) The tab_upgraded2 table can be created along with create\r\n> publication and create subscription itself to keep it consistent:\r\n> $publisher->safe_psql('postgres',\r\n> - \"ALTER PUBLICATION regress_pub2 ADD TABLE tab_upgraded2\");\r\n> + \"CREATE PUBLICATION regress_pub4 FOR TABLE 
tab_upgraded2\");\r\n> $old_sub->safe_psql('postgres',\r\n> - \"ALTER SUBSCRIPTION regress_sub2 REFRESH PUBLICATION\");\r\n> + \"CREATE SUBSCRIPTION regress_sub5 CONNECTION '$connstr'\r\n> PUBLICATION regress_pub4\"\r\n> +);\r\n\r\nDitto.\r\n\r\n> With above fixes, the following can be removed:\r\n> # Initial setup\r\n> $publisher->safe_psql(\r\n> 'postgres', qq[\r\n> CREATE TABLE tab_upgraded1(id int);\r\n> CREATE TABLE tab_upgraded2(id int);\r\n> ]);\r\n> $old_sub->safe_psql(\r\n> 'postgres', qq[\r\n> CREATE TABLE tab_upgraded1(id int);\r\n> CREATE TABLE tab_upgraded2(id int);\r\n> ]);\r\n\r\nYes, earlier definitions were removed instead.\r\nAlso, some comments were adjusted based on these fixes.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Fri, 16 Feb 2024 05:20:33 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Feb 16, 2024 at 10:50 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Thanks for reviewing! PSA new version.\n>\n\n+# Setup a disabled subscription. The upcoming test will check the\n+# pg_createsubscriber won't work, so it is sufficient.\n+$publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\n\nWhy is pg_createsubscriber referred to here? I think it is a typo.\n\nOther than that, the patch looks good to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 16 Feb 2024 18:07:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, 16 Feb 2024 at 10:50, Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Vignesh,\n>\n> Thanks for reviewing! PSA new version.\n>\n> >\n> > Thanks for the updated patch, few suggestions:\n> > 1) Can we use a new publication for this subscription too so that the\n> > publication and subscription naming will become consistent throughout\n> > the test case:\n> > +# Table will be in 'd' (data is being copied) state as table sync will fail\n> > +# because of primary key constraint error.\n> > +my $started_query =\n> > + \"SELECT count(1) = 1 FROM pg_subscription_rel WHERE srsubstate = 'd'\";\n> > +$old_sub->poll_query_until('postgres', $started_query)\n> > + or die\n> > + \"Timed out while waiting for the table state to become 'd' (datasync)\";\n> > +\n> > +# Create another subscription and drop the subscription's replication origin\n> > +$old_sub->safe_psql('postgres',\n> > + \"CREATE SUBSCRIPTION regress_sub3 CONNECTION '$connstr'\n> > PUBLICATION regress_pub2 WITH (enabled = false)\"\n> > +);\n> >\n> > So after the change it will become like subscription regress_sub3 for\n> > publication regress_pub3, subscription regress_sub4 for publication\n> > regress_pub4 and subscription regress_sub5 for publication\n> > regress_pub5.\n>\n> A new publication was defined.\n>\n> > 2) The tab_upgraded1 table can be created along with create\n> > publication and create subscription itself:\n> > $publisher->safe_psql('postgres',\n> > \"CREATE PUBLICATION regress_pub3 FOR TABLE tab_upgraded1\");\n> > $old_sub->safe_psql('postgres',\n> > \"CREATE SUBSCRIPTION regress_sub4 CONNECTION '$connstr' PUBLICATION\n> > regress_pub3 WITH (failover = true)\"\n> > );\n>\n> The definition of tab_upgraded1 was moved to the place you pointed.\n>\n> > 3) The tab_upgraded2 table can be created along with create\n> > publication and create subscription itself to keep it consistent:\n> > $publisher->safe_psql('postgres',\n> > - \"ALTER PUBLICATION 
regress_pub2 ADD TABLE tab_upgraded2\");\n> > + \"CREATE PUBLICATION regress_pub4 FOR TABLE tab_upgraded2\");\n> > $old_sub->safe_psql('postgres',\n> > - \"ALTER SUBSCRIPTION regress_sub2 REFRESH PUBLICATION\");\n> > + \"CREATE SUBSCRIPTION regress_sub5 CONNECTION '$connstr'\n> > PUBLICATION regress_pub4\"\n> > +);\n>\n> Ditto.\n>\n> > With above fixes, the following can be removed:\n> > # Initial setup\n> > $publisher->safe_psql(\n> > 'postgres', qq[\n> > CREATE TABLE tab_upgraded1(id int);\n> > CREATE TABLE tab_upgraded2(id int);\n> > ]);\n> > $old_sub->safe_psql(\n> > 'postgres', qq[\n> > CREATE TABLE tab_upgraded1(id int);\n> > CREATE TABLE tab_upgraded2(id int);\n> > ]);\n>\n> Yes, earlier definitions were removed instead.\n> Also, some comments were adjusted based on these fixes.\n\nThanks for the updated patch, Few suggestions:\n1) This can be moved to keep it similar to other tests:\n+# Setup a disabled subscription. The upcoming test will check the\n+# pg_createsubscriber won't work, so it is sufficient.\n+$publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\n+$old_sub->safe_psql('postgres',\n+ \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr'\nPUBLICATION regress_pub1 WITH (enabled = false)\"\n+);\n+\n+$old_sub->stop;\n+\n+# ------------------------------------------------------\n+# Check that pg_upgrade fails when max_replication_slots configured in the new\n+# cluster is less than the number of subscriptions in the old cluster.\n+# ------------------------------------------------------\n+$new_sub->append_conf('postgresql.conf', \"max_replication_slots = 0\");\n+\n+# pg_upgrade will fail because the new cluster has insufficient\n+# max_replication_slots.\n+command_checks_all(\n+ [\n+ 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n+ '-D', $new_sub->data_dir, '-b', $oldbindir,\n+ '-B', $newbindir, '-s', $new_sub->host,\n+ '-p', $old_sub->port, '-P', $new_sub->port,\n+ $mode, '--check',\n+ ],\n\nlike below and the extra 
comment can be removed:\n+# ------------------------------------------------------\n+# Check that pg_upgrade fails when max_replication_slots configured in the new\n+# cluster is less than the number of subscriptions in the old cluster.\n+# ------------------------------------------------------\n+# Create a disabled subscription.\n+$publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\n+$old_sub->safe_psql('postgres',\n+ \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr'\nPUBLICATION regress_pub1 WITH (enabled = false)\"\n+);\n+\n+$old_sub->stop;\n+\n+$new_sub->append_conf('postgresql.conf', \"max_replication_slots = 0\");\n+\n+# pg_upgrade will fail because the new cluster has insufficient\n+# max_replication_slots.\n+command_checks_all(\n+ [\n+ 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n+ '-D', $new_sub->data_dir, '-b', $oldbindir,\n+ '-B', $newbindir, '-s', $new_sub->host,\n+ '-p', $old_sub->port, '-P', $new_sub->port,\n+ $mode, '--check',\n+ ],\n\n2) This comment can be slightly changed:\n+# Change configuration as well not to start the initial sync automatically\n+$new_sub->append_conf('postgresql.conf',\n+ \"max_logical_replication_workers = 0\");\n\nto:\nChange configuration so that initial table sync does not get\nstarted automatically\n\n3) The old comments were slightly better:\n# Resume the initial sync and wait until all tables of subscription\n# 'regress_sub5' are synchronized\n$new_sub->append_conf('postgresql.conf',\n\"max_logical_replication_workers = 10\");\n$new_sub->restart;\n$new_sub->safe_psql('postgres', \"ALTER SUBSCRIPTION regress_sub5 ENABLE\");\n$new_sub->wait_for_subscription_sync($publisher, 'regress_sub5');\n\nLike:\n# Enable the subscription\n$new_sub->safe_psql('postgres', \"ALTER SUBSCRIPTION regress_sub5 ENABLE\");\n\n# Wait until all tables of subscription 'regress_sub5' are synchronized\n$new_sub->wait_for_subscription_sync($publisher, 'regress_sub5');\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 17 Feb 2024 10:04:51 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Sat, Feb 17, 2024 at 10:05 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, 16 Feb 2024 at 10:50, Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n>\n> Thanks for the updated patch, Few suggestions:\n> 1) This can be moved to keep it similar to other tests:\n> +# Setup a disabled subscription. The upcoming test will check the\n> +# pg_createsubscriber won't work, so it is sufficient.\n> +$publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\n> +$old_sub->safe_psql('postgres',\n> + \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr'\n> PUBLICATION regress_pub1 WITH (enabled = false)\"\n> +);\n> +\n> +$old_sub->stop;\n> +\n> +# ------------------------------------------------------\n> +# Check that pg_upgrade fails when max_replication_slots configured in the new\n> +# cluster is less than the number of subscriptions in the old cluster.\n> +# ------------------------------------------------------\n> +$new_sub->append_conf('postgresql.conf', \"max_replication_slots = 0\");\n> +\n> +# pg_upgrade will fail because the new cluster has insufficient\n> +# max_replication_slots.\n> +command_checks_all(\n> + [\n> + 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n> + '-D', $new_sub->data_dir, '-b', $oldbindir,\n> + '-B', $newbindir, '-s', $new_sub->host,\n> + '-p', $old_sub->port, '-P', $new_sub->port,\n> + $mode, '--check',\n> + ],\n>\n> like below and the extra comment can be removed:\n> +# ------------------------------------------------------\n> +# Check that pg_upgrade fails when max_replication_slots configured in the new\n> +# cluster is less than the number of subscriptions in the old cluster.\n> +# ------------------------------------------------------\n> +# Create a disabled subscription.\n>\n\nIt is okay to adjust as you are suggesting but I find Kuroda-San's\ncomment better than just saying: \"Create a disabled subscription.\" as\nthat explicitly tells why it is okay to create a 
disabled\nsubscription.\n\n>\n> 3) The old comments were slightly better:\n> # Resume the initial sync and wait until all tables of subscription\n> # 'regress_sub5' are synchronized\n> $new_sub->append_conf('postgresql.conf',\n> \"max_logical_replication_workers = 10\");\n> $new_sub->restart;\n> $new_sub->safe_psql('postgres', \"ALTER SUBSCRIPTION regress_sub5 ENABLE\");\n> $new_sub->wait_for_subscription_sync($publisher, 'regress_sub5');\n>\n> Like:\n> # Enable the subscription\n> $new_sub->safe_psql('postgres', \"ALTER SUBSCRIPTION regress_sub5 ENABLE\");\n>\n> # Wait until all tables of subscription 'regress_sub5' are synchronized\n> $new_sub->wait_for_subscription_sync($publisher, 'regress_sub5');\n>\n\nI would prefer Kuroda-San's version, as his version of the comment\nexplains the intent of the test better, whereas what you are saying is\nexactly what the next line of code is doing and is\nself-explanatory.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 17 Feb 2024 10:52:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThanks for reviewing! PSA new version.\r\n\r\n> \r\n> Thanks for the updated patch, Few suggestions:\r\n> 1) This can be moved to keep it similar to other tests:\r\n> +# Setup a disabled subscription. The upcoming test will check the\r\n> +# pg_createsubscriber won't work, so it is sufficient.\r\n> +$publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\r\n> +$old_sub->safe_psql('postgres',\r\n> + \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr'\r\n> PUBLICATION regress_pub1 WITH (enabled = false)\"\r\n> +);\r\n> +\r\n> +$old_sub->stop;\r\n> +\r\n> +# ------------------------------------------------------\r\n> +# Check that pg_upgrade fails when max_replication_slots configured in the new\r\n> +# cluster is less than the number of subscriptions in the old cluster.\r\n> +# ------------------------------------------------------\r\n> +$new_sub->append_conf('postgresql.conf', \"max_replication_slots = 0\");\r\n> +\r\n> +# pg_upgrade will fail because the new cluster has insufficient\r\n> +# max_replication_slots.\r\n> +command_checks_all(\r\n> + [\r\n> + 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\r\n> + '-D', $new_sub->data_dir, '-b', $oldbindir,\r\n> + '-B', $newbindir, '-s', $new_sub->host,\r\n> + '-p', $old_sub->port, '-P', $new_sub->port,\r\n> + $mode, '--check',\r\n> + ],\r\n> \r\n> like below and the extra comment can be removed:\r\n> +# ------------------------------------------------------\r\n> +# Check that pg_upgrade fails when max_replication_slots configured in the new\r\n> +# cluster is less than the number of subscriptions in the old cluster.\r\n> +# ------------------------------------------------------\r\n> +# Create a disabled subscription.\r\n> +$publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\r\n> +$old_sub->safe_psql('postgres',\r\n> + \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr'\r\n> PUBLICATION regress_pub1 WITH (enabled = false)\"\r\n> 
+);\r\n> +\r\n> +$old_sub->stop;\r\n> +\r\n> +$new_sub->append_conf('postgresql.conf', \"max_replication_slots = 0\");\r\n> +\r\n> +# pg_upgrade will fail because the new cluster has insufficient\r\n> +# max_replication_slots.\r\n> +command_checks_all(\r\n> + [\r\n> + 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\r\n> + '-D', $new_sub->data_dir, '-b', $oldbindir,\r\n> + '-B', $newbindir, '-s', $new_sub->host,\r\n> + '-p', $old_sub->port, '-P', $new_sub->port,\r\n> + $mode, '--check',\r\n> + ],\r\n\r\nPartially fixed. I moved the creation part to below but comments were kept.\r\n\r\n> 2) This comment can be slightly changed:\r\n> +# Change configuration as well not to start the initial sync automatically\r\n> +$new_sub->append_conf('postgresql.conf',\r\n> + \"max_logical_replication_workers = 0\");\r\n> \r\n> to:\r\n> Change configuration so that initial table sync sync does not get\r\n> started automatically\r\n\r\nFixed.\r\n\r\n> 3) The old comments were slightly better:\r\n> # Resume the initial sync and wait until all tables of subscription\r\n> # 'regress_sub5' are synchronized\r\n> $new_sub->append_conf('postgresql.conf',\r\n> \"max_logical_replication_workers = 10\");\r\n> $new_sub->restart;\r\n> $new_sub->safe_psql('postgres', \"ALTER SUBSCRIPTION regress_sub5\r\n> ENABLE\");\r\n> $new_sub->wait_for_subscription_sync($publisher, 'regress_sub5');\r\n> \r\n> Like:\r\n> # Enable the subscription\r\n> $new_sub->safe_psql('postgres', \"ALTER SUBSCRIPTION regress_sub5\r\n> ENABLE\");\r\n> \r\n> # Wait until all tables of subscription 'regress_sub5' are synchronized\r\n> $new_sub->wait_for_subscription_sync($publisher, 'regress_sub5');\r\n\r\nPer comments from Amit [1], I did not change.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAA4eK1Ls%2BRmJtTvOgaRXd%2BeHSY3x-KUE%3DsfEGQoU-JF_UzA62A%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/",
"msg_date": "Mon, 19 Feb 2024 01:24:35 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 19 Feb 2024 at 06:54, Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Vignesh,\n>\n> Thanks for reviewing! PSA new version.\n>\n> >\n> > Thanks for the updated patch, Few suggestions:\n> > 1) This can be moved to keep it similar to other tests:\n> > +# Setup a disabled subscription. The upcoming test will check the\n> > +# pg_createsubscriber won't work, so it is sufficient.\n> > +$publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\n> > +$old_sub->safe_psql('postgres',\n> > + \"CREATE SUBSCRIPTION regress_sub1 CONNECTION '$connstr'\n> > PUBLICATION regress_pub1 WITH (enabled = false)\"\n> > +);\n> > +\n> > +$old_sub->stop;\n> > +\n> > +# ------------------------------------------------------\n> > +# Check that pg_upgrade fails when max_replication_slots configured in the new\n> > +# cluster is less than the number of subscriptions in the old cluster.\n> > +# ------------------------------------------------------\n> > +$new_sub->append_conf('postgresql.conf', \"max_replication_slots = 0\");\n> > +\n> > +# pg_upgrade will fail because the new cluster has insufficient\n> > +# max_replication_slots.\n> > +command_checks_all(\n> > + [\n> > + 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n> > + '-D', $new_sub->data_dir, '-b', $oldbindir,\n> > + '-B', $newbindir, '-s', $new_sub->host,\n> > + '-p', $old_sub->port, '-P', $new_sub->port,\n> > + $mode, '--check',\n> > + ],\n> >\n> > like below and the extra comment can be removed:\n> > +# ------------------------------------------------------\n> > +# Check that pg_upgrade fails when max_replication_slots configured in the new\n> > +# cluster is less than the number of subscriptions in the old cluster.\n> > +# ------------------------------------------------------\n> > +# Create a disabled subscription.\n> > +$publisher->safe_psql('postgres', \"CREATE PUBLICATION regress_pub1\");\n> > +$old_sub->safe_psql('postgres',\n> > + \"CREATE SUBSCRIPTION 
regress_sub1 CONNECTION '$connstr'\n> > PUBLICATION regress_pub1 WITH (enabled = false)\"\n> > +);\n> > +\n> > +$old_sub->stop;\n> > +\n> > +$new_sub->append_conf('postgresql.conf', \"max_replication_slots = 0\");\n> > +\n> > +# pg_upgrade will fail because the new cluster has insufficient\n> > +# max_replication_slots.\n> > +command_checks_all(\n> > + [\n> > + 'pg_upgrade', '--no-sync', '-d', $old_sub->data_dir,\n> > + '-D', $new_sub->data_dir, '-b', $oldbindir,\n> > + '-B', $newbindir, '-s', $new_sub->host,\n> > + '-p', $old_sub->port, '-P', $new_sub->port,\n> > + $mode, '--check',\n> > + ],\n>\n> Partially fixed. I moved the creation part to below but comments were kept.\n>\n> > 2) This comment can be slightly changed:\n> > +# Change configuration as well not to start the initial sync automatically\n> > +$new_sub->append_conf('postgresql.conf',\n> > + \"max_logical_replication_workers = 0\");\n> >\n> > to:\n> > Change configuration so that initial table sync sync does not get\n> > started automatically\n>\n> Fixed.\n>\n> > 3) The old comments were slightly better:\n> > # Resume the initial sync and wait until all tables of subscription\n> > # 'regress_sub5' are synchronized\n> > $new_sub->append_conf('postgresql.conf',\n> > \"max_logical_replication_workers = 10\");\n> > $new_sub->restart;\n> > $new_sub->safe_psql('postgres', \"ALTER SUBSCRIPTION regress_sub5\n> > ENABLE\");\n> > $new_sub->wait_for_subscription_sync($publisher, 'regress_sub5');\n> >\n> > Like:\n> > # Enable the subscription\n> > $new_sub->safe_psql('postgres', \"ALTER SUBSCRIPTION regress_sub5\n> > ENABLE\");\n> >\n> > # Wait until all tables of subscription 'regress_sub5' are synchronized\n> > $new_sub->wait_for_subscription_sync($publisher, 'regress_sub5');\n>\n> Per comments from Amit [1], I did not change.\n\nThanks for the updated patch, I don't have any more comments.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 19 Feb 2024 09:22:26 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 6:54 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Thanks for reviewing! PSA new version.\n>\n\nPushed this after making minor changes in the comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 19 Feb 2024 12:38:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, 19 Feb 2024 at 12:38, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Feb 19, 2024 at 6:54 AM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Thanks for reviewing! PSA new version.\n> >\n>\n> Pushed this after making minor changes in the comments.\n\nRecently there was a failure in the 004_subscription TAP test at [1].\nIn this failure, the tab_upgraded1 table was expected to have 51\nrecords but had only 50 records. Before the upgrade, both the publisher\nand the subscriber have 50 records.\nAfter the upgrade, we insert one record on the publisher, so\ntab_upgraded1 will have 51 records there. Then we start the\nsubscriber after changing max_logical_replication_workers so that\napply workers get started and apply the changes received. After\nstarting, we enable regress_sub5, wait for the regress_sub5 subscription\nto sync, and check the tab_upgraded1 and tab_upgraded2 table data.\nIn a few random cases, the one record that was inserted into the\ntab_upgraded1 table will not get replicated, as we have not waited for\nthe regress_sub4 subscription to apply the changes from the publisher.\nThe attached patch has changes to wait for the regress_sub4 subscription\nto apply the changes from the publisher before verifying the data.\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-03-26%2004%3A23%3A13\n\nRegards,\nVignesh",
"msg_date": "Wed, 27 Mar 2024 11:57:29 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\n> \r\n> Recently there was a failure in 004_subscription tap test at [1].\r\n> In this failure, the tab_upgraded1 table was expected to have 51\r\n> records but has only 50 records. Before the upgrade both publisher and\r\n> subscriber have 50 records.\r\n\r\nGood catch!\r\n\r\n> After the upgrade we have inserted one record in the publisher, now\r\n> tab_upgraded1 will have 51 records in the publisher. Then we start the\r\n> subscriber after changing max_logical_replication_workers so that\r\n> apply workers get started and apply the changes received. After\r\n> starting we enable regress_sub5, wait for sync of regress_sub5\r\n> subscription and check for tab_upgraded1 and tab_upgraded2 table data.\r\n> In a few random cases the one record that was inserted into\r\n> tab_upgraded1 table will not get replicated as we have not waited for\r\n> regress_sub4 subscription to apply the changes from the publisher.\r\n> The attached patch has changes to wait for regress_sub4 subscription\r\n> to apply the changes from the publisher before verifying the data.\r\n> \r\n> [1] -\r\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-03-\r\n> 26%2004%3A23%3A13\r\n\r\nYeah, I think it is an oversight in f17529. Previously, subscriptions that were\r\nreceiving changes were confirmed to be caught up; I missed adding that line while\r\nrestructuring the script. +1 for your fix.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n",
"msg_date": "Wed, 27 Mar 2024 06:59:58 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 11:57 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> The attached patch has changes to wait for regress_sub4 subscription\n> to apply the changes from the publisher before verifying the data.\n>\n\nPushed after changing the order of waits, as it looks logical to wait\nfor regress_sub5 after enabling the subscription. Thanks\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Mar 2024 14:25:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "I've been looking into optimizing pg_upgrade's once-in-each-database steps\n[0], and I noticed that we are opening a connection to every database in\nthe cluster and running a query like\n\n\tSELECT count(*) FROM pg_catalog.pg_subscription WHERE subdbid = %d;\n\nThen, later on, we combine all of these values in\ncount_old_cluster_subscriptions() to verify that max_replication_slots is\nset high enough. AFAICT these per-database subscription counts aren't used\nfor anything else.\n\nThis is an extremely expensive way to perform that check, and so I'm\nwondering why we don't just do\n\n\tSELECT count(*) FROM pg_catalog.pg_subscription;\n\nonce in count_old_cluster_subscriptions().\n\n[0] https://commitfest.postgresql.org/48/4995/\n\n-- \nnathan\n\n\n",
"msg_date": "Fri, 19 Jul 2024 15:44:22 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 03:44:22PM -0500, Nathan Bossart wrote:\n> I've been looking into optimizing pg_upgrade's once-in-each-database steps\n> [0], and I noticed that we are opening a connection to every database in\n> the cluster and running a query like\n> \n> \tSELECT count(*) FROM pg_catalog.pg_subscription WHERE subdbid = %d;\n> \n> Then, later on, we combine all of these values in\n> count_old_cluster_subscriptions() to verify that max_replication_slots is\n> set high enough. AFAICT these per-database subscription counts aren't used\n> for anything else.\n> \n> This is an extremely expensive way to perform that check, and so I'm\n> wondering why we don't just do\n> \n> \tSELECT count(*) FROM pg_catalog.pg_subscription;\n> \n> once in count_old_cluster_subscriptions().\n\nLike so...\n\n-- \nnathan",
"msg_date": "Sat, 20 Jul 2024 21:03:07 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Sat, Jul 20, 2024 at 09:03:07PM -0500, Nathan Bossart wrote:\n>> This is an extremely expensive way to perform that check, and so I'm\n>> wondering why we don't just do\n>> \n>> \tSELECT count(*) FROM pg_catalog.pg_subscription;\n>> \n>> once in count_old_cluster_subscriptions().\n> \n> Like so...\n\nAh, good catch. That sounds like a good thing to do because we don't\ncare about the number of subscriptions for each database in the\ncurrent code.\n\nThis is something that qualifies as an open item, IMO, as this code\nis new to PG17.\n\nA comment in get_db_rel_and_slot_infos() becomes incorrect where\nget_old_cluster_logical_slot_infos() is called; it is still referring\nto the subscription count.\n\nActually, on the same grounds, couldn't we do the logical slot info\nretrieval in get_old_cluster_logical_slot_infos() in a single pass as\nwell? pg_replication_slots reports some information about all the\nslots, and the current code has a qual on current_database(). It\nlooks to me that this could be replaced by a single query, ordering\nthe slots by database names, assigning the slot infos in each\ndatabase's DbInfo at the end. That would be much more efficient if\ndealing with a lot of databases.\n--\nMichael",
"msg_date": "Mon, 22 Jul 2024 11:05:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 7:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Jul 20, 2024 at 09:03:07PM -0500, Nathan Bossart wrote:\n> >> This is an extremely expensive way to perform that check, and so I'm\n> >> wondering why we don't just do\n> >>\n> >> SELECT count(*) FROM pg_catalog.pg_subscription;\n> >>\n> >> once in count_old_cluster_subscriptions().\n> >\n> > Like so...\n\nIsn't it better to directly invoke get_subscription_count() in\ncheck_new_cluster_subscription_configuration() where it is required\nrather than in a db-specific general function?\n\n>\n> Ah, good catch. That sounds like a good thing to do because we don't\n> care about the number of subscriptions for each database in the\n> current code.\n>\n> This is something that qualifies as an open item, IMO, as this code\n> is new to PG17.\n>\n> A comment in get_db_rel_and_slot_infos() becomes incorrect where\n> get_old_cluster_logical_slot_infos() is called; it is still referring\n> to the subscription count.\n>\n> Actually, on the same grounds, couldn't we do the logical slot info\n> retrieval in get_old_cluster_logical_slot_infos() in a single pass as\n> well? pg_replication_slots reports some information about all the\n> slots, and the current code has a qual on current_database(). It\n> looks to me that this could be replaced by a single query, ordering\n> the slots by database names, assigning the slot infos in each\n> database's DbInfo at the end.\n>\n\nUnlike subscriptions, logical slots are database-specific objects. We\nhave some checks in the code like the one in CreateDecodingContext()\nfor MyDatabaseId which may or may not create a problem for this case\nas we don't consume changes when checking\nLogicalReplicationSlotHasPendingWal via\nbinary_upgrade_logical_slot_has_caught_up() but I think this needs\nmore analysis than what Nathan has proposed. 
So, I suggest taking up\nthis task for PG18 if we want to optimize this code path.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 22 Jul 2024 15:45:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 03:45:19PM +0530, Amit Kapila wrote:\n> On Mon, Jul 22, 2024 at 7:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Sat, Jul 20, 2024 at 09:03:07PM -0500, Nathan Bossart wrote:\n>> >> This is an extremely expensive way to perform that check, and so I'm\n>> >> wondering why we don't just do\n>> >>\n>> >> SELECT count(*) FROM pg_catalog.pg_subscription;\n>> >>\n>> >> once in count_old_cluster_subscriptions().\n>> >\n>> > Like so...\n> \n> Isn't it better to directly invoke get_subscription_count() in\n> check_new_cluster_subscription_configuration() where it is required\n> rather than in a db-specific general function?\n\nIIUC the old cluster won't be running at that point.\n\n>> Ah, good catch. That sounds like a good thing to do because we don't\n>> care about the number of subscriptions for each database in the\n>> current code.\n>>\n>> This is something that qualifies as an open item, IMO, as this code\n>> is new to PG17.\n\n+1\n\n>> A comment in get_db_rel_and_slot_infos() becomes incorrect where\n>> get_old_cluster_logical_slot_infos() is called; it is still referring\n>> to the subscription count.\n\nI removed this comment since IMHO it doesn't add much.\n\n>> Actually, on the same grounds, couldn't we do the logical slot info\n>> retrieval in get_old_cluster_logical_slot_infos() in a single pass as\n>> well? pg_replication_slots reports some information about all the\n>> slots, and the current code has a qual on current_database(). It\n>> looks to me that this could be replaced by a single query, ordering\n>> the slots by database names, assigning the slot infos in each\n>> database's DbInfo at the end.\n> \n> Unlike subscriptions, logical slots are database-specific objects. 
We\n> have some checks in the code like the one in CreateDecodingContext()\n> for MyDatabaseId which may or may not create a problem for this case\n> as we don't consume changes when checking\n> LogicalReplicationSlotHasPendingWal via\n> binary_upgrade_logical_slot_has_caught_up() but I think this needs\n> more analysis than what Nathan has proposed. So, I suggest taking up\n> this task for PG18 if we want to optimize this code path.\n\nI see what you mean.\n\n-- \nnathan",
"msg_date": "Mon, 22 Jul 2024 09:46:29 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 09:46:29AM -0500, Nathan Bossart wrote:\n> On Mon, Jul 22, 2024 at 03:45:19PM +0530, Amit Kapila wrote:\n>> On Mon, Jul 22, 2024 at 7:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>> A comment in get_db_rel_and_slot_infos() becomes incorrect where\n>>> get_old_cluster_logical_slot_infos() is called; it is still referring\n>>> to the subscription count.\n> \n> I removed this comment since IMHO it doesn't add much.\n\nWFM.\n\n>>> Actually, on the same grounds, couldn't we do the logical slot info\n>>> retrieval in get_old_cluster_logical_slot_infos() in a single pass as\n>>> well? pg_replication_slots reports some information about all the\n>>> slots, and the current code has a qual on current_database(). It\n>>> looks to me that this could be replaced by a single query, ordering\n>>> the slots by database names, assigning the slot infos in each\n>>> database's DbInfo at the end.\n>> \n>> Unlike subscriptions, logical slots are database-specific objects. We\n>> have some checks in the code like the one in CreateDecodingContext()\n>> for MyDatabaseId which may or may not create a problem for this case\n>> as we don't consume changes when checking\n>> LogicalReplicationSlotHasPendingWal via\n>> binary_upgrade_logical_slot_has_caught_up() but I think this needs\n>> more analysis than what Nathan has proposed. So, I suggest taking up\n>> this task for PG18 if we want to optimize this code path.\n> \n> I see what you mean.\n\nI am not sure to get the reason why get_old_cluster_logical_slot_infos()\ncould not be optimized, TBH. 
LogicalReplicationSlotHasPendingWal()\nuses the fast forward mode where no changes are generated, hence there\nshould be no need for a dependency to a connection to a specific\ndatabase :)\n\nCombined to a hash table based on the database name and/or OID to know\nto which dbinfo to attach the information of a slot, then it should be\npossible to use one query, making the slot info gathering closer to\nO(N) rather than the current O(N^2).\n--\nMichael",
"msg_date": "Tue, 23 Jul 2024 08:03:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 4:33 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jul 22, 2024 at 09:46:29AM -0500, Nathan Bossart wrote:\n> > On Mon, Jul 22, 2024 at 03:45:19PM +0530, Amit Kapila wrote:\n> >>\n> >> Unlike subscriptions, logical slots are database-specific objects. We\n> >> have some checks in the code like the one in CreateDecodingContext()\n> >> for MyDatabaseId which may or may not create a problem for this case\n> >> as we don't consume changes when checking\n> >> LogicalReplicationSlotHasPendingWal via\n> >> binary_upgrade_logical_slot_has_caught_up() but I think this needs\n> >> more analysis than what Nathan has proposed. So, I suggest taking up\n> >> this task for PG18 if we want to optimize this code path.\n> >\n> > I see what you mean.\n>\n> I am not sure to get the reason why get_old_cluster_logical_slot_infos()\n> could not be optimized, TBH. LogicalReplicationSlotHasPendingWal()\n> uses the fast forward mode where no changes are generated, hence there\n> should be no need for a dependency to a connection to a specific\n> database :)\n>\n> Combined to a hash table based on the database name and/or OID to know\n> to which dbinfo to attach the information of a slot, then it should be\n> possible to use one query, making the slot info gathering closer to\n> O(N) rather than the current O(N^2).\n>\n\nThe point is that unlike subscriptions, logical slots are not\ncluster-level objects. So, this needs more careful design decisions\nrather than a fix-up patch for PG-17. One more thing: after collecting\nslot-level information, we also want to consider the creation of slots, which again\nare created at the per-database level.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 23 Jul 2024 08:27:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "Dear Amit, Michael,\r\n\r\n> > I am not sure to get the reason why get_old_cluster_logical_slot_infos()\r\n> > could not be optimized, TBH. LogicalReplicationSlotHasPendingWal()\r\n> > uses the fast forward mode where no changes are generated, hence there\r\n> > should be no need for a dependency to a connection to a specific\r\n> > database :)\r\n> >\r\n> > Combined to a hash table based on the database name and/or OID to know\r\n> > to which dbinfo to attach the information of a slot, then it should be\r\n> > possible to use one query, making the slot info gathering closer to\r\n> > O(N) rather than the current O(N^2).\r\n> >\r\n> \r\n> The point is that unlike subscriptions logical slots are not\r\n> cluster-level objects. So, this needs more careful design decisions\r\n> rather than a fix-up patch for PG-17. One more thing after collecting\r\n> slot-level, we also want to consider the creation of slots which again\r\n> are created at per-database level.\r\n\r\nI also considered the combination with the optimization (parallelization) of\r\npg_upgrade [1]. IIUC, the patch tries to connect to some databases in parallel\r\nand run commands. The current style of create_logical_replication_slots() can be\r\neasily adapted because tasks are divided per database.\r\n\r\nHowever, if we change like get_old_cluster_logical_slot_infos() to do in a single\r\npass, we may have to shift LogicalSlotInfoArr to cluster-wide data and store the\r\ndatabase name in LogicalSlotInfo. Also, in create_logical_replication_slots(),\r\nwe may have to check the located database for every slot and connect to the\r\nappropriate database. These changes make it difficult to parallelize the operation.\r\n\r\n[1]: https://www.postgresql.org/message-id/flat/20240516211638.GA1688936@nathanxps13\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n",
"msg_date": "Tue, 23 Jul 2024 03:08:40 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade and logical replication"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 8:16 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Jul 22, 2024 at 03:45:19PM +0530, Amit Kapila wrote:\n> > On Mon, Jul 22, 2024 at 7:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> On Sat, Jul 20, 2024 at 09:03:07PM -0500, Nathan Bossart wrote:\n> >> >> This is an extremely expensive way to perform that check, and so I'm\n> >> >> wondering why we don't just do\n> >> >>\n> >> >> SELECT count(*) FROM pg_catalog.pg_subscription;\n> >> >>\n> >> >> once in count_old_cluster_subscriptions().\n> >> >\n> >> > Like so...\n> >\n> > Isn't it better to directly invoke get_subscription_count() in\n> > check_new_cluster_subscription_configuration() where it is required\n> > rather than in a db-specific general function?\n>\n> IIUC the old cluster won't be running at that point.\n>\n\nRight, the other option would be to move it to the place where we call\ncheck_old_cluster_for_valid_slots(), etc. Initially, it was kept in\nthe specific function (get_db_rel_and_slot_infos) as we were\nmaintaining the count at the per-database level but now as we are\nchanging that I am not sure if calling it from the same place is a\ngood idea. But OTOH, it is okay to keep it at the place where we\nretrieve the required information from the old cluster.\n\nOne minor point is the comment atop get_subscription_count() still\nrefers to the function name as get_db_subscription_count().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 23 Jul 2024 09:05:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 09:05:05AM +0530, Amit Kapila wrote:\n> Right, the other option would be to move it to the place where we call\n> check_old_cluster_for_valid_slots(), etc. Initially, it was kept in\n> the specific function (get_db_rel_and_slot_infos) as we were\n> maintaining the count at the per-database level but now as we are\n> changing that I am not sure if calling it from the same place is a\n> good idea. But OTOH, it is okay to keep it at the place where we\n> retrieve the required information from the old cluster.\n\nI moved it to where you suggested.\n\n> One minor point is the comment atop get_subscription_count() still\n> refers to the function name as get_db_subscription_count().\n\nOops, fixed.\n\n-- \nnathan",
"msg_date": "Tue, 23 Jul 2024 14:55:28 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 1:25 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Jul 23, 2024 at 09:05:05AM +0530, Amit Kapila wrote:\n> > Right, the other option would be to move it to the place where we call\n> > check_old_cluster_for_valid_slots(), etc. Initially, it was kept in\n> > the specific function (get_db_rel_and_slot_infos) as we were\n> > mainlining the count at the per-database level but now as we are\n> > changing that I am not sure if calling it from the same place is a\n> > good idea. But OTOH, it is okay to keep it at the place where we\n> > retrieve the required information from the old cluster.\n>\n> I moved it to where you suggested.\n>\n> > One minor point is the comment atop get_subscription_count() still\n> > refers to the function name as get_db_subscription_count().\n>\n> Oops, fixed.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 24 Jul 2024 11:32:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 11:32:47AM +0530, Amit Kapila wrote:\n> LGTM.\n\nThanks for reviewing. Committed and back-patched to v17.\n\n-- \nnathan\n\n\n",
"msg_date": "Wed, 24 Jul 2024 11:33:55 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 10:03 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Wed, Jul 24, 2024 at 11:32:47AM +0530, Amit Kapila wrote:\n> > LGTM.\n>\n> Thanks for reviewing. Committed and back-patched to v17.\n>\n\nShall we close the open items? I think even if we want to improve the\nslot fetching/creation mechanism, it should be part of PG18.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 25 Jul 2024 08:41:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Jul 25, 2024 at 8:41 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 24, 2024 at 10:03 PM Nathan Bossart\n> <nathandbossart@gmail.com> wrote:\n> >\n> > On Wed, Jul 24, 2024 at 11:32:47AM +0530, Amit Kapila wrote:\n> > > LGTM.\n> >\n> > Thanks for reviewing. Committed and back-patched to v17.\n> >\n>\n> Shall we close the open items?\n>\n\nSorry for the typo. There is only one open item corresponding to this:\n\"Subscription and slot information retrieval inefficiency in\npg_upgrade\" which according to me should be closed after your commit.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 25 Jul 2024 08:43:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Thu, Jul 25, 2024 at 08:43:03AM +0530, Amit Kapila wrote:\n>> Shall we close the open items?\n> \n> Sorry for the typo. There is only one open item corresponding to this:\n> \"Subscription and slot information retrieval inefficiency in\n> pg_upgrade\" which according to me should be closed after your commit.\n\nOops, I forgot to do that. I've moved it to the \"resolved before 17beta3\"\nsection.\n\n-- \nnathan\n\n\n",
"msg_date": "Wed, 24 Jul 2024 22:16:51 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 10:16:51PM -0500, Nathan Bossart wrote:\n> On Thu, Jul 25, 2024 at 08:43:03AM +0530, Amit Kapila wrote:\n>>> Shall we close the open items?\n>> \n>> Sorry for the typo. There is only one open item corresponding to this:\n>> \"Subscription and slot information retrieval inefficiency in\n>> pg_upgrade\" which according to me should be closed after your commit.\n> \n> Oops, I forgot to do that. I've moved it to the \"resolved before 17beta3\"\n> section.\n\nRemoving the item sounds good to me. Thanks.\n--\nMichael",
"msg_date": "Thu, 25 Jul 2024 17:16:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and logical replication"
}
] |
[
{
"msg_contents": "Hi,\n\nHere is a small patch to improve the note, which was added by commit\n97da48246 (\"Allow batch insertion during COPY into a foreign table.\"),\nby adding an explanation about how the actual number of rows\npostgres_fdw inserts at once is determined in the COPY case, including\na limitation that does not apply to the INSERT case.\n\nI will add this to the next CF.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Fri, 17 Feb 2023 17:45:51 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Doc: Improve note about copying into postgres_fdw foreign tables in\n batch"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 5:45 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Here is a small patch to improve the note, which was added by commit\n> 97da48246 (\"Allow batch insertion during COPY into a foreign table.\"),\n> by adding an explanation about how the actual number of rows\n> postgres_fdw inserts at once is determined in the COPY case, including\n> a limitation that does not apply to the INSERT case.\n\nDoes anyone want to comment on this?\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 22 Mar 2023 20:58:40 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Improve note about copying into postgres_fdw foreign tables\n in batch"
},
{
"msg_contents": "> On Fri, Feb 17, 2023 at 5:45 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\r\n>> Here is a small patch to improve the note, which was added by commit\r\n>> 97da48246 (\"Allow batch insertion during COPY into a foreign table.\"),\r\n>> by adding an explanation about how the actual number of rows\r\n>> postgres_fdw inserts at once is determined in the COPY case, including\r\n>> a limitation that does not apply to the INSERT case.\r\n> \r\n> Does anyone want to comment on this?\r\n\r\n> <para>\r\n> - This option also applies when copying into foreign tables.\r\n> + This option also applies when copying into foreign tables. In that case\r\n> + the actual number of rows <filename>postgres_fdw</filename> copies at\r\n> + once is determined in a similar way to in the insert case, but it is\r\n\r\n\"similar way to in\" should be \"similar way to\", maybe?\r\n\r\n> + limited to at most 1000 due to implementation restrictions of the\r\n> + <command>COPY</command> command.\r\n> </para>\r\n> </listitem>\r\n> </varlistentry>\r\n\r\nBest reagards,\r\n--\r\nTatsuo Ishii\r\nSRA OSS LLC\r\nEnglish: http://www.sraoss.co.jp/index_en/\r\nJapanese:http://www.sraoss.co.jp\r\n",
"msg_date": "Wed, 22 Mar 2023 21:08:53 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Improve note about copying into postgres_fdw foreign\n tables in batch"
},
{
"msg_contents": "> On 22 Mar 2023, at 12:58, Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> \n> On Fri, Feb 17, 2023 at 5:45 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>> Here is a small patch to improve the note, which was added by commit\n>> 97da48246 (\"Allow batch insertion during COPY into a foreign table.\"),\n>> by adding an explanation about how the actual number of rows\n>> postgres_fdw inserts at once is determined in the COPY case, including\n>> a limitation that does not apply to the INSERT case.\n> \n> Does anyone want to comment on this?\n\nPatch looks good to me, but I agree with Tatsuo downthread that \"similar way to\nthe insert case\" reads better. Theoretically the number could be different\nfrom 1000 if MAX_BUFFERED_TUPLES was changed in the build, but that's a\nnon-default not worth spending time explaining.\n\n+ the actual number of rows <filename>postgres_fdw</filename> copies at\n\nWhile not the fault of this patch I find it confusing that we mix <filename>\nand <literal> for marking up \"postgres_fdw\", the latter seemingly more correct\n(and less commonly used) than <filename>.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 22 Mar 2023 13:13:01 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Improve note about copying into postgres_fdw foreign tables\n in batch"
},
{
"msg_contents": "> While not the fault of this patch I find it confusing that we mix <filename>\n> and <literal> for marking up \"postgres_fdw\", the latter seemingly more correct\n> (and less commonly used) than <filename>.\n\nI think we traditionally use <filename> for an extension module (file)\nname. It seems the <literal> is used when we want to refer to objects\nother than files.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 22 Mar 2023 21:32:26 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Improve note about copying into postgres_fdw foreign\n tables in batch"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 9:13 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> Patch looks good to me, but I agree with Tatsuo downthread that \"similar way to\n> the insert case\" reads better.\n\nOk, I removed \"in\".\n\n> Theoretically the number could be different\n> from 1000 if MAX_BUFFERED_TUPLES was changed in the build, but that's a\n> non-default not worth spending time explaining.\n\nAgreed.\n\n> + the actual number of rows <filename>postgres_fdw</filename> copies at\n>\n> While not the fault of this patch I find it confusing that we mix <filename>\n> and <literal> for marking up \"postgres_fdw\", the latter seemingly more correct\n> (and less commonly used) than <filename>.\n\nOn Wed, Mar 22, 2023 at 9:32 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> I think we traditionally use <filename> for an extension module (file)\n> name. It seems the <literal> is used when we want to refer to objects\n> other than files.\n\n<filename> seems more appropriate to me as well in this context, so I\nleft it alone.\n\nAttached is an updated version of the patch.\n\nThanks for looking, Daniel and Ishii-san!\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Thu, 23 Mar 2023 18:51:48 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Improve note about copying into postgres_fdw foreign tables\n in batch"
},
{
"msg_contents": "> On 23 Mar 2023, at 10:51, Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n\n> <filename> seems more appropriate to me as well in this context, so I\n> left it alone.\n\nAnd just to be clear, I think you are right in leaving it alone given the\ncontext.\n\n> Attached is an updated version of the patch.\n\nLGTM.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 23 Mar 2023 10:55:30 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Improve note about copying into postgres_fdw foreign tables\n in batch"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 6:55 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > <filename> seems more appropriate to me as well in this context, so I\n> > left it alone.\n>\n> And just to be clear, I think you are right in leaving it alone given the\n> context.\n>\n> > Attached is an updated version of the patch.\n>\n> LGTM.\n\nCool! Pushed.\n\nThanks again!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 24 Mar 2023 13:05:38 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Improve note about copying into postgres_fdw foreign tables\n in batch"
}
] |
[
{
"msg_contents": "Hi,\n\nThe output sql generated by pg_dump for the below function refers to a\nmodified table name:\ncreate table t1 (c1 int);\ncreate table t2 (c1 int);\n\nCREATE OR REPLACE FUNCTION test_fun(c1 int)\nRETURNS void\nLANGUAGE SQL\nBEGIN ATOMIC\n WITH delete_t1 AS (\n DELETE FROM t1 WHERE c1 = $1\n )\n INSERT INTO t1 (c1) SELECT $1 FROM t2;\nEND;\n\nThe below sql output created by pg_dump refers to t1_1 which should\nhave been t1:\nCREATE FUNCTION public.test_fun(c1 integer) RETURNS void\n LANGUAGE sql\n BEGIN ATOMIC\n WITH delete_t1 AS (\n DELETE FROM public.t1\n WHERE (t1_1.c1 = test_fun.c1)\n )\n INSERT INTO public.t1 (c1) SELECT test_fun.c1\n FROM public.t2;\nEND;\n\npg_get_function_sqlbody also returns similar result:\nselect proname, pg_get_function_sqlbody(oid) from pg_proc where\nproname = 'test_fun';\n proname | pg_get_function_sqlbody\n----------+-------------------------------------------\n test_fun | BEGIN ATOMIC +\n | WITH delete_t1 AS ( +\n | DELETE FROM t1 +\n | WHERE (t1_1.c1 = test_fun.c1) +\n | ) +\n | INSERT INTO t1 (c1) SELECT test_fun.c1+\n | FROM t2; +\n | END\n(1 row)\n\nI felt the problem here is with set_rtable_names function which\nchanges the relation name t1 to t1_1 while parsing the statement:\n/*\n* If the selected name isn't unique, append digits to make it so, and\n* make a new hash entry for it once we've got a unique name. For a\n* very long input name, we might have to truncate to stay within\n* NAMEDATALEN.\n*/\n\nDuring the query generation we will set the table names before\ngenerating each statement, in our case the table t1 would have been\nadded already to the hash table during the first insert statement\ngeneration. Next time it will try to set the relation names again for\nthe next statement, i.e delete statement, if the entry with same name\nalready exists, it will change the name to t1_1 by appending a digit\nto keep the hash entry unique.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 17 Feb 2023 15:52:54 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "The output sql generated by pg_dump for a create function refers to a\n modified table name"
},
{
"msg_contents": "On 2/17/23 5:22 AM, vignesh C wrote:\r\n> Hi,\r\n> \r\n> The output sql generated by pg_dump for the below function refers to a\r\n> modified table name:\r\n> create table t1 (c1 int);\r\n> create table t2 (c1 int);\r\n> \r\n> CREATE OR REPLACE FUNCTION test_fun(c1 int)\r\n> RETURNS void\r\n> LANGUAGE SQL\r\n> BEGIN ATOMIC\r\n> WITH delete_t1 AS (\r\n> DELETE FROM t1 WHERE c1 = $1\r\n> )\r\n> INSERT INTO t1 (c1) SELECT $1 FROM t2;\r\n> END;\r\n> \r\n> The below sql output created by pg_dump refers to t1_1 which should\r\n> have been t1:\r\n> CREATE FUNCTION public.test_fun(c1 integer) RETURNS void\r\n> LANGUAGE sql\r\n> BEGIN ATOMIC\r\n> WITH delete_t1 AS (\r\n> DELETE FROM public.t1\r\n> WHERE (t1_1.c1 = test_fun.c1)\r\n> )\r\n> INSERT INTO public.t1 (c1) SELECT test_fun.c1\r\n> FROM public.t2;\r\n> END;\r\n> \r\n> pg_get_function_sqlbody also returns similar result:\r\n> select proname, pg_get_function_sqlbody(oid) from pg_proc where\r\n> proname = 'test_fun';\r\n> proname | pg_get_function_sqlbody\r\n> ----------+-------------------------------------------\r\n> test_fun | BEGIN ATOMIC +\r\n> | WITH delete_t1 AS ( +\r\n> | DELETE FROM t1 +\r\n> | WHERE (t1_1.c1 = test_fun.c1) +\r\n> | ) +\r\n> | INSERT INTO t1 (c1) SELECT test_fun.c1+\r\n> | FROM t2; +\r\n> | END\r\n> (1 row)\r\n\r\nThanks for reproducing and demonstrating that this was more generally \r\napplicable. For context, this was initially discovered when testing the \r\nDDL replication patch[1] under that context.\r\n\r\n> I felt the problem here is with set_rtable_names function which\r\n> changes the relation name t1 to t1_1 while parsing the statement:\r\n> /*\r\n> * If the selected name isn't unique, append digits to make it so, and\r\n> * make a new hash entry for it once we've got a unique name. 
For a\r\n> * very long input name, we might have to truncate to stay within\r\n> * NAMEDATALEN.\r\n> */\r\n> \r\n> During the query generation we will set the table names before\r\n> generating each statement, in our case the table t1 would have been\r\n> added already to the hash table during the first insert statement\r\n> generation. Next time it will try to set the relation names again for\r\n> the next statement, i.e delete statement, if the entry with same name\r\n> already exists, it will change the name to t1_1 by appending a digit\r\n> to keep the hash entry unique.\r\n\r\nGood catch. Do you have thoughts on how we can adjust the naming logic \r\nto handle cases like this?\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/e947fa21-24b2-f922-375a-d4f763ef3e4b%40postgresql.org",
"msg_date": "Fri, 17 Feb 2023 09:06:09 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: The output sql generated by pg_dump for a create function refers\n to a modified table name"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Good catch. Do you have thoughts on how we can adjust the naming logic \n> to handle cases like this?\n\nI think it's perfectly fine that ruleutils decided to use different\naliases for the two different occurrences of \"t1\": the statement is\nquite confusing as written. The problem probably is that\nget_delete_query_def() has no idea that it's supposed to print the\nadjusted alias just after \"DELETE FROM tab\". UPDATE likely has same\nissue ... maybe INSERT too?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Feb 2023 10:09:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The output sql generated by pg_dump for a create function refers\n to a modified table name"
},
{
"msg_contents": "On 2/17/23 10:09 AM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> Good catch. Do you have thoughts on how we can adjust the naming logic\r\n>> to handle cases like this?\r\n> \r\n> I think it's perfectly fine that ruleutils decided to use different\r\n> aliases for the two different occurrences of \"t1\": the statement is\r\n> quite confusing as written.\r\n\r\nAgreed on that -- while it's harder to set up, I do prefer the original \r\nexample[1] to demonstrate this, as it shows the issue given it does not \r\nhave those multiple occurrences, at least not within the same context, i.e.:\r\n\r\nCREATE OR REPLACE FUNCTION public.calendar_manage(room_id int, \r\ncalendar_date date)\r\nRETURNS void\r\nLANGUAGE SQL\r\nBEGIN ATOMIC\r\n WITH delete_calendar AS (\r\n DELETE FROM calendar\r\n WHERE\r\n room_id = $1 AND\r\n calendar_date = $2\r\n )\r\n INSERT INTO calendar (room_id, status, calendar_date, calendar_range)\r\n SELECT $1, c.status, $2, c.calendar_range\r\n FROM calendar_generate_calendar($1, tstzrange($2, $2 + 1)) c;\r\nEND;\r\n\r\nthe table prefixes on the attributes within the DELETE statement were \r\nultimately mangled:\r\n\r\nWITH delete_calendar AS (\r\n DELETE FROM public.calendar\r\n WHERE ((calendar_1.room_id OPERATOR(pg_catalog.=)\r\ncalendar_manage.room_id) AND (calendar_1.calendar_date\r\nOPERATOR(pg_catalog.=) calendar_manage.calendar_date))\r\n)\r\nINSERT INTO public.calendar (room_id, status, calendar_date,\r\ncalendar_range)\r\n\r\n> The problem probably is that\r\n> get_delete_query_def() has no idea that it's supposed to print the\r\n> adjusted alias just after \"DELETE FROM tab\". UPDATE likely has same\r\n> issue ... maybe INSERT too?\r\n\r\nMaybe? I modified the function above to do an INSERT/UPDATE instead of a \r\nDELETE but I did not get any errors. 
However, if the logic is similar \r\nthere could be an issue there.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/e947fa21-24b2-f922-375a-d4f763ef3e4b%40postgresql.org",
"msg_date": "Fri, 17 Feb 2023 11:19:44 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: The output sql generated by pg_dump for a create function refers\n to a modified table name"
},
{
"msg_contents": "On 2/17/23 11:19 AM, Jonathan S. Katz wrote:\r\n> On 2/17/23 10:09 AM, Tom Lane wrote:\r\n\r\n> Agreed on that -- while it's harder to set up, I do prefer the original \r\n> example[1] to demonstrate this, as it shows the issue given it does not \r\n> have those multiple occurrences, at least not within the same context, \r\n> i.e.:\r\n> \r\n> CREATE OR REPLACE FUNCTION public.calendar_manage(room_id int, \r\n> calendar_date date)\r\n> RETURNS void\r\n> LANGUAGE SQL\r\n> BEGIN ATOMIC\r\n> WITH delete_calendar AS (\r\n> DELETE FROM calendar\r\n> WHERE\r\n> room_id = $1 AND\r\n> calendar_date = $2\r\n> )\r\n> INSERT INTO calendar (room_id, status, calendar_date, calendar_range)\r\n> SELECT $1, c.status, $2, c.calendar_range\r\n> FROM calendar_generate_calendar($1, tstzrange($2, $2 + 1)) c;\r\n> END;\r\n> \r\n>> The problem probably is that\r\n>> get_delete_query_def() has no idea that it's supposed to print the\r\n>> adjusted alias just after \"DELETE FROM tab\". UPDATE likely has same\r\n>> issue ... maybe INSERT too?\r\n> \r\n> Maybe? I modified the function above to do an INSERT/UPDATE instead of a \r\n> DELETE but I did not get any errors. However, if the logic is similar \r\n> there could be an issue there.\r\n\r\nI spoke too soon -- I was looking at the wrong logs. I did reproduce it \r\nwith UPDATE, but not INSERT. 
The example I used for UPDATE:\r\n\r\nCREATE OR REPLACE FUNCTION public.calendar_manage(room_id int, \r\ncalendar_date date)\r\nRETURNS void\r\nLANGUAGE SQL\r\nBEGIN ATOMIC\r\n WITH update_calendar AS (\r\n UPDATE calendar\r\n SET room_id = $1\r\n WHERE\r\n room_id = $1 AND\r\n calendar_date = $2\r\n )\r\n INSERT INTO calendar (room_id, status, calendar_date, calendar_range)\r\n SELECT $1, c.status, $2, c.calendar_range\r\n FROM calendar_generate_calendar($1, tstzrange($2, $2 + 1)) c;\r\nEND;\r\n\r\nwhich produced:\r\n\r\nWITH update_calendar AS (\r\n UPDATE public.calendar SET room_id = calendar_manage.room_id\r\n WHERE (\r\n (calendar_1.room_id OPERATOR(pg_catalog.=) \r\ncalendar_manage.room_id) AND (calendar_1.calendar_date \r\nOPERATOR(pg_catalog.=) calendar_manage.calendar_date))\r\n)\r\nINSERT INTO public.calendar (room_id, status, calendar_date, \r\ncalendar_range) SELECT calendar_manage.room_id,\r\n c.status,\r\n calendar_manage.calendar_date,\r\n c.calendar_range\r\nFROM public.calendar_generate_calendar(calendar_manage.room_id, \r\npg_catalog.tstzrange((calendar_manage.calendar_date)::timestamp with \r\ntime zone, ((calendar_manage.calendar_date OPERATOR(pg_catalog.+) \r\n1))::timestamp with time zone)) c(status, calendar_range);\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Fri, 17 Feb 2023 11:28:39 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: The output sql generated by pg_dump for a create function refers\n to a modified table name"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> I spoke too soon -- I was looking at the wrong logs. I did reproduce it \n> with UPDATE, but not INSERT.\n\nIt can be reproduced with INSERT too, on the same principle as the others:\nput the DML command inside a WITH, and give it an alias conflicting with\nthe outer query.\n\nBeing a lazy sort, I tried to collapse all three cases into a single\ntest case, and observed something I hadn't thought of: we disambiguate\naliases in a WITH query with respect to the outer query, but not with\nrespect to other WITH queries. This makes the example (see attached)\na bit more confusing than I would have hoped. However, the same sort\nof thing happens within other kinds of nested subqueries, so I think\nit's probably all right as-is. In any case, changing this aspect\nwould require a significantly bigger patch with more risk of unwanted\nside-effects.\n\nTo fix it, I pulled out the print-an-alias logic within\nget_from_clause_item and called that new function for\nINSERT/UPDATE/DELETE. This is a bit of overkill perhaps, because\nonly the RTE_RELATION case can be needed by these other callers, but\nit seemed like a sane refactorization.\n\nI've not tested, but I imagine this will need patched all the way back.\nThe rule case should be reachable in all supported versions.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 17 Feb 2023 13:18:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The output sql generated by pg_dump for a create function refers\n to a modified table name"
},
{
"msg_contents": "On 2/17/23 1:18 PM, Tom Lane wrote:\r\n\r\n> It can be reproduced with INSERT too, on the same principle as the others:\r\n> put the DML command inside a WITH, and give it an alias conflicting with\r\n> the outer query.\r\n\r\nAh, I see based on your example below. I did not alias the INSERT \r\nstatement in the way (and I don't know how common of a pattern it is to \r\no that).\r\n\r\n> Being a lazy sort, I tried to collapse all three cases into a single\r\n> test case, and observed something I hadn't thought of: we disambiguate\r\n> aliases in a WITH query with respect to the outer query, but not with\r\n> respect to other WITH queries. This makes the example (see attached)\r\n> a bit more confusing than I would have hoped. However, the same sort\r\n> of thing happens within other kinds of nested subqueries, so I think\r\n> it's probably all right as-is. In any case, changing this aspect\r\n> would require a significantly bigger patch with more risk of unwanted\r\n> side-effects.\r\n\r\nI think I agree. Most people should not be looking at the disambiguated \r\nstatements unless they are troubleshooting an issue (such as $SUBJECT). \r\nThe main goal is to disambiguate correctly.\r\n\r\n> To fix it, I pulled out the print-an-alias logic within\r\n> get_from_clause_item and called that new function for\r\n> INSERT/UPDATE/DELETE. This is a bit of overkill perhaps, because\r\n> only the RTE_RELATION case can be needed by these other callers, but\r\n> it seemed like a sane refactorization.\r\n> \r\n> I've not tested, but I imagine this will need patched all the way back.\r\n> The rule case should be reachable in all supported versions.\r\n\r\nI tested this against HEAD (+v69 of the DDL replication patch). 
My cases \r\nare now all passing.\r\n\r\nThe code looks good to me -- I don't know if moving that logic is \r\noverkill, but it makes the solution relatively clean.\r\n\r\nI didn't test in any back branches yet, but given this can generate an \r\ninvalid function body, it does likely need to be backpatched.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Fri, 17 Feb 2023 14:00:21 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: The output sql generated by pg_dump for a create function refers\n to a modified table name"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 2/17/23 1:18 PM, Tom Lane wrote:\n>> It can be reproduced with INSERT too, on the same principle as the others:\n>> put the DML command inside a WITH, and give it an alias conflicting with\n>> the outer query.\n\n> Ah, I see based on your example below. I did not alias the INSERT \n> statement in the way (and I don't know how common of a pattern it is to \n> o that).\n\nI suppose you can also make examples where the true name of the DML\ntarget table conflicts with an outer-query name, implying that we need\nto give it an alias even though the user wrote none.\n\n> I tested this against HEAD (+v69 of the DDL replication patch). My cases \n> are now all passing.\n> The code looks good to me -- I don't know if moving that logic is \n> overkill, but it makes the solution relatively clean.\n\nCool, thanks for testing and code-reading. I'll go see about\nback-patching.\n\n> I didn't test in any back branches yet, but given this can generate an \n> invalid function body, it does likely need to be backpatched.\n\nPresumably it can also cause dump/restore failures for rules that\ndo this sort of thing, though admittedly those wouldn't be common.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Feb 2023 15:46:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The output sql generated by pg_dump for a create function refers\n to a modified table name"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nRecently we faced a problem with one of our production clusters. Problem\nwas with pg_upgrade,\nthe reason was an invalid pg_dump of cluster schema. in pg_dump sql there\nwas strange records like\n\nREVOKE SELECT,INSERT,DELETE,UPDATE ON TABLE *relation* FROM \"144841\";\n\nbut there is no role \"144841\"\nWe did dig in, and it turns out that 144841 was OID of previously-deleted\nrole.\n\nI have reproduced issue using simple test extension yoext(1).\n\nSQL script:\n\ncreate role user1;\nALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT select ON TABLES TO user1;\ncreate extension yoext;\ndrop owned by user1;\nselect * from pg_init_privs where privtype = 'e';\ndrop role user1;\nselect * from pg_init_privs where privtype = 'e';\n\nresult of execution (executed on fest master from commit\n17feb6a566b77bf62ca453dec215adcc71755c20):\n\npsql (16devel)\nType \"help\" for help.\n\npostgres=#\npostgres=#\npostgres=# create role user1;\nCREATE ROLE\npostgres=# ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT select ON TABLES\nTO user1;\nALTER DEFAULT PRIVILEGES\npostgres=# create extension yobaext ;\nCREATE EXTENSION\npostgres=# drop owned by user1;\nDROP OWNED\npostgres=# select * from pg_init_privs where privtype = 'e';\n objoid | classoid | objsubid | privtype | initprivs\n--------+----------+----------+----------+---------------------------------------------------\n 16387 | 1259 | 0 | e |\n{reshke=arwdDxtm/reshke,user1=r/reshke,=r/reshke}\n(1 row)\n\npostgres=# drop role user1;\nDROP ROLE\npostgres=# select * from pg_init_privs where privtype = 'e';\n objoid | classoid | objsubid | privtype | initprivs\n--------+----------+----------+----------+---------------------------------------------------\n 16387 | 1259 | 0 | e |\n{reshke=arwdDxtm/reshke,16384=r/reshke,=r/reshke}\n(1 row)\n\n\nAs you can see, after drop role there is invalid records in pg_init_privs\nsystem relation. 
After this, pg_dump generates SQL statements, some of which\nare based on the contents of pg_init_privs, resulting in an invalid dump.\n\nPFA a fix.\n\nThe idea of the fix is simply to drop records from pg_init_privs while dropping\nthe role.\nRecords with grantor or grantee equal to the OID of the dropped role will be erased;\nafter that, pg_dump works OK.\n\nImplementation comment: I failed to find a proper way to allocate the ACL array, so\nI declared the acl.c-internal function `allocacl` in a header. This needs to be improved\nsomehow.\n\n[1] yoext https://github.com/reshke/yoext/",
"msg_date": "Fri, 17 Feb 2023 17:31:30 +0100",
"msg_from": "Kirill Reshke <reshke@double.cloud>",
"msg_from_op": true,
"msg_subject": "pg_init_privs corruption."
},
{
"msg_contents": "Kirill Reshke <reshke@double.cloud> writes:\n> As you can see, after drop role there is invalid records in pg_init_privs\n> system relation. After this, pg_dump generate sql statements, some of which\n> are based on content of pg_init_privs, resulting in invalid dump.\n\nUgh.\n\n> PFA fix.\n\nI don't think this is anywhere near usable as-is, because it only\naccounts for pg_init_privs entries in the current database. We need\nto handle these records in the DROP OWNED BY mechanism instead, and\nalso ensure there are shared-dependency entries for them so that the\nrole can't be dropped until the entries are gone in all DBs. The real\nproblem may be less that DROP is doing the wrong thing, and more that\ncreation of the pg_init_privs entries neglects to record a dependency.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Feb 2023 13:43:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_init_privs corruption."
},
{
"msg_contents": "> Kirill Reshke <reshke@double.cloud> writes:\n> > As you can see, after drop role there is invalid records in\n> > pg_init_privs system relation. After this, pg_dump generate sql\n> > statements, some of which are based on content of pg_init_privs, resulting\n> in invalid dump.\n> \n\nThis is as far as I can see the same case as what I reported a few years ago here: https://www.postgresql.org/message-id/flat/1574068566573.13088%40Optiver.com#488bd647ce6f5d2c92764673a7c58289\nThere was a discussion with some options, but no fix back then. \n\n-Floris\n\n\n\n",
"msg_date": "Fri, 17 Feb 2023 19:23:45 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_init_privs corruption."
},
{
"msg_contents": "Floris Van Nee <florisvannee@Optiver.com> writes:\n> This is as far as I can see the same case as what I reported a few years ago here: https://www.postgresql.org/message-id/flat/1574068566573.13088%40Optiver.com#488bd647ce6f5d2c92764673a7c58289\n> There was a discussion with some options, but no fix back then. \n\nHmm, so Stephen was opining that the extension's objects shouldn't\nhave gotten these privs attached in the first place. I'm not\nquite convinced about that one way or the other, but if you buy it\nthen maybe this situation is unreachable once we fix that. I'm\nnot sure though. It's still clear that we are making ACL entries\nthat aren't reflected in pg_shdepend, and that seems bad.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Feb 2023 15:37:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_init_privs corruption."
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Floris Van Nee <florisvannee@Optiver.com> writes:\n> > This is as far as I can see the same case as what I reported a few years ago here: https://www.postgresql.org/message-id/flat/1574068566573.13088%40Optiver.com#488bd647ce6f5d2c92764673a7c58289\n> > There was a discussion with some options, but no fix back then. \n> \n> Hmm, so Stephen was opining that the extension's objects shouldn't\n> have gotten these privs attached in the first place. I'm not\n> quite convinced about that one way or the other, but if you buy it\n> then maybe this situation is unreachable once we fix that. I'm\n> not sure though. It's still clear that we are making ACL entries\n> that aren't reflected in pg_shdepend, and that seems bad.\n\nWould be great to get some other thoughts on this then, perhaps, as it's\nclearly not good as-is either.\n\nI mentioned in that other thread that recording the dependency should be\ndone but that it's an independent issue and I do still generally feel\nthat way, so I guess we're all mostly in agreement that the dependency\nshould get recorded and perhaps we can just go do that.\n\nI don't see any cases of it currently, but I do still worry, as I also\nmentioned in the prior thread, that by allowing DEFAULT PRIVILEGES to\nimpact extension objects that we could end up with a security issue.\nSpecifically, if a user sets up their schema like:\n\nALTER DEFAULT PRIVILEGES ... GRANT EXECUTE ON FUNCTIONS TO me;\n\nand then creates an extension which is marked as 'trusted':\n\nCREATE EXTENSION abc;\n\nwhere that extension manages function access through the GRANT system\n(as many do, eg: pg_stat_statements which does:\n\nREVOKE ALL ON FUNCTION pg_stat_statements_reset() FROM PUBLIC;\n)\n\nThat the user then will have EXECUTE rights on that function which they\nreally shouldn't have.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 20 Feb 2023 10:15:23 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_init_privs corruption."
},
{
"msg_contents": "On Fri, 17 Feb 2023 at 15:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Hmm, so Stephen was opining that the extension's objects shouldn't\n> have gotten these privs attached in the first place. I'm not\n> quite convinced about that one way or the other, but if you buy it\n> then maybe this situation is unreachable once we fix that.\n\nWell pg_dump might still have to deal with it even if it's unreachable\nin new databases (or rather schemas with extensions newly added... it\nmight be hard to explain what cases are affected).\n\nAlternately it'll be a note at the top of every point release pointing\nat a note explaining how to run a script to fix your schemas.\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 11 Apr 2023 21:48:12 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: pg_init_privs corruption."
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 3:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Floris Van Nee <florisvannee@Optiver.com> writes:\n> > This is as far as I can see the same case as what I reported a few years ago here: https://www.postgresql.org/message-id/flat/1574068566573.13088%40Optiver.com#488bd647ce6f5d2c92764673a7c58289\n> > There was a discussion with some options, but no fix back then.\n>\n> Hmm, so Stephen was opining that the extension's objects shouldn't\n> have gotten these privs attached in the first place. I'm not\n> quite convinced about that one way or the other, but if you buy it\n> then maybe this situation is unreachable once we fix that. I'm\n> not sure though. It's still clear that we are making ACL entries\n> that aren't reflected in pg_shdepend, and that seems bad.\n\nYep. I think you have the right idea how to fix this. Making extension\ncreation somehow not subject to the same rules about default\nprivileges as everything else doesn't seem like either a good idea or\na real fix for this problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 13 Apr 2023 12:15:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_init_privs corruption."
}
] |
[
{
"msg_contents": "When adding a check to pg_upgrade a while back I noticed in a profile that the\ncluster compatibility check phase spend a lot of time in connectToServer. Some\nof this can be attributed to data type checks which each run serially in turn\nconnecting to each database to run the check, and this seemed like a place\nwhere we can do better.\n\nThe attached patch moves the checks from individual functions, which each loops\nover all databases, into a struct which is consumed by a single umbrella check\nwhere all data type queries are executed against a database using the same\nconnection. This way we can amortize the connectToServer overhead across more\naccesses to the database.\n\nIn the trivial case, a single database, I don't see a reduction of performance\nover the current approach. In a cluster with 100 (empty) databases there is a\n~15% reduction in time to run a --check pass. While it won't move the earth in\nterms of wallclock time, consuming less resources on the old cluster allowing\n--check to be cheaper might be the bigger win.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 17 Feb 2023 22:44:49 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 10:44:49PM +0100, Daniel Gustafsson wrote:\n> In the trivial case, a single database, I don't see a reduction of performance\n> over the current approach. In a cluster with 100 (empty) databases there is a\n> ~15% reduction in time to run a --check pass. While it won't move the earth in\n> terms of wallclock time, consuming less resources on the old cluster allowing\n> --check to be cheaper might be the bigger win.\n\nNice! This has actually been on my list of things to look into, so I\nintend to help review the patch. In any case, +1 for making pg_upgrade\nfaster.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 17 Feb 2023 15:04:51 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 10:44:49PM +0100, Daniel Gustafsson wrote:\n> When adding a check to pg_upgrade a while back I noticed in a profile that the\n> cluster compatibility check phase spend a lot of time in connectToServer. Some\n> of this can be attributed to data type checks which each run serially in turn\n> connecting to each database to run the check, and this seemed like a place\n> where we can do better.\n> \n> The attached patch moves the checks from individual functions, which each loops\n> over all databases, into a struct which is consumed by a single umbrella check\n> where all data type queries are executed against a database using the same\n> connection. This way we can amortize the connectToServer overhead across more\n> accesses to the database.\n\nThis change consolidates all the data type checks, so instead of 7 separate\nloops through all the databases, there is just one. However, I wonder if\nwe are leaving too much on the table, as there are a number of other\nfunctions that also loop over all the databases:\n\n\t* get_loadable_libraries\n\t* get_db_and_rel_infos\n\t* report_extension_updates\n\t* old_9_6_invalidate_hash_indexes\n\t* check_for_isn_and_int8_passing_mismatch\n\t* check_for_user_defined_postfix_ops\n\t* check_for_incompatible_polymorphics\n\t* check_for_tables_with_oids\n\t* check_for_user_defined_encoding_conversions\n\nI suspect consolidating get_loadable_libraries, get_db_and_rel_infos, and\nreport_extension_updates would be prohibitively complicated and not worth\nthe effort. old_9_6_invalidate_hash_indexes is only needed for unsupported\nversions, so that might not be worth consolidating.\ncheck_for_isn_and_int8_passing_mismatch only loops through all databases\nwhen float8_pass_by_value in the control data differs, so that might not be\nworth it, either. The last 4 are for supported versions and, from a very\nquick glance, seem possible to consolidate. 
That would bring us to a total\nof 11 separate loops that we could consolidate into one. However, the data\ntype checks seem to follow a nice pattern, so perhaps this is easier said\nthan done.\n\nIIUC with the patch, pg_upgrade will immediately fail as soon as a single\ncheck in a database fails. I believe this differs from the current\nbehavior where all matches for a given check in the cluster are logged\nbefore failing. I wonder if it'd be better to perform all of the data type\nchecks in all databases before failing so that all of the violations are\nreported. Else, users would have to run pg_upgrade, fix a violation, run\npg_upgrade again, fix another one, etc.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 17 Feb 2023 21:46:13 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 10:44:49PM +0100, Daniel Gustafsson wrote:\n> When adding a check to pg_upgrade a while back I noticed in a profile that the\n> cluster compatibility check phase spend a lot of time in connectToServer. Some\n> of this can be attributed to data type checks which each run serially in turn\n> connecting to each database to run the check, and this seemed like a place\n> where we can do better.\n\n src/bin/pg_upgrade/check.c | 371 +++++++++++++++---------------\n src/bin/pg_upgrade/pg_upgrade.h | 28 ++-\n src/bin/pg_upgrade/version.c | 394 ++++++++++++++------------------\n 3 files changed, 373 insertions(+), 420 deletions(-)\n\nAnd saves 50 LOC.\n\nThe stated goal of the patch is to reduce overhead. But it only updates\na couple functions, and there are (I think) nine functions which loop\naround all DBs. If you want to reduce the overhead, I assumed you'd\ncache the DB connection for all tests ... but then I tried it, and first\nran into max_connections, and then ran into EMFILE. Which is probably\nenough to kill my idea.\n\nBut maybe the existing patch could be phrased in terms of moving all the\nper-db checks from functions to data structures (which has its own\nmerits). Then, there could be a single loop around DBs which executes\nall the functions. The test runner can also test the major version and\nhandle the textfile output.\n\nHowever (as Nathan mentioned) what's currently done shows *all* the\nproblems of a given type - if there were 9 DBs with 99 relations with\nOIDs, it'd show all of them at once. It'd be a big step backwards to\nonly show problems for the first problematic DB.\n\nBut maybe that's an another opportunity to do better. Right now, if I\nrun pg_upgrade, it'll show all the failing objects, but only for first\ncheck that fails. After fixing them, it might tell me about a 2nd\nfailing check. 
I've never run into multiple types of failing checks,\nbut I do know that needing to re-run pg-upgrade is annoying (see\n3c0471b5f).\n\nYou talked about improving the two data types tests, which aren't\nconditional on a maximum server version. The minimal improvement you'll\nget is when only those two checks are run (like on a developer upgrade\nv16=>v16). But when more checks are run during a production upgrade\nlike v13=>v16, you'd see a larger gain.\n\nI fooled around with that idea in the attached patch. I have no\nparticular interest in optimizing --check for large numbers of DBs, so\nI'm not planning to pursue it further, but maybe it'll be useful to you.\n\nAbout your original patch:\n\n+static DataTypesUsageChecks data_types_usage_checks[] = {\n+\t/*\n+\t * Look for composite types that were made during initdb *or* belong to\n+\t * information_schema; that's important in case information_schema was\n+\t * dropped and reloaded.\n+\t *\n+\t * The cutoff OID here should match the source cluster's value of\n+\t * FirstNormalObjectId. We hardcode it rather than using that C #define\n+\t * because, if that #define is ever changed, our own version's value is\n+\t * NOT what to use. 
Eventually we may need a test on the source cluster's\n+\t * version to select the correct value.\n+\t */\n+\t{\"Checking for system-defined composite types in user tables\",\n+\t \"tables_using_composite.txt\",\n\nI think this might be cleaner using \"named initializer\" struct\ninitialization, rather than a comma-separated list (whatever that's\ncalled).\n\nMaybe instead of putting all checks into an array of\nDataTypesUsageChecks, they should be defined in separate arrays, and\nthen an array defined with the list of checks?\n\n+\t\t\t * If the check failed, terminate the umbrella status and print\n+\t\t\t * the specific status line of the check to indicate which it was\n+\t\t\t * before terminating with the detailed error message.\n+\t\t\t */\n+\t\t\tif (found)\n+\t\t\t{\n+\t\t\t\tPQfinish(conn);\n \n-\tbase_query = psprintf(\"SELECT '%s'::pg_catalog.regtype AS oid\",\n-\t\t\t\t\t\t type_name);\n+\t\t\t\treport_status(PG_REPORT, \"failed\");\n+\t\t\t\tprep_status(\"%s\", cur_check->status);\n+\t\t\t\tpg_log(PG_REPORT, \"fatal\");\n+\t\t\t\tpg_fatal(\"%s %s\", cur_check->fatal_check, output_path);\n+\t\t\t}\n\nI think this loses the message localization/translation that currently\nexists. It could be written like prep_status(cur_check->status) or\nprep_status(\"%s\", _(cur_check->status)). And _(cur_check->fatal_check). \n\n-- \nJustin",
"msg_date": "Sat, 18 Feb 2023 14:42:21 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 18 Feb 2023, at 06:46, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n> On Fri, Feb 17, 2023 at 10:44:49PM +0100, Daniel Gustafsson wrote:\n>> When adding a check to pg_upgrade a while back I noticed in a profile that the\n>> cluster compatibility check phase spend a lot of time in connectToServer. Some\n>> of this can be attributed to data type checks which each run serially in turn\n>> connecting to each database to run the check, and this seemed like a place\n>> where we can do better.\n>> \n>> The attached patch moves the checks from individual functions, which each loops\n>> over all databases, into a struct which is consumed by a single umbrella check\n>> where all data type queries are executed against a database using the same\n>> connection. This way we can amortize the connectToServer overhead across more\n>> accesses to the database.\n> \n> This change consolidates all the data type checks, so instead of 7 separate\n> loops through all the databases, there is just one. However, I wonder if\n> we are leaving too much on the table, as there are a number of other\n> functions that also loop over all the databases:\n> \n> \t* get_loadable_libraries\n> \t* get_db_and_rel_infos\n> \t* report_extension_updates\n> \t* old_9_6_invalidate_hash_indexes\n> \t* check_for_isn_and_int8_passing_mismatch\n> \t* check_for_user_defined_postfix_ops\n> \t* check_for_incompatible_polymorphics\n> \t* check_for_tables_with_oids\n> \t* check_for_user_defined_encoding_conversions\n> \n> I suspect consolidating get_loadable_libraries, get_db_and_rel_infos, and\n> report_extension_updates would be prohibitively complicated and not worth\n> the effort.\n\nAgreed, the added complexity of the code seems hard to justify unless there are\nactual reports of problems.\n\nI did experiment with reducing the allocations of namespaces and tablespaces\nwith a hashtable, see the attached WIP diff. 
There is no measurable difference\nin speed, but a synthetic benchmark where allocations cannot be reused shows\nreduced memory pressure. This might help on very large schemas, but it's not\nworth pursuing IMO.\n\n> old_9_6_invalidate_hash_indexes is only needed for unsupported\n> versions, so that might not be worth consolidating.\n> check_for_isn_and_int8_passing_mismatch only loops through all databases\n> when float8_pass_by_value in the control data differs, so that might not be\n> worth it, either. \n\nYeah, these two aren't all that interesting to spend cycles on IMO.\n\n> The last 4 are for supported versions and, from a very\n> quick glance, seem possible to consolidate. That would bring us to a total\n> of 11 separate loops that we could consolidate into one. However, the data\n> type checks seem to follow a nice pattern, so perhaps this is easier said\n> than done.\n\nThere is that, refactoring the data type checks leads to removal of duplicated\ncode and a slight performance improvement. Refactoring the other checks to\nreduce overhead would be an interesting thing to look at, but this point in the\nv16 cycle might not be ideal for that.\n\n> IIUC with the patch, pg_upgrade will immediately fail as soon as a single\n> check in a database fails. I believe this differs from the current\n> behavior where all matches for a given check in the cluster are logged\n> before failing.\n\nYeah, that's wrong. Fixed.\n\n> I wonder if it'd be better to perform all of the data type\n> checks in all databases before failing so that all of the violations are\n> reported. Else, users would have to run pg_upgrade, fix a violation, run\n> pg_upgrade again, fix another one, etc.\n\nI think that's better, and have changed the patch to do it that way.\n\nOne change this brings is that check.c contains version specific checks in the\nstruct. 
Previously these were mostly contained in version.c (some, like the\n9.4 jsonb check was in check.c) which maintained some level of separation.\nSplitting the array init is of course one option but it also seems a tad messy.\nNot sure what's best, but for now I've documented it in the array comment at\nleast.\n\nThis version also moves the main data types check to check.c, renames some\nmembers in the struct, moves to named initializers (as commented on by Justin\ndownthread), and adds some more polish here and there.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 22 Feb 2023 10:37:35 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 10:37:35AM +0100, Daniel Gustafsson wrote:\n>> On 18 Feb 2023, at 06:46, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> The last 4 are for supported versions and, from a very\n>> quick glance, seem possible to consolidate. That would bring us to a total\n>> of 11 separate loops that we could consolidate into one. However, the data\n>> type checks seem to follow a nice pattern, so perhaps this is easier said\n>> than done.\n> \n> There is that, refactoring the data type checks leads to removal of duplicated\n> code and a slight performance improvement. Refactoring the other checks to\n> reduce overhead would be an interesting thing to look at, but this point in the\n> v16 cycle might not be ideal for that.\n\nMakes sense.\n\n>> I wonder if it'd be better to perform all of the data type\n>> checks in all databases before failing so that all of the violations are\n>> reported. Else, users would have to run pg_upgrade, fix a violation, run\n>> pg_upgrade again, fix another one, etc.\n> \n> I think that's better, and have changed the patch to do it that way.\n\nThanks. This seems to work as intended. One thing I noticed is that the\n\"failed check\" log is only printed once, even if multiple data type checks\nfailed. I believe this is because this message uses PG_STATUS. If I\nchange it to PG_REPORT, all of the \"failed check\" messages appear. TBH I'm\nnot sure we need this message at all since a more detailed explanation will\nbe printed afterwards. If we do keep it around, I think it should be\nindented so that it looks more like this:\n\n\tChecking for data type usage checking all databases \n\t failed check: incompatible aclitem data type in user tables\n\t failed check: reg* data types in user tables\n\n> One change this brings is that check.c contains version specific checks in the\n> struct. 
Previously these were mostly contained in version.c (some, like the\n> 9.4 jsonb check was in check.c) which maintained some level of separation.\n> Splitting the array init is of course one option but it also seems a tad messy.\n> Not sure what's best, but for now I've documented it in the array comment at\n> least.\n\nHm. We could move check_for_aclitem_data_type_usage() and\ncheck_for_jsonb_9_4_usage() to version.c since those are only used for\ndetermining whether the check applies now. Otherwise, IMO things are in\nroughly the right place. I don't think it's necessary to split the array.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 22 Feb 2023 11:20:06 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 22 Feb 2023, at 20:20, Nathan Bossart <nathandbossart@gmail.com> wrote:\n\n> One thing I noticed is that the\n> \"failed check\" log is only printed once, even if multiple data type checks\n> failed. I believe this is because this message uses PG_STATUS. If I\n> change it to PG_REPORT, all of the \"failed check\" messages appear. TBH I'm\n> not sure we need this message at all since a more detailed explanation will\n> be printed afterwards. If we do keep it around, I think it should be\n> indented so that it looks more like this:\n> \n> \tChecking for data type usage checking all databases \n> \t failed check: incompatible aclitem data type in user tables\n> \t failed check: reg* data types in user tables\n\nThats a good point, that's better. I think it makes sense to keep it around.\n\n>> One change this brings is that check.c contains version specific checks in the\n>> struct. Previously these were mostly contained in version.c (some, like the\n>> 9.4 jsonb check was in check.c) which maintained some level of separation.\n>> Splitting the array init is of course one option but it also seems a tad messy.\n>> Not sure what's best, but for now I've documented it in the array comment at\n>> least.\n> \n> Hm. We could move check_for_aclitem_data_type_usage() and\n> check_for_jsonb_9_4_usage() to version.c since those are only used for\n> determining whether the check applies now. Otherwise, IMO things are in\n> roughly the right place. I don't think it's necessary to split the array.\n\nWill do, thanks.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 23 Feb 2023 15:12:21 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 23 Feb 2023, at 15:12, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 22 Feb 2023, at 20:20, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n>> One thing I noticed is that the\n>> \"failed check\" log is only printed once, even if multiple data type checks\n>> failed. I believe this is because this message uses PG_STATUS. If I\n>> change it to PG_REPORT, all of the \"failed check\" messages appear. TBH I'm\n>> not sure we need this message at all since a more detailed explanation will\n>> be printed afterwards. If we do keep it around, I think it should be\n>> indented so that it looks more like this:\n>> \n>> \tChecking for data type usage checking all databases \n>> \t failed check: incompatible aclitem data type in user tables\n>> \t failed check: reg* data types in user tables\n> \n> Thats a good point, that's better. I think it makes sense to keep it around.\n> \n>>> One change this brings is that check.c contains version specific checks in the\n>>> struct. Previously these were mostly contained in version.c (some, like the\n>>> 9.4 jsonb check was in check.c) which maintained some level of separation.\n>>> Splitting the array init is of course one option but it also seems a tad messy.\n>>> Not sure what's best, but for now I've documented it in the array comment at\n>>> least.\n>> \n>> Hm. We could move check_for_aclitem_data_type_usage() and\n>> check_for_jsonb_9_4_usage() to version.c since those are only used for\n>> determining whether the check applies now. Otherwise, IMO things are in\n>> roughly the right place. I don't think it's necessary to split the array.\n> \n> Will do, thanks.\n\nThe attached v3 is a rebase to handle conflicts and with the above comments\nadressed.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 13 Mar 2023 15:10:58 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 03:10:58PM +0100, Daniel Gustafsson wrote:\n> The attached v3 is a rebase to handle conflicts and with the above comments\n> adressed.\n\nThanks for the new version of the patch.\n\nI noticed that git-am complained when I applied the patch:\n\n Applying: pg_upgrade: run all data type checks per connection\n .git/rebase-apply/patch:1023: new blank line at EOF.\n +\n warning: 1 line adds whitespace errors.\n\n+\t\t\t\tfor (int rowno = 0; rowno < ntups; rowno++)\n+\t\t\t\t{\n+\t\t\t\t\tfound = true;\n\nIt looks like \"found\" is set unconditionally a few lines above, so I think\nthis is redundant.\n\nAlso, I think it would be worth breaking check_for_data_types_usage() into\na few separate functions (or doing some other similar refactoring) to\nimprove readability. At this point, the function is quite lengthy, and I\ncount 6 levels of indentation at some lines.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 13 Mar 2023 11:21:47 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "I put together a rebased version of the patch for cfbot.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 4 Jul 2023 12:08:39 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 4 Jul 2023, at 21:08, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n> I put together a rebased version of the patch for cfbot.\n\nThanks for doing that, much appreciated! I was busy looking at other peoples\npatches and hadn't gotten to my own yet =)\n\n> On 13 Mar 2023, at 19:21, Nathan Bossart <nathandbossart@gmail.com> wrote:\n\n> I noticed that git-am complained when I applied the patch:\n> \n> Applying: pg_upgrade: run all data type checks per connection\n> .git/rebase-apply/patch:1023: new blank line at EOF.\n> +\n> warning: 1 line adds whitespace errors.\n\nFixed.\n\n> +\t\t\t\tfor (int rowno = 0; rowno < ntups; rowno++)\n> +\t\t\t\t{\n> +\t\t\t\t\tfound = true;\n> \n> It looks like \"found\" is set unconditionally a few lines above, so I think\n> this is redundant.\n\nCorrect, this must've been a leftover from a previous coding that changed.\nRemoved.\n\n> Also, I think it would be worth breaking check_for_data_types_usage() into\n> a few separate functions (or doing some other similar refactoring) to\n> improve readability. At this point, the function is quite lengthy, and I\n> count 6 levels of indentation at some lines.\n\n\nIt it is pretty big for sure, but it's also IMHO not terribly complicated as\nit's not really performing any hard to follow logic.\n\nI have no issues refactoring it, but trying my hand at I was only making (what\nI consider) less readable code by having to jump around so I consider it a\nfailure. If you have any suggestions, I would be more than happy to review and\nincorporate those though.\n\nAttached is a v5 with the above fixes and a pgindenting to fix up a few runaway\ncomments and indentations.\n\n--\nDaniel Gustafsson",
"msg_date": "Thu, 6 Jul 2023 17:58:33 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "Thanks for the new patch.\n\nOn Thu, Jul 06, 2023 at 05:58:33PM +0200, Daniel Gustafsson wrote:\n>> On 4 Jul 2023, at 21:08, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Also, I think it would be worth breaking check_for_data_types_usage() into\n>> a few separate functions (or doing some other similar refactoring) to\n>> improve readability. At this point, the function is quite lengthy, and I\n>> count 6 levels of indentation at some lines.\n> \n> \n> It it is pretty big for sure, but it's also IMHO not terribly complicated as\n> it's not really performing any hard to follow logic.\n> \n> I have no issues refactoring it, but trying my hand at I was only making (what\n> I consider) less readable code by having to jump around so I consider it a\n> failure. If you have any suggestions, I would be more than happy to review and\n> incorporate those though.\n\nI don't have a strong opinion about this.\n\n+\t\t\t\tfor (int rowno = 0; rowno < ntups; rowno++)\n+\t\t\t\t{\n+\t\t\t\t\tif (script == NULL && (script = fopen_priv(output_path, \"a\")) == NULL)\n+\t\t\t\t\t\tpg_fatal(\"could not open file \\\"%s\\\": %s\",\n+\t\t\t\t\t\t\t\t output_path,\n+\t\t\t\t\t\t\t\t strerror(errno));\n+\t\t\t\t\tif (!db_used)\n+\t\t\t\t\t{\n+\t\t\t\t\t\tfprintf(script, \"In database: %s\\n\", active_db->db_name);\n+\t\t\t\t\t\tdb_used = true;\n+\t\t\t\t\t}\n\nSince \"script\" will be NULL and \"db_used\" will be false in the first\niteration of the loop, couldn't we move this stuff to before the loop?\n\n+\t\t\t\t\tfprintf(script, \" %s.%s.%s\\n\",\n+\t\t\t\t\t\t\tPQgetvalue(res, rowno, i_nspname),\n+\t\t\t\t\t\t\tPQgetvalue(res, rowno, i_relname),\n+\t\t\t\t\t\t\tPQgetvalue(res, rowno, i_attname));\n\nnitpick: І think the current code has two spaces at the beginning of this\nformat string. 
Did you mean to remove one of them?\n\n+\t\t\t\tif (script)\n+\t\t\t\t{\n+\t\t\t\t\tfclose(script);\n+\t\t\t\t\tscript = NULL;\n+\t\t\t\t}\n\nWon't \"script\" always be initialized here? If I'm following this code\ncorrectly, I think everything except the fclose() can be removed.\n\n+\t\t\tcur_check++;\n\nI think this is unnecessary since we assign \"cur_check\" at the beginning of\nevery loop iteration. I see two of these.\n\n+static int\tn_data_types_usage_checks = 7;\n\nCan we determine this programmatically so that folks don't need to remember\nto update it?\n\n+\t/* Prepare an array to store the results of checks in */\n+\tresults = pg_malloc(sizeof(bool) * n_data_types_usage_checks);\n+\tmemset(results, true, sizeof(*results));\n\nIMHO it's a little strange that this is initialized to all \"true\", only\nbecause I think most other Postgres code does the opposite.\n\n+bool\n+check_for_aclitem_data_type_usage(ClusterInfo *cluster)\n\nDo you think we should rename these functions to something like\n\"should_check_for_*\"? They don't actually do the check, they just tell you\nwhether you should based on the version. In fact, I wonder if we could\njust add the versions directly to data_types_usage_checks so that we don't\nneed the separate hook functions.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 8 Jul 2023 14:43:54 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 8 Jul 2023, at 23:43, Nathan Bossart <nathandbossart@gmail.com> wrote:\n\nThanks for reviewing!\n\n> Since \"script\" will be NULL and \"db_used\" will be false in the first\n> iteration of the loop, couldn't we move this stuff to before the loop?\n\nWe could. It's done this way to match how all the other checks are performing\nthe inner loop for consistency. I think being consistent is better than micro\noptimizing in non-hot codepaths even though it adds some redundancy.\n\n> nitpick: І think the current code has two spaces at the beginning of this\n> format string. Did you mean to remove one of them?\n\nNice catch, I did not. Fixed.\n\n> Won't \"script\" always be initialized here? If I'm following this code\n> correctly, I think everything except the fclose() can be removed.\n\nYou are right that this check is superfluous. This is again an artifact of\nmodelling the code around how the other checks work for consistency. At least\nI think that's a good characteristic of the code.\n\n> I think this is unnecessary since we assign \"cur_check\" at the beginning of\n> every loop iteration. I see two of these.\n\nRight, this is a pointless leftover from a previous version which used a\nwhile() loop, and I had missed removing them. Fixed.\n\n> +static int\tn_data_types_usage_checks = 7;\n> \n> Can we determine this programmatically so that folks don't need to remember\n> to update it?\n\nFair point, I've added a counter loop to the beginning of the check function to\ncalculate it.\n\n> IMHO it's a little strange that this is initialized to all \"true\", only\n> because I think most other Postgres code does the opposite.\n\nAgreed, but it made for a less contrived codepath in knowing when an error has\nbeen seen already, to avoid duplicate error output, so I think it's worth it.\n\n> Do you think we should rename these functions to something like\n> \"should_check_for_*\"? 
They don't actually do the check, they just tell you\n> whether you should based on the version.\n\nI've been pondering that too, and did a rename now along with moving them all\nto a single place as well as changing the comments to make it clearer.\n\n> In fact, I wonder if we could just add the versions directly to\n> data_types_usage_checks so that we don't need the separate hook functions.\n\nWe could, but it would be sort of contrived I think since some check <= and\nsome == while some check the catversion as well (and new ones may have other\nvariants). I think this is the least paint-ourselves-in-a-corner version, if we\nfeel it's needlessly complicated and no other variants are added we can revisit\nthis.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 10 Jul 2023 16:43:23 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 04:43:23PM +0200, Daniel Gustafsson wrote:\n>> On 8 Jul 2023, at 23:43, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Since \"script\" will be NULL and \"db_used\" will be false in the first\n>> iteration of the loop, couldn't we move this stuff to before the loop?\n> \n> We could. It's done this way to match how all the other checks are performing\n> the inner loop for consistency. I think being consistent is better than micro\n> optimizing in non-hot codepaths even though it adds some redundancy.\n>\n> [ ... ] \n> \n>> Won't \"script\" always be initialized here? If I'm following this code\n>> correctly, I think everything except the fclose() can be removed.\n> \n> You are right that this check is superfluous. This is again an artifact of\n> modelling the code around how the other checks work for consistency. At least\n> I think that's a good characteristic of the code.\n\nI can't say I agree with this, but I'm not going to hold up the patch over\nit. FWIW I was looking at this more from a code simplification/readability\nstandpoint.\n\n>> +static int\tn_data_types_usage_checks = 7;\n>> \n>> Can we determine this programmatically so that folks don't need to remember\n>> to update it?\n> \n> Fair point, I've added a counter loop to the beginning of the check function to\n> calculate it.\n\n+\t/* Gather number of checks to perform */\n+\twhile (tmp->status != NULL)\n+\t\tn_data_types_usage_checks++;\n\nI think we need to tmp++ somewhere here.\n\n>> In fact, I wonder if we could just add the versions directly to\n>> data_types_usage_checks so that we don't need the separate hook functions.\n> \n> We could, but it would be sort of contrived I think since some check <= and\n> some == while some check the catversion as well (and new ones may have other\n> variants. 
I think this is the least paint-ourselves-in-a-corner version, if we\n> feel it's needlessly complicated and no other variants are added we can revisit\n> this.\n\nMakes sense.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 10 Jul 2023 16:09:07 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 11 Jul 2023, at 01:09, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Mon, Jul 10, 2023 at 04:43:23PM +0200, Daniel Gustafsson wrote:\n\n>>> +static int\tn_data_types_usage_checks = 7;\n>>> \n>>> Can we determine this programmatically so that folks don't need to remember\n>>> to update it?\n>> \n>> Fair point, I've added a counter loop to the beginning of the check function to\n>> calculate it.\n> \n> +\t/* Gather number of checks to perform */\n> +\twhile (tmp->status != NULL)\n> +\t\tn_data_types_usage_checks++;\n> \n> I think we need to tmp++ somewhere here.\n\nYuk, yes, will fix when caffeinated. Thanks.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 11 Jul 2023 01:26:33 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 11 Jul 2023, at 01:26, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 11 Jul 2023, at 01:09, Nathan Bossart <nathandbossart@gmail.com> wrote:\n\n>> I think we need to tmp++ somewhere here.\n> \n> Yuk, yes, will fix when caffeinated. Thanks.\n\nI did have coffee before now, but only found time to actually address this now\nso here is a v7 with just that change and a fresh rebase.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 12 Jul 2023 00:43:14 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 12:43:14AM +0200, Daniel Gustafsson wrote:\n> I did have coffee before now, but only found time to actually address this now\n> so here is a v7 with just that change and a fresh rebase.\n\nThanks. I think the patch is in decent shape.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 11 Jul 2023 16:36:06 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 12 Jul 2023, at 01:36, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n> On Wed, Jul 12, 2023 at 12:43:14AM +0200, Daniel Gustafsson wrote:\n>> I did have coffee before now, but only found time to actually address this now\n>> so here is a v7 with just that change and a fresh rebase.\n> \n> Thanks. I think the patch is in decent shape.\n\nDue to ENOTENOUGHTIME it bitrotted a bit, so here is a v8 rebase which I really\nhope to close in this CF.\n\n--\nDaniel Gustafsson",
"msg_date": "Thu, 31 Aug 2023 23:34:53 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "On 31.08.23 23:34, Daniel Gustafsson wrote:\n>> On 12 Jul 2023, at 01:36, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>>\n>> On Wed, Jul 12, 2023 at 12:43:14AM +0200, Daniel Gustafsson wrote:\n>>> I did have coffee before now, but only found time to actually address this now\n>>> so here is a v7 with just that change and a fresh rebase.\n>>\n>> Thanks. I think the patch is in decent shape.\n> \n> Due to ENOTENOUGHTIME it bitrotted a bit, so here is a v8 rebase which I really\n> hope to close in this CF.\n\nThe alignment of this output looks a bit funny:\n\n...\nChecking for prepared transactions ok\nChecking for contrib/isn with bigint-passing mismatch ok\nChecking for data type usage checking all databases\nok\nChecking for presence of required libraries ok\nChecking database user is the install user ok\n...\n\n\nAlso, you should put gettext_noop() calls into the .status = \"Checking ...\"\nassignments and arrange to call gettext() where they are used, to maintain\nthe translatability.\n\n\n\n",
"msg_date": "Wed, 13 Sep 2023 16:12:02 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 13 Sep 2023, at 16:12, Peter Eisentraut <peter@eisentraut.org> wrote:\n\n> The alignment of this output looks a bit funny:\n> \n> ...\n> Checking for prepared transactions ok\n> Checking for contrib/isn with bigint-passing mismatch ok\n> Checking for data type usage checking all databases\n> ok\n> Checking for presence of required libraries ok\n> Checking database user is the install user ok\n> ...\n\nI was using the progress reporting to indicate that it hadn't stalled for slow\nsystems, but it's not probably not all that important really. Removed such\nthat \"ok\" aligns.\n\n> Also, you should put gettext_noop() calls into the .status = \"Checking ...\"\n> assignments and arrange to call gettext() where they are used, to maintain\n> the translatability.\n\nAh, yes of course. Fixed.\n\n--\nDaniel Gustafsson",
"msg_date": "Thu, 14 Sep 2023 10:48:45 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "Attached is a v10 rebase of this patch which had undergone significant bitrot\ndue to recent changes in the pg_upgrade check phase. This brings in the\nchanges into the proposed structure without changes to queries, with no\nadditional changes to the proposed functionality.\n\nTesting with a completely empty v11 cluster fresh from initdb as the old\ncluster shows a significant speedup (averaged over multiple runs, adjusted for\noutliers):\n\npatched: 53.59ms (52.78ms, 52.49ms, 55.49ms)\nmaster : 125.87ms (125.23 ms, 125.67ms, 126.67ms)\n\nUsing a similarly empty cluster from master as the old cluster shows a smaller\nspeedup, which is expected since many checks only run for older versions:\n\npatched: 33.36ms (32.82ms, 33.78ms, 33.47ms)\nmaster : 44.87ms (44.73ms, 44.90ms 44.99ms)\n\nThe latter case is still pretty interesting IMO since it can speed up testing\nwhere every millisecond gained matters.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 27 Oct 2023 15:20:09 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "On Fri, 27 Oct 2023 at 18:50, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> Attached is a v10 rebase of this patch which had undergone significant bitrot\n> due to recent changes in the pg_upgrade check phase. This brings in the\n> changes into the proposed structure without changes to queries, with no\n> additional changes to the proposed functionality.\n>\n> Testing with a completely empty v11 cluster fresh from initdb as the old\n> cluster shows a significant speedup (averaged over multiple runs, adjusted for\n> outliers):\n>\n> patched: 53.59ms (52.78ms, 52.49ms, 55.49ms)\n> master : 125.87ms (125.23 ms, 125.67ms, 126.67ms)\n>\n> Using a similarly empty cluster from master as the old cluster shows a smaller\n> speedup, which is expected since many checks only run for older versions:\n>\n> patched: 33.36ms (32.82ms, 33.78ms, 33.47ms)\n> master : 44.87ms (44.73ms, 44.90ms 44.99ms)\n>\n> The latter case is still pretty interesting IMO since it can speed up testing\n> where every millisecond gained matters.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\n55627ba2d334ce98e1f5916354c46472d414bda6 ===\n=== applying patch\n./v10-0001-pg_upgrade-run-all-data-type-checks-per-connecti.patch\npatching file src/bin/pg_upgrade/check.c\nHunk #2 FAILED at 24.\n...\n1 out of 7 hunks FAILED -- saving rejects to file src/bin/pg_upgrade/check.c.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_4200.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 27 Jan 2024 09:10:58 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "On Sat, 27 Jan 2024 at 09:10, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, 27 Oct 2023 at 18:50, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > Attached is a v10 rebase of this patch which had undergone significant bitrot\n> > due to recent changes in the pg_upgrade check phase. This brings in the\n> > changes into the proposed structure without changes to queries, with no\n> > additional changes to the proposed functionality.\n> >\n> > Testing with a completely empty v11 cluster fresh from initdb as the old\n> > cluster shows a significant speedup (averaged over multiple runs, adjusted for\n> > outliers):\n> >\n> > patched: 53.59ms (52.78ms, 52.49ms, 55.49ms)\n> > master : 125.87ms (125.23 ms, 125.67ms, 126.67ms)\n> >\n> > Using a similarly empty cluster from master as the old cluster shows a smaller\n> > speedup, which is expected since many checks only run for older versions:\n> >\n> > patched: 33.36ms (32.82ms, 33.78ms, 33.47ms)\n> > master : 44.87ms (44.73ms, 44.90ms 44.99ms)\n> >\n> > The latter case is still pretty interesting IMO since it can speed up testing\n> > where every millisecond gained matters.\n>\n> CFBot shows that the patch does not apply anymore as in [1]:\n> === Applying patches on top of PostgreSQL commit ID\n> 55627ba2d334ce98e1f5916354c46472d414bda6 ===\n> === applying patch\n> ./v10-0001-pg_upgrade-run-all-data-type-checks-per-connecti.patch\n> patching file src/bin/pg_upgrade/check.c\n> Hunk #2 FAILED at 24.\n> ...\n> 1 out of 7 hunks FAILED -- saving rejects to file src/bin/pg_upgrade/check.c.rej\n>\n> Please post an updated version for the same.\n\nWith no update to the thread and the patch still not applying I'm\nmarking this as returned with feedback. Please feel free to resubmit\nto the next CF when there is a new version of the patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 2 Feb 2024 00:18:25 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "On Fri, Feb 02, 2024 at 12:18:25AM +0530, vignesh C wrote:\n> With no update to the thread and the patch still not applying I'm\n> marking this as returned with feedback. Please feel free to resubmit\n> to the next CF when there is a new version of the patch.\n\nIMHO this patch is worth trying to get into v17. I'd be happy to take it\nforward if Daniel does not intend to work on it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 6 Feb 2024 10:32:13 -0600",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 6 Feb 2024, at 17:32, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n> On Fri, Feb 02, 2024 at 12:18:25AM +0530, vignesh C wrote:\n>> With no update to the thread and the patch still not applying I'm\n>> marking this as returned with feedback. Please feel free to resubmit\n>> to the next CF when there is a new version of the patch.\n> \n> IMHO this patch is worth trying to get into v17. I'd be happy to take it\n> forward if Daniel does not intend to work on it.\n\nI actually had the same thought yesterday and spent some time polishing and\nrebasing it. I'll post an updated rebase shortly with the hopes of getting it\ncommitted this week.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 6 Feb 2024 17:47:56 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "On Tue, Feb 06, 2024 at 05:47:56PM +0100, Daniel Gustafsson wrote:\n>> On 6 Feb 2024, at 17:32, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> IMHO this patch is worth trying to get into v17. I'd be happy to take it\n>> forward if Daniel does not intend to work on it.\n> \n> I actually had the same thought yesterday and spent some time polishing and\n> rebasing it. I'll post an updated rebase shortly with the hopes of getting it\n> committed this week.\n\nOh, awesome. Thanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 6 Feb 2024 10:55:57 -0600",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 6 Feb 2024, at 17:47, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 6 Feb 2024, at 17:32, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> \n>> On Fri, Feb 02, 2024 at 12:18:25AM +0530, vignesh C wrote:\n>>> With no update to the thread and the patch still not applying I'm\n>>> marking this as returned with feedback. Please feel free to resubmit\n>>> to the next CF when there is a new version of the patch.\n>> \n>> IMHO this patch is worth trying to get into v17. I'd be happy to take it\n>> forward if Daniel does not intend to work on it.\n> \n> I actually had the same thought yesterday and spent some time polishing and\n> rebasing it. I'll post an updated rebase shortly with the hopes of getting it\n> committed this week.\n\nAttached is a v11 rebased over HEAD with some very minor tweaks. Unless there\nare objections I plan to go ahead with this version this week.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 7 Feb 2024 14:25:35 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "On 07.02.24 14:25, Daniel Gustafsson wrote:\n>> On 6 Feb 2024, at 17:47, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>\n>>> On 6 Feb 2024, at 17:32, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>>>\n>>> On Fri, Feb 02, 2024 at 12:18:25AM +0530, vignesh C wrote:\n>>>> With no update to the thread and the patch still not applying I'm\n>>>> marking this as returned with feedback. Please feel free to resubmit\n>>>> to the next CF when there is a new version of the patch.\n>>>\n>>> IMHO this patch is worth trying to get into v17. I'd be happy to take it\n>>> forward if Daniel does not intend to work on it.\n>>\n>> I actually had the same thought yesterday and spent some time polishing and\n>> rebasing it. I'll post an updated rebase shortly with the hopes of getting it\n>> committed this week.\n> \n> Attached is a v11 rebased over HEAD with some very minor tweaks. Unless there\n> are objections I plan to go ahead with this version this week.\n\nA few more quick comments:\n\nI think the .report_text assignments also need a gettext_noop(), like \nthe .status assignments.\n\nThe type DataTypesUsageChecks is only used in check.c, so doesn't need \nto be in pg_upgrade.h.\n\n\nIdea for further improvement: Might be nice if the \nDataTypesUsageVersionCheck struct also included the applicable version \ninformation, so the additional checks in version.c would no longer be \nnecessary.\n\n\n\n",
"msg_date": "Thu, 8 Feb 2024 11:55:19 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 8 Feb 2024, at 11:55, Peter Eisentraut <peter@eisentraut.org> wrote:\n\n> A few more quick comments:\n\nThanks for reviewing!\n\n> I think the .report_text assignments also need a gettext_noop(), like the .status assignments.\n\nDone in the attached.\n\n> The type DataTypesUsageChecks is only used in check.c, so doesn't need to be in pg_upgrade.h.\n\nFixed.\n\n> Idea for further improvement: Might be nice if the DataTypesUsageVersionCheck struct also included the applicable version information, so the additional checks in version.c would no longer be necessary.\n\nI tried various variants of this when writing it, but since the checks aren't\njust checking version but also include catalog version checks it became messy.\nOne option could perhaps be to include a version number for <= comparison, and\nif set to zero a function pointer to a version check function must be provided?\nThat would handle the simple cases in a single place without messy logic, and\nleave the more convoluted checks with a special case function.\n\n--\nDaniel Gustafsson",
"msg_date": "Thu, 8 Feb 2024 15:16:07 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 8 Feb 2024, at 15:16, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> One option could perhaps be to include a version number for <= comparison, and\n> if set to zero a function pointer to a version check function must be provided?\n> That would handle the simple cases in a single place without messy logic, and\n> leave the more convoluted checks with a special case function.\n\nThe attached is a draft version of this approach, each check can define to run\nfor all versions, set a threshold version for which it runs or define a\ncallback which implements a more complicated check.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 9 Feb 2024 00:04:56 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 9 Feb 2024, at 00:04, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 8 Feb 2024, at 15:16, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> One option could perhaps be to include a version number for <= comparison, and\n>> if set to zero a function pointer to a version check function must be provided?\n>> That would handle the simple cases in a single place without messy logic, and\n>> leave the more convoluted checks with a special case function.\n> \n> The attached is a draft version of this approach, each check can define to run\n> for all versions, set a threshold version for which it runs or define a\n> callback which implements a more complicated check.\n\nAnd again pgindented and with documentation on the struct members to make it\neasy to add new checks. A repetitive part of the report text was also moved to\na single place.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 9 Feb 2024 10:33:37 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "Attached is a fresh rebase with only minor cosmetic touch-ups which I would\nlike to go ahead with during this CF.\n\nPeter: does this address the comments you had on translation and code\nduplication?\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 18 Mar 2024 13:11:41 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "On 18.03.24 13:11, Daniel Gustafsson wrote:\n> Attached is a fresh rebase with only minor cosmetic touch-ups which I would\n> like to go ahead with during this CF.\n> \n> Peter: does this address the comments you had on translation and code\n> duplication?\n\nYes, this looks good.\n\n\n\n",
"msg_date": "Tue, 19 Mar 2024 08:07:46 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
},
{
"msg_contents": "> On 19 Mar 2024, at 08:07, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n> On 18.03.24 13:11, Daniel Gustafsson wrote:\n>> Attached is a fresh rebase with only minor cosmetic touch-ups which I would\n>> like to go ahead with during this CF.\n>> Peter: does this address the comments you had on translation and code\n>> duplication?\n> \n> Yes, this looks good.\n\nThanks for review! I took another look at this and pushed it.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 19 Mar 2024 14:38:51 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Reducing connection overhead in pg_upgrade compat check phase"
}
] |
[
{
"msg_contents": "Andres recently reminded me of some loose ends in archive modules [0], so\nI'm starting a dedicated thread to address his feedback.\n\nThe first one is the requirement that archive module authors create their\nown exception handlers if they want to make use of ERROR.  Ideally, there\nwould be a handler in pgarch.c so that authors wouldn't need to deal with\nthis.  I do see some previous discussion about this [1] in which I expressed\nconcerns about memory management.  Looking at this again, I may have been\noverthinking it.  IIRC I was thinking about creating a memory context that\nwould be switched into for only the archiving callback (and reset\nafterwards), but that might not be necessary.  Instead, we could rely on\nmodule authors to handle this.  One example is basic_archive, which\nmaintains its own memory context.  Alternatively, authors could simply\npfree() anything that was allocated.\n\nFurthermore, by moving the exception handling to pgarch.c, module authors\ncan begin using PG_TRY, etc. in their archiving callbacks, which simplifies\nthings a bit.  I've attached a work-in-progress patch for this change.\n\nOn Fri, Feb 17, 2023 at 11:41:32AM -0800, Andres Freund wrote:\n> On 2023-02-16 13:58:10 -0800, Nathan Bossart wrote:\n>> On Thu, Feb 16, 2023 at 01:17:54PM -0800, Andres Freund wrote:\n>> > I'm quite baffled by:\n>> > \t\t/* Close any files left open by copy_file() or compare_files() */\n>> > \t\tAtEOSubXact_Files(false, InvalidSubTransactionId, InvalidSubTransactionId);\n>> > \n>> > in basic_archive_file(). It seems *really* off to call AtEOSubXact_Files()\n>> > completely outside the context of a transaction environment. And it only does\n>> > the thing you want because you pass parameters that aren't actually valid in\n>> > the normal use in AtEOSubXact_Files(). I really don't understand how that's\n>> > supposed to be ok.\n>> \n>> Hm. 
Should copy_file() and compare_files() have PG_FINALLY blocks that\n>> attempt to close the files instead? What would you recommend?\n> \n> I don't fully now, it's not entirely clear to me what the goals here were. I\n> think you'd likely need to do a bit of infrastructure work to do this\n> sanely. So far we just didn't have the need to handle files being released in\n> a way like you want to do there.\n> \n> I suspect a good direction would be to use resource owners. Add a separate set\n> of functions that release files on resource owner release. Most of the\n> infrastructure is there already, for temporary files\n> (c.f. OpenTemporaryFile()).\n> \n> Then that resource owner could be reset in case of error.\n> \n> \n> I'm not even sure that erroring out is a reasonable way to implement\n> copy_file(), compare_files(), particularly because you want to return via a\n> return code from basic_archive_files().\n\nTo initialize this thread, I'll provide a bit more background.\nbasic_archive makes use of copy_file(), and it introduces a function called\ncompare_files() that is used to check whether two files have the same\ncontent. These functions make use of OpenTransientFile() and\nCloseTransientFile(). In basic_archive's sigsetjmp() block, there's a call\nto AtEOSubXact_Files() to make sure we close any files that are open when\nthere is an ERROR. IIRC I was following the example set by other processes\nthat make use of the AtEOXact* functions in their sigsetjmp() blocks.\nLooking again, I think AtEOXact_Files() would also work for basic_archive's\nuse-case. That would at least avoid the hack of using\nInvalidSubTransactionId for the second and third arguments.\n\n From the feedback quoted above, it sounds like improving this further will\nrequire a bit of infrastructure work. 
I haven't looked too deeply into\nthis yet.\n\n[0] https://postgr.es/m/20230216192956.mhi6uiakchkolpki%40awork3.anarazel.de\n[1] https://postgr.es/m/20220202224433.GA1036711%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 17 Feb 2023 13:56:24 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "archive modules loose ends"
},
{
"msg_contents": "There seems to be no interest in this patch, so I plan to withdraw it from\nthe commitfest system by the end of the month unless such interest\nmaterializes.\n\nOn Fri, Feb 17, 2023 at 01:56:24PM -0800, Nathan Bossart wrote:\n> The first one is the requirement that archive module authors create their\n> own exception handlers if they want to make use of ERROR.  Ideally, there\n> would be a handler in pgarch.c so that authors wouldn't need to deal with\n> this.  I do see some previous discussion about this [1] in which I expressed\n> concerns about memory management.  Looking at this again, I may have been\n> overthinking it.  IIRC I was thinking about creating a memory context that\n> would be switched into for only the archiving callback (and reset\n> afterwards), but that might not be necessary.  Instead, we could rely on\n> module authors to handle this.  One example is basic_archive, which\n> maintains its own memory context.  Alternatively, authors could simply\n> pfree() anything that was allocated.\n> \n> Furthermore, by moving the exception handling to pgarch.c, module authors\n> can begin using PG_TRY, etc. in their archiving callbacks, which simplifies\n> things a bit.  I've attached a work-in-progress patch for this change.\n\nI took another look at this, and I think I remembered what I was worried\nabout with memory management.  One example is the built-in shell archiving.\nPresently, whenever there is an ERROR during archiving via shell, it gets\nbumped up to FATAL because the archiver operates at the bottom of the\nexception stack.  Consequently, there's no need to worry about managing\nmemory contexts to ensure that palloc'd memory is cleared up after an\nerror.  With the attached patch, we no longer call the archiving callback\nwhile we're at the bottom of the exception stack, so ERRORs no longer get\nbumped up to FATALs, and any palloc'd memory won't be freed.\n\nI see two main options for dealing with this. 
One option is to simply have\nshell_archive (and any other archive modules out there) maintain its own\nmemory context like basic_archive does. This ends up requiring a whole lot\nof duplicate code between the two built-in modules, though. Another option\nis to have the archiver manage a memory context that it resets after every\ninvocation of the archiving callback, ERROR or not. This has the advantage\nof avoiding code duplication and simplifying things for the built-in\nmodules, but any external modules that rely on palloc'd state being\nlong-lived would need to be adjusted to manage their own long-lived\ncontext. (This would need to be appropriately documented.) However, I'm\nnot aware of any archive modules that would be impacted by this.\n\nThe attached patch is an attempt at the latter option. As I noted above,\nthis probably deserves some discussion in the archive modules\ndocumentation, but I don't intend to spend too much more time on this patch\nright now given it is likely going to be withdrawn.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 13 Nov 2023 16:42:31 -0600",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: archive modules loose ends"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-13 16:42:31 -0600, Nathan Bossart wrote:\n> There seems to be no interest in this patch, so I plan to withdraw it from\n> the commitfest system by the end of the month unless such interest\n> materializes.\n\nI think it might just have arrived too shortly before the feature freeze to be\nworth looking at at the time, and then it didn't really re-raise attention\nuntil now. I'm so far behind on keeping up with the list that I rarely end up\nlooking far back for things I'd like to have answered... Sorry.\n\nI think it's somewhat important to fix this - having a dedicated \"recover from\nerror\" implementation in a bunch of extension modules seems quite likely to\ncause problems down the line, when another type of resource needs to be dealt\nwith after errors. I think many non-toy implementations would e.g. need to\nrelease lwlocks in case of errors (e.g. because they use a shared hashtable to\nqueue jobs for workers or such).\n\n\n> On Fri, Feb 17, 2023 at 01:56:24PM -0800, Nathan Bossart wrote:\n> > The first one is the requirement that archive module authors create their\n> > own exception handlers if they want to make use of ERROR. Ideally, there\n> > would be a handler in pgarch.c so that authors wouldn't need to deal with\n> > this. I do see some previous dicussion about this [1] in which I expressed\n> > concerns about memory management. Looking at this again, I may have been\n> > overthinking it. IIRC I was thinking about creating a memory context that\n> > would be switched into for only the archiving callback (and reset\n> > afterwards), but that might not be necessary. Instead, we could rely on\n> > module authors to handle this. One example is basic_archive, which\n> > maintains its own memory context. Alternatively, authors could simply\n> > pfree() anything that was allocated.\n> >\n> > Furthermore, by moving the exception handling to pgarch.c, module authors\n> > can begin using PG_TRY, etc. 
in their archiving callbacks, which simplifies\n> > things a bit.  I've attached a work-in-progress patch for this change.\n>\n> I took another look at this, and I think I remembered what I was worried\n> about with memory management.  One example is the built-in shell archiving.\n> Presently, whenever there is an ERROR during archiving via shell, it gets\n> bumped up to FATAL because the archiver operates at the bottom of the\n> exception stack.  Consequently, there's no need to worry about managing\n> memory contexts to ensure that palloc'd memory is cleared up after an\n> error.  With the attached patch, we no longer call the archiving callback\n> while we're at the bottom of the exception stack, so ERRORs no longer get\n> bumped up to FATALs, and any palloc'd memory won't be freed.\n>\n> I see two main options for dealing with this.  One option is to simply have\n> shell_archive (and any other archive modules out there) maintain its own\n> memory context like basic_archive does.  This ends up requiring a whole lot\n> of duplicate code between the two built-in modules, though.  Another option\n> is to have the archiver manage a memory context that it resets after every\n> invocation of the archiving callback, ERROR or not.\n\nI think passing in a short-lived memory context is a lot nicer to deal with.\n\n\n> This has the advantage of avoiding code duplication and simplifying things\n> for the built-in modules, but any external modules that rely on palloc'd\n> state being long-lived would need to be adjusted to manage their own\n> long-lived context.  (This would need to be appropriately documented.)\n\nAlternatively we could provide a longer-lived memory context in\nArchiveModuleState, set up by the generic infrastructure. 
That context would\nobviously still need to be explicitly utilized by a module, but no duplicated\nsetup code would be required.\n\n\n\n\n> /*\n> * check_archive_directory\n> *\n> @@ -172,67 +147,19 @@ basic_archive_configured(ArchiveModuleState *state)\n> static bool\n> basic_archive_file(ArchiveModuleState *state, const char *file, const char *path)\n> {\n> ...\n> +\tPG_TRY();\n> +\t{\n> +\t\t/* Archive the file! */\n> +\t\tbasic_archive_file_internal(file, path);\n> +\t}\n> +\tPG_CATCH();\n> \t{\n> -\t\t/* Since not using PG_TRY, must reset error stack by hand */\n> -\t\terror_context_stack = NULL;\n> -\n> -\t\t/* Prevent interrupts while cleaning up */\n> -\t\tHOLD_INTERRUPTS();\n> -\n> -\t\t/* Report the error and clear ErrorContext for next time */\n> -\t\tEmitErrorReport();\n> -\t\tFlushErrorState();\n> -\n> \t\t/* Close any files left open by copy_file() or compare_files() */\n> -\t\tAtEOSubXact_Files(false, InvalidSubTransactionId, InvalidSubTransactionId);\n> -\n> -\t\t/* Reset our memory context and switch back to the original one */\n> -\t\tMemoryContextSwitchTo(oldcontext);\n> -\t\tMemoryContextReset(basic_archive_context);\n> -\n> -\t\t/* Remove our exception handler */\n> -\t\tPG_exception_stack = NULL;\n> +\t\tAtEOXact_Files(false);\n>\n> -\t\t/* Now we can allow interrupts again */\n> -\t\tRESUME_INTERRUPTS();\n> -\n> -\t\t/* Report failure so that the archiver retries this file */\n> -\t\treturn false;\n> +\t\tPG_RE_THROW();\n> \t}\n\nI think we should just have the AtEOXact_Files() in pgarch.c, then no\nPG_TRY/CATCH is needed here. 
At the moment I think just about every possible\nuse of an archive modules would require using files, so there doesn't seem\nmuch of a reason to not handle it in pgarch.c.\n\nI'd probably reset a few other subsystems at the same time (there's probably\nmore):\n- disable_all_timeouts()\n- LWLockReleaseAll()\n- ConditionVariableCancelSleep()\n- pgstat_report_wait_end()\n- ReleaseAuxProcessResources()\n\n\n> @@ -511,7 +519,58 @@ pgarch_archiveXlog(char *xlog)\n> \tsnprintf(activitymsg, sizeof(activitymsg), \"archiving %s\", xlog);\n> \tset_ps_display(activitymsg);\n>\n> -\tret = ArchiveCallbacks->archive_file_cb(archive_module_state, xlog, pathname);\n> +\toldcontext = MemoryContextSwitchTo(archive_context);\n> +\n> +\t/*\n> +\t * Since the archiver operates at the bottom of the exception stack,\n> +\t * ERRORs turn into FATALs and cause the archiver process to restart.\n> +\t * However, using ereport(ERROR, ...) when there are problems is easy to\n> +\t * code and maintain. Therefore, we create our own exception handler to\n> +\t * catch ERRORs and return false instead of restarting the archiver\n> +\t * whenever there is a failure.\n> +\t */\n> +\tif (sigsetjmp(local_sigjmp_buf, 1) != 0)\n> +\t{\n> +\t\t/* Since not using PG_TRY, must reset error stack by hand */\n> +\t\terror_context_stack = NULL;\n> +\n> +\t\t/* Prevent interrupts while cleaning up */\n> +\t\tHOLD_INTERRUPTS();\n> +\n> +\t\t/* Report the error and clear ErrorContext for next time */\n> +\t\tEmitErrorReport();\n> +\t\tMemoryContextSwitchTo(oldcontext);\n> +\t\tFlushErrorState();\n> +\n> +\t\t/* Flush any leaked data */\n> +\t\tMemoryContextReset(archive_context);\n> +\n> +\t\t/* Remove our exception handler */\n> +\t\tPG_exception_stack = NULL;\n> +\n> +\t\t/* Now we can allow interrupts again */\n> +\t\tRESUME_INTERRUPTS();\n> +\n> +\t\t/* Report failure so that the archiver retries this file */\n> +\t\tret = false;\n> +\t}\n> +\telse\n> +\t{\n> +\t\t/* Enable our exception handler */\n> 
+\t\tPG_exception_stack = &local_sigjmp_buf;\n> +\n> +\t\t/* Archive the file! */\n> +\t\tret = ArchiveCallbacks->archive_file_cb(archive_module_state,\n> +\t\t\t\t\t\t\t\t\t\t\t\txlog, pathname);\n> +\n> +\t\t/* Remove our exception handler */\n> +\t\tPG_exception_stack = NULL;\n> +\n> +\t\t/* Reset our memory context and switch back to the original one */\n> +\t\tMemoryContextSwitchTo(oldcontext);\n> +\t\tMemoryContextReset(archive_context);\n> +\t}\n\nIt could be worth setting up an errcontext providing the module and file\nthat's being processed. I personally find that at least as important as\nsetting up a ps string detailing the log file... But I guess that could be a\nseparate patch.\n\n\nIt'd be nice to add a comment explaining why pgarch_archiveXlog() is the right\nplace to handle errors.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Nov 2023 15:35:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: archive modules loose ends"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 03:35:28PM -0800, Andres Freund wrote:\n> On 2023-11-13 16:42:31 -0600, Nathan Bossart wrote:\n>> There seems to be no interest in this patch, so I plan to withdraw it from\n>> the commitfest system by the end of the month unless such interest\n>> materializes.\n> \n> I think it might just have arrived too shortly before the feature freeze to be\n> worth looking at at the time, and then it didn't really re-raise attention\n> until now. I'm so far behind on keeping up with the list that I rarely end up\n> looking far back for things I'd like to have answered... Sorry.\n\nNo worries. I appreciate the review.\n\n>> I see two main options for dealing with this. One option is to simply have\n>> shell_archive (and any other archive modules out there) maintain its own\n>> memory context like basic_archive does. This ends up requiring a whole lot\n>> of duplicate code between the two built-in modules, though. Another option\n>> is to have the archiver manage a memory context that it resets after every\n>> invocation of the archiving callback, ERROR or not.\n> \n> I think passing in a short-lived memory context is a lot nicer to deal with.\n\nCool.\n\n>> This has the advantage of avoiding code duplication and simplifying things\n>> for the built-in modules, but any external modules that rely on palloc'd\n>> state being long-lived would need to be adjusted to manage their own\n>> long-lived context. (This would need to be appropriately documented.)\n> \n> Alternatively we could provide a longer-lived memory context in\n> ArchiveModuleState, set up by the genric infrastructure. That context would\n> obviously still need to be explicitly utilized by a module, but no duplicated\n> setup code would be required.\n\nSure. Right now, I'm not sure there's too much need for that. 
A module\ncould just throw stuff in TopMemoryContext, and you probably wouldn't have\nany leaks because the archiver just restarts on any ERROR or\narchive_library change. But that's probably not a pattern we want to\nencourage long-term. I'll jot this down for a follow-up patch idea.\n\n> I think we should just have the AtEOXact_Files() in pgarch.c, then no\n> PG_TRY/CATCH is needed here. At the moment I think just about every possible\n> use of an archive modules would require using files, so there doesn't seem\n> much of a reason to not handle it in pgarch.c.\n\nWFM\n\n> I'd probably reset a few other subsystems at the same time (there's probably\n> more):\n> - disable_all_timeouts()\n> - LWLockReleaseAll()\n> - ConditionVariableCancelSleep()\n> - pgstat_report_wait_end()\n> - ReleaseAuxProcessResources()\n\nI looked around a bit and thought AtEOXact_HashTables() belonged here as\nwell. I'll probably give this one another pass to see if there's anything\nelse obvious.\n\n> It could be worth setting up an errcontext providing the module and file\n> that's being processed. I personally find that at least as important as\n> setting up a ps string detailing the log file... But I guess that could be a\n> separate patch.\n\nIndeed. Right now we rely on the module to emit sufficiently-detailed\nlogs, but it'd be nice if they got that for free.\n\n> It'd be nice to add a comment explaining why pgarch_archiveXlog() is the right\n> place to handle errors.\n\nWill do.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 20 Nov 2023 22:30:44 -0600",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: archive modules loose ends"
},
{
"msg_contents": "Here is a new version of the patch with feedback addressed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 28 Nov 2023 11:18:32 -0600",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: archive modules loose ends"
},
{
"msg_contents": "\r\n\r\n> On Nov 29, 2023, at 01:18, Nathan Bossart <nathandbossart@gmail.com> wrote:\r\n> \r\n> External Email\r\n> \r\n> Here is a new version of the patch with feedback addressed.\r\n> \r\n> --\r\n> Nathan Bossart\r\n> Amazon Web Services: https://aws.amazon.com\r\n\r\nHi Nathan,\r\n\r\nThe patch looks good to me. With the context explained in the thread, the patch is easy to understand.\r\nThe patch serves as a refactoring which pulls up common memory management and error handling concerns into the pgarch.c. With the patch, individual archive callbacks can focus on copying the files and leave the boilerplate code to pgarch.c. \r\n\r\nThe patch applies cleanly to HEAD. “make check-world” also runs cleanly with no error.\r\n\r\n\r\nRegards,\r\nYong",
"msg_date": "Mon, 15 Jan 2024 12:21:44 +0000",
"msg_from": "\"Li, Yong\" <yoli@ebay.com>",
"msg_from_op": false,
"msg_subject": "Re: archive modules loose ends"
},
{
"msg_contents": "On Mon, Jan 15, 2024 at 12:21:44PM +0000, Li, Yong wrote:\n> The patch looks good to me.  With the context explained in the thread,\n> the patch is easy to understand.\n> The patch serves as a refactoring which pulls up common memory management\n> and error handling concerns into the pgarch.c.  With the patch,\n> individual archive callbacks can focus on copying the files and leave the\n> boilerplate code to pgarch.c.\n> \n> The patch applies cleanly to HEAD.  “make check-world” also runs cleanly\n> with no error.\n\nThanks for reviewing.  I've marked this as ready-for-committer, and I'm\nhoping to commit it in the near future.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 15 Jan 2024 08:50:25 -0600",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: archive modules loose ends"
},
{
"msg_contents": "On Mon, Jan 15, 2024 at 08:50:25AM -0600, Nathan Bossart wrote:\n> Thanks for reviewing. I've marked this as ready-for-committer, and I'm\n> hoping to commit it in the near future.\n\nThis one probably ought to go into v17, but I wanted to do one last call\nfor feedback prior to committing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 26 Mar 2024 14:14:14 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: archive modules loose ends"
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 02:14:14PM -0500, Nathan Bossart wrote:\n> On Mon, Jan 15, 2024 at 08:50:25AM -0600, Nathan Bossart wrote:\n>> Thanks for reviewing. I've marked this as ready-for-committer, and I'm\n>> hoping to commit it in the near future.\n> \n> This one probably ought to go into v17, but I wanted to do one last call\n> for feedback prior to committing.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 2 Apr 2024 22:35:43 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: archive modules loose ends"
}
] |
[
{
"msg_contents": "Hi,\n\nI am looking for a way to define a global variable in CustomScan plugin\nthat is shared between different psql backends. Is it possible without\nusing shared memory? Does postgresql implement any function that\nfacilitates this?\n\nThank you,\nAmin",
"msg_date": "Fri, 17 Feb 2023 16:36:25 -0800",
"msg_from": "Amin <amin.fallahi@gmail.com>",
"msg_from_op": true,
"msg_subject": "Share variable between psql backends in CustomScan"
},
{
"msg_contents": "Hi\n\n\nOn Sat, 18 Feb 2023 at 1:37, Amin <amin.fallahi@gmail.com> wrote:\n\n> Hi,\n>\n> I am looking for a way to define a global variable in CustomScan plugin\n> that is shared between different psql backends. Is it possible without\n> using shared memory? Does postgresql implement any function that\n> facilitates this?\n>\n\nNo - there is nothing like this. You need to use shared memory.\n\nRegards\n\nPavel\n\n\n>\n> Thank you,\n> Amin\n>",
"msg_date": "Sat, 18 Feb 2023 05:25:59 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Share variable between psql backends in CustomScan"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile trying to add additional File* functions (FileZero, FileFallocate) I went\nback and forth about the argument order between \"amount\" and \"offset\".\n\nWe have:\n\nextern int FilePrefetch(File file, off_t offset, off_t amount, uint32 wait_event_info);\nextern int FileRead(File file, void *buffer, size_t amount, off_t offset, uint32 wait_event_info);\nextern int FileWrite(File file, const void *buffer, size_t amount, off_t offset, uint32 wait_event_info);\nextern int FileTruncate(File file, off_t offset, uint32 wait_event_info);\nextern void FileWriteback(File file, off_t offset, off_t nbytes, uint32 wait_event_info);\n\nand I want to add (for [1])\nextern int FileZero(File file, off_t amount, off_t offset, uint32 wait_event_info);\nextern int FileFallocate(File file, off_t amount, off_t offset, uint32 wait_event_info);\n\nThe differences originate in trying to mirror the underlying function's\nsignatures:\n\nint posix_fadvise(int fd, off_t offset, off_t len, int advice);\nssize_t pread(int fd, void buf[.count], size_t count, off_t offset);\nssize_t pwrite(int fd, const void buf[.count], size_t count, off_t offset);\nint ftruncate(int fd, off_t length);\nint posix_fallocate(int fd, off_t offset, off_t len);\nint sync_file_range(int fd, off64_t offset, off64_t nbytes, unsigned int flags);\n\n\nIt seems quite confusing to be this inconsistent about argument order and\nargument types in the File* functions. For one, the relation to the underlying\nposix functions isn't always obvious. For another, we're not actually\nmirroring the signatures all that well, our argument and return types don't\nactually match.\n\n\nIt'd be easy enough to decide on a set of types for the arguments, that'd be\nAPI (but not necessarily ABI compatible, but we don't care) compatible. But\nchanging the argument order would commonly lead to silent breakage, which\nobviously would be bad. 
Or maybe it's unlikely enough that there are external\ncallers?\n\nI don't know what to actually propose. I guess the least bad I can see is to\npick one type & argument order that we document to be the default, with a\ncaveat placed above the functions not following the argument order.\n\nOrder wise, I think we should choose amount, offset. For the return type we\nprobably should pick ssize_t? I don't know what we should standardize on for\n'amount', I'd probably be inclined to go for size_t.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20221029025420.eplyow6k7tgu6he3@awork3.anarazel.de\n\n\n",
"msg_date": "Fri, 17 Feb 2023 16:52:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "File* argument order, argument types"
},
{
"msg_contents": "On Sat, Feb 18, 2023 at 6:23 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> While trying to add additional File* functions (FileZero, FileFallocat) I went\n> back and forth about the argument order between \"amount\" and \"offset\".\n>\n> We have:\n>\n> extern int FilePrefetch(File file, off_t offset, off_t amount, uint32 wait_event_info);\n> extern int FileRead(File file, void *buffer, size_t amount, off_t offset, uint32 wait_event_info);\n> extern int FileWrite(File file, const void *buffer, size_t amount, off_t offset, uint32 wait_event_info);\n> extern int FileTruncate(File file, off_t offset, uint32 wait_event_info);\n> extern void FileWriteback(File file, off_t offset, off_t nbytes, uint32 wait_event_info);\n>\n> and I want to add (for [1])\n> extern int FileZero(File file, off_t amount, off_t offset, uint32 wait_event_info);\n> extern int FileFallocate(File file, off_t amount, off_t offset, uint32 wait_event_info);\n>\n> The differences originate in trying to mirror the underlying function's\n> signatures:\n>\n> int posix_fadvise(int fd, off_t offset, off_t len, int advice);\n> ssize_t pread(int fd, void buf[.count], size_t count, off_t offset);\n> ssize_t pwrite(int fd, const void buf[.count], size_t count, off_t offset);\n> int ftruncate(int fd, off_t length);\n> int posix_fallocate(int fd, off_t offset, off_t len);\n> int sync_file_range(int fd, off64_t offset, off64_t nbytes, unsigned int flags);\n>\n>\n> It seems quite confusing to be this inconsistent about argument order and\n> argument types in the File* functions. For one, the relation to the underlying\n> posix functions isn't always obvious. For another, we're not actually\n> mirroring the signatures all that well, our argument and return types don't\n> actually match.\n>\n>\n> It'd be easy enough to decide on a set of types for the arguments, that'd be\n> API (but not necessarily ABI compatible, but we don't care) compatible. 
But\n> changing the argument order would commonly lead to silent breakage, which\n> obviously would be bad. Or maybe it's unlikely enough that there are external\n> callers?\n\nI am sure there are extensions and forks which use these APIs and they\nwill be surprised to see this change OR will face silent breakage.\nDo you consider those as external callers?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 20 Feb 2023 17:02:55 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: File* argument order, argument types"
},
{
"msg_contents": "On 18.02.23 01:52, Andres Freund wrote:\n> I don't know what to actually propose. I guess the least bad I can see is to\n> pick one type & argument order that we document to be the default, with a\n> caveat placed above the functions not following the argument order.\n> \n> Order wise, I think we should choose amount, offset. For the return type we\n> probably should pick ssize_t? I don't know what we should standardize on for\n> 'amount', I'd probably be inclined to go for size_t.\n\nThis reminds me that most people talk about LIMIT X OFFSET Y, even \nthough what actually happens is that the offset is handled first and \nthen the limit is applied. This is also reflected in the standard SQL \nspelling OFFSET Y FETCH FIRST Z ROWS, in that order. So, just saying \nthat there is universal disagreement on the order of these things.\n\nI think the correct order is offset, then amount. And I think the OS C \nAPIs mostly agree with that, if you look at the newer ones. The \nexceptions are the likes of pread() and pwrite(); I think they just kept \nthe signature of read() and write() and added the additional offset \nargument at the end, which I think is a sensible compromise.\n\nFor the proposed FileFallocate() I would therefore also keep the order \nof posix_fallocate(), so it would be\n\n FileFallocate(File file, off_t offset, off_t len, ...)\n\n\n\n",
"msg_date": "Mon, 20 Feb 2023 14:48:01 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: File* argument order, argument types"
}
] |
[
{
"msg_contents": "Hi\n\nI am starting to think about the next generation of pspg (\nhttps://github.com/okbob/pspg).\n\nNow, the communication between psql and pspg is very simple. psql reads all\ndata, does formatting to pretty table format, and sends it through pipe to\npspg. pspg stores all data and tries to detect header lines that are used\nfor identification of column widths. The information about number of header\nrows and about width of columns are used for printing fixed or moveable\ndata.\n\nIt is working surprisingly well, but there are limits.\n\n1. In some cases it can be slow - you can try \\x and select * from pg_proc.\nThe formatted table can be full of spaces, and the formatting can be slow,\nand passing via pipe too. The difference is in 2 versus 20 seconds.\n\n2. It cannot to work when FETCH_COUNT is non zero\n\nPassing data in csv format to pager can be very significantly faster.\nProcessing csv data can be much more robust than processing tabular format\nthat depends on a lot of pset settings. Unfortunately, psql doesn't send in\ncsv all information. There is not any information about used data types,\nthere is no title. Unfortunately, there is not any info about wanted\nformatting settings from psql - so the user's comfort is less than could be.\n\nCan be nice (from my perspective) if pspg can read some metadata about the\nresult. The question is - how to do it?\n\nThere are three possibilities:\n\na) psql sends some control data through a pipe. Currently we use only text\nprotocol, but we can use some ascii control chars, so it is not a problem\nto detect start of header, and detect start of data, and possibly we can\ndetect end of data.\n\nb) psql can send data like now, but before the start of the pager can fill\nsome environment variables. 
A pager can read these variables - like\nPSQL_PAGER_SETTING, PSQL_PAGER_DATADESC, ...\n\nc) we can introduce a new custom format (can be named \"pspg\")- it can be\nbased on csv or tsv, where the first part is data description, following\ndata, and it can be ended by some special flag.\n\nWhat do you think about described possibilities?\n\nregards\n\nPavel",
"msg_date": "Sat, 18 Feb 2023 09:40:59 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
    "msg_subject": "questions about possible enhancing protocol of communication between psql and pager"
}
] |
[
{
"msg_contents": "Hi,\n\nI've been running a lot of valgrind tests on 32-bit arm recently, and\nfrom time to time I get a failure in handle_sig_alarm like this:\n\n ==13605== Use of uninitialised value of size 4\n ==13605== at 0x88DA98: handle_sig_alarm (timeout.c:457)\n ==13605== by 0xFFFFFFFF: ???\n ==13605== Uninitialised value was created by a heap allocation\n ==13605== at 0x8A0374: MemoryContextAllocExtended (mcxt.c:1149)\n ==13605== by 0x86A187: DynaHashAlloc (dynahash.c:292)\n ==13605== by 0x86CB07: element_alloc (dynahash.c:1715)\n ==13605== by 0x86A9E7: hash_create (dynahash.c:611)\n ==13605== by 0x8A1CE3: EnablePortalManager (portalmem.c:122)\n ==13605== by 0x8716CF: InitPostgres (postinit.c:806)\n ==13605== by 0x653F63: PostgresMain (postgres.c:4141)\n ==13605== by 0x5651CB: BackendRun (postmaster.c:4461)\n ==13605== by 0x564A43: BackendStartup (postmaster.c:4189)\n ==13605== by 0x560663: ServerLoop (postmaster.c:1779)\n ==13605== by 0x55FE27: PostmasterMain (postmaster.c:1463)\n ==13605== by 0x4107F3: main (main.c:200)\n ==13605==\n {\n <insert_a_suppression_name_here>\n Memcheck:Value4\n fun:handle_sig_alarm\n obj:*\n }\n\nor (somewhat weird)\n\n ==23734== Use of uninitialised value of size 4\n ==23734== at 0x88DDC8: handle_sig_alarm (timeout.c:457)\n ==23734== by 0xFFFFFFFF: ???\n ==23734== Uninitialised value was created by a stack allocation\n ==23734== at 0x64CE2C: EndCommand (dest.c:167)\n ==23734==\n {\n <insert_a_suppression_name_here>\n Memcheck:Value4\n fun:handle_sig_alarm\n obj:*\n }\n\nIt might be a valgrind issue and/or false positive, but I don't think\nI've seen such failures before, so I'm wondering if this might be due to\nsome recent changes?\n\nIt's pretty rare, as it depends on the timing of the signal being just\n\"right\" (I wonder if there's a way to increase the frequency).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 18 Feb 2023 13:56:38 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "occasional valgrind reports for handle_sig_alarm on 32-bit ARM"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-18 13:56:38 +0100, Tomas Vondra wrote:\n> or (somewhat weird)\n> \n> ==23734== Use of uninitialised value of size 4\n> ==23734== at 0x88DDC8: handle_sig_alarm (timeout.c:457)\n> ==23734== by 0xFFFFFFFF: ???\n> ==23734== Uninitialised value was created by a stack allocation\n> ==23734== at 0x64CE2C: EndCommand (dest.c:167)\n> ==23734==\n> {\n> <insert_a_suppression_name_here>\n> Memcheck:Value4\n> fun:handle_sig_alarm\n> obj:*\n> }\n\nI'd try using valgrind's --vgdb-error=1, and inspecting the state.\n\nI assume this is without specifying --read-var-info=yes? Might be worth\ntrying, sometimes the increased detail can be really helpful.\n\n\nIt's certainly interesting that the error happens in timeout.c:457 - currently\nthat's the end of the function. And dest.c:167 is the entry of EndCommand().\n\nPerhaps there's some confusion around the state of the stack? The fact that it\nlooks like the function epilogue of handle_sig_alarm() uses an uninitialized\nvariable created by the function prologue of EndCommand() does seem to suggest\nsomething like that.\n\nIt'd be interesting to see the exact instruction triggering the failure +\nsurroundings.\n\n\n> It might be a valgrind issue and/or false positive, but I don't think\n> I've seen such failures before, so I'm wondering if this might be due to\n> some recent changes?\n\nHave you run 32bit arm valgrind before? It'd not surprise me if there are some\n32bit arm issues in valgrind, libc, or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Feb 2023 13:12:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: occasional valgrind reports for handle_sig_alarm on 32-bit ARM"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen building with meson, TEMP_CONFIG is supported for TAP tests, but doesn't\ndo anything for regress/isolation.\n\nThe reason for that is that meson's (and ninja's) architecture is to separate\n\"build setup\" from the \"build/test/whatever\" stage, moving dynamism (and more\ncostly operations) to the \"setup\" phase.\n\nIn this case the implication is that the command line for the test isn't\nre-computed dynamically. But pg_regress doesn't look at TEMP_CONFIG, it just\nhas a --temp-config=... parameter, that src/Makefile.global.in dynamically\nadds if TEMP_CONFIG is set.\n\nIn contrast to that, TEMP_CONFIG support for tap tests is implemented in\nCluster.pm, and thus works transparently.\n\nMy inclination is to move TEMP_CONFIG support from the Makefile to\npg_regress.c. That way it's consistent across the build tools and isn't\nduplicated. pg_regress already looks at a bunch of temporary variables\n(e.g. PG_REGRESS_SOCK_DIR, PG_TEST_USE_UNIX_SOCKETS), so this isn't really\nbreaking new ground.\n\nIt can be implemented differently, e.g. by adding the parameter dynamically in\nthe wrapper around pg_regress, but I don't see an advantage in that.\n\nPatch attached.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 18 Feb 2023 12:26:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Handle TEMP_CONFIG for pg_regress style tests in pg_regress.c"
},
{
"msg_contents": "On 18.02.23 21:26, Andres Freund wrote:\n> When building with meson, TEMP_CONFIG is supported for TAP tests, but doesn't\n> do anything for regress/isolation.\n> \n> The reason for that is that meson's (and ninja's) architecture is to separate\n> \"build setup\" from the \"build/test/whatever\" stage, moving dynamism (and more\n> costly operations) to the \"setup\" phase.\n> \n> In this case the implication is that the command line for the test isn't\n> re-computed dynamically. But pg_regress doesn't look at TEMP_CONFIG, it just\n> has a --temp-config=... parameter, that src/Makefile.global.in dynamically\n> adds if TEMP_CONFIG is set.\n> \n> In contrast to that, TEMP_CONFIG support for tap tests is implemented in\n> Cluster.pm, and thus works transparently.\n> \n> My inclination is to move TEMP_CONFIG support from the Makefile to\n> pg_regress.c. That way it's consistent across the build tools and isn't\n> duplicated. pg_regress already looks at a bunch of temporary variables\n> (e.g. PG_REGRESS_SOCK_DIR, PG_TEST_USE_UNIX_SOCKETS), so this isn't really\n> breaking new ground.\n\nI'm having a hard time understanding what TEMP_CONFIG is for. It \nappears that the intention is to allow injecting arbitrary configuration \ninto the tests? In that case, I think your proposal makes sense. But I \ndon't see this documented, so who knows what it is actually used for.\n\n\n\n",
"msg_date": "Sun, 19 Feb 2023 08:25:57 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Handle TEMP_CONFIG for pg_regress style tests in pg_regress.c"
},
{
"msg_contents": "On 2023-02-19 Su 02:25, Peter Eisentraut wrote:\n> On 18.02.23 21:26, Andres Freund wrote:\n>> When building with meson, TEMP_CONFIG is supported for TAP tests, but \n>> doesn't\n>> do anything for regress/isolation.\n>>\n>> The reason for that is that meson's (and ninja's) architecture is to \n>> separate\n>> \"build setup\" from the \"build/test/whatever\" stage, moving dynamism \n>> (and more\n>> costly operations) to the \"setup\" phase.\n>>\n>> In this case the implication is that the command line for the test isn't\n>> re-computed dynamically. But pg_regress doesn't look at TEMP_CONFIG, \n>> it just\n>> has a --temp-config=... parameter, that src/Makefile.global.in \n>> dynamically\n>> adds if TEMP_CONFIG is set.\n>>\n>> In contrast to that, TEMP_CONFIG support for tap tests is implemented in\n>> Cluster.pm, and thus works transparently.\n>>\n>> My inclination is to move TEMP_CONFIG support from the Makefile to\n>> pg_regress.c. That way it's consistent across the build tools and isn't\n>> duplicated. pg_regress already looks at a bunch of temporary variables\n>> (e.g. PG_REGRESS_SOCK_DIR, PG_TEST_USE_UNIX_SOCKETS), so this isn't \n>> really\n>> breaking new ground.\n>\n> I'm having a hard time understanding what TEMP_CONFIG is for. It \n> appears that the intention is to allow injecting arbitrary \n> configuration into the tests? In that case, I think your proposal \n> makes sense. But I don't see this documented, so who knows what it is \n> actually used for.\n>\n>\n>\n\nIt started here quite a long time ago:\n\n\ncommit 0cb74d3cec\nAuthor: Andrew Dunstan <andrew@dunslane.net>\nDate: Sun Sep 9 20:40:54 2007 +0000\n\n Provide for a file specifying non-standard config options for temp \ninstall\n for pg_regress, via --temp-config option. 
Pick this up in the \nmake file\n via TEMP_CONFIG setting.\n\nIt's used by the buildfarm to add the extra config settings from its \nconfiguration file.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Sun, 19 Feb 2023 08:58:59 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Handle TEMP_CONFIG for pg_regress style tests in pg_regress.c"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-02-19 Su 02:25, Peter Eisentraut wrote:\n>> On 18.02.23 21:26, Andres Freund wrote:\n>>> My inclination is to move TEMP_CONFIG support from the Makefile to\n>>> pg_regress.c. That way it's consistent across the build tools and isn't\n>>> duplicated.\n\n>> I'm having a hard time understanding what TEMP_CONFIG is for.\n\n> It's used by the buildfarm to add the extra config settings from its \n> configuration file.\n\nI have also used it manually to inject configuration changes into\nTAP tests, for instance running them with debug_discard_caches = 1.\nIt's quite handy, but I agree the lack of documentation is bad.\n\nIt looks to me like pg_regress already does implement this; that\nis, the Makefiles convert TEMP_CONFIG into a --temp-config switch\nto pg_[isolation_]regress. So if we made pg_regress responsible\nfor examining the envvar directly, very little new code would be\nneeded. (Maybe net negative code if we remove the command line\nswitch, but I'm not sure if we should.) What we'd lose is the\nability to write\n\tmake TEMP_CONFIG=foo check\nbut I wouldn't miss that. Having a uniform rule that TEMP_CONFIG\nis an environment variable and nothing else seems good.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 19 Feb 2023 11:13:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Handle TEMP_CONFIG for pg_regress style tests in pg_regress.c"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-19 11:13:38 -0500, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 2023-02-19 Su 02:25, Peter Eisentraut wrote:\n> >> On 18.02.23 21:26, Andres Freund wrote:\n> >>> My inclination is to move TEMP_CONFIG support from the Makefile to\n> >>> pg_regress.c. That way it's consistent across the build tools and isn't\n> >>> duplicated.\n> \n> >> I'm having a hard time understanding what TEMP_CONFIG is for.\n> \n> > It's used by the buildfarm to add the extra config settings from its \n> > configuration file.\n> \n> I have also used it manually to inject configuration changes into\n> TAP tests, for instance running them with debug_discard_caches = 1.\n\nSimilar. Explicitly turning on fsync, changing the log level to debug etc.\n\n\n> It's quite handy, but I agree the lack of documentation is bad.\n\nWe have some minimal documentation for EXTRA_REGRESS_OPTS, but imo that's a\nbit of a different use case, as it adds actual commandline options for\npg_regress (and thus doesn't work for tap tests).\n\nSeems we'd need a section in regress.sgml documenting the various environment\nvariables?\n\n\n> It looks to me like pg_regress already does implement this; that\n> is, the Makefiles convert TEMP_CONFIG into a --temp-config switch\n> to pg_[isolation_]regress. 
So if we made pg_regress responsible\n> for examining the envvar directly, very little new code would be\n> needed.\n\nIt's very little, indeed - the patch upthread ends up with:\n 4 files changed, 11 insertions(+), 16 deletions(-)\n\n\n> (Maybe net negative code if we remove the command line\n> switch, but I'm not sure if we should.)\n\nI don't think we should - we use it for various regression tests, to specify a\nconfig file they should load (shared_preload_libraries, wal_level, etc).\n\nThe way I implemented it now is that TEMP_CONFIG is added earlier in the\nresulting config file, than the contents of the file explicitly specified on\nthe commandline.\n\n\n> What we'd lose is the ability to write make TEMP_CONFIG=foo check but I\n> wouldn't miss that. Having a uniform rule that TEMP_CONFIG is an\n> environment variable and nothing else seems good.\n\nIf we were concerned about it we could just add an export of TEMP_CONFIG to\nsrc/Makefile.global.in\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 19 Feb 2023 12:46:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Handle TEMP_CONFIG for pg_regress style tests in pg_regress.c"
}
] |
[
{
"msg_contents": "Hi hackers,\r\n\r\nAfter multiple calls to the function pg_logical_slot_get_binary_changes() in\r\nsingle client backend (the output plugin of the slot is pgoutput), I got the\r\nfollowing error:\r\n\r\nclient backend FATAL: out of relcache_callback_list slots\r\nclient backend CONTEXT: slot \"testslot\", output plugin \"pgoutput\", in the startup callback\r\nclient backend STATEMENT: SELECT data FROM pg_logical_slot_get_binary_changes('testslot', NULL, NULL, 'proto_version', '3', 'streaming', 'off', 'publication_names', 'pub');\r\n\r\nI tried to look into it and found that it's because every time the function\r\n(pg_logical_slot_get_binary_changes) is called, relcache callback and syscache\r\ncallbacks are registered when initializing pgoutput (see pgoutput_startup() and\r\ninit_rel_sync_cache()), but they are not unregistered when it shutdowns. So,\r\nafter multiple calls to the function, MAX_RELCACHE_CALLBACKS is exceeded. This\r\nis mentioned in the following comment.\r\n\r\n\t/*\r\n\t * We can get here if the plugin was used in SQL interface as the\r\n\t * RelSchemaSyncCache is destroyed when the decoding finishes, but there\r\n\t * is no way to unregister the relcache invalidation callback.\r\n\t */\r\n\tif (RelationSyncCache == NULL)\r\n\t\treturn;\r\n\r\nCould we fix it by adding two new function to unregister relcache callback and\r\nsyscache callback? I tried to do so in the attached patch.\r\n\r\nRegards,\r\nShi Yu",
"msg_date": "Sun, 19 Feb 2023 02:40:31 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
    "msg_subject": "\"out of relcache_callback_list slots\" after multiple calls to pg_logical_slot_get_binary_changes"
},
{
"msg_contents": "Good catch!\n\nAt Sun, 19 Feb 2023 02:40:31 +0000, \"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com> wrote in \n> init_rel_sync_cache()), but they are not unregistered when it shutdowns. So,\n> after multiple calls to the function, MAX_RELCACHE_CALLBACKS is exceeded. This\n> is mentioned in the following comment.\n> \n> \t/*\n> \t * We can get here if the plugin was used in SQL interface as the\n> \t * RelSchemaSyncCache is destroyed when the decoding finishes, but there\n> \t * is no way to unregister the relcache invalidation callback.\n> \t */\n> \tif (RelationSyncCache == NULL)\n> \t\treturn;\n> \n> Could we fix it by adding two new function to unregister relcache callback and\n> syscache callback? I tried to do so in the attached patch.\n\nI'm pretty sure that everytime an output plugin is initialized on a\nprocess, it installs the same set of syscache/relcache callbacks each\ntime. Do you think we could simply stop duplicate registration of\nthose callbacks by using a static boolean? It would be far simpler.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 20 Feb 2023 17:08:19 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
    "msg_subject": "Re: \"out of relcache_callback_list slots\" after multiple calls to pg_logical_slot_get_binary_changes"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> I'm pretty sure that everytime an output plugin is initialized on a\n> process, it installs the same set of syscache/relcache callbacks each\n> time. Do you think we could simply stop duplicate registration of\n> those callbacks by using a static boolean? It would be far simpler.\n\nYeah, I think that's the way it's done elsewhere. Removing and\nre-registering your callback seems expensive, and it also destroys\nany reasoning that anyone might have made about the order in which\ndifferent callbacks will get called. (Admittedly, that's probably not\nimportant for invalidation callbacks, but it does matter for e.g.\nprocess exit callbacks.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Feb 2023 10:30:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
    "msg_subject": "Re: \"out of relcache_callback_list slots\" after multiple calls to pg_logical_slot_get_binary_changes"
},
{
"msg_contents": "On Mon, Feb 20, 2023 11:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n> \r\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\r\n> > I'm pretty sure that everytime an output plugin is initialized on a\r\n> > process, it installs the same set of syscache/relcache callbacks each\r\n> > time. Do you think we could simply stop duplicate registration of\r\n> > those callbacks by using a static boolean? It would be far simpler.\r\n> \r\n> Yeah, I think that's the way it's done elsewhere. Removing and\r\n> re-registering your callback seems expensive, and it also destroys\r\n> any reasoning that anyone might have made about the order in which\r\n> different callbacks will get called. (Admittedly, that's probably not\r\n> important for invalidation callbacks, but it does matter for e.g.\r\n> process exit callbacks.)\r\n> \r\n\r\nThanks for your reply. I agree that's expensive. Attach a new patch which adds a\r\nstatic boolean to avoid duplicate registration.\r\n\r\nRegards,\r\nShi Yu",
"msg_date": "Tue, 21 Feb 2023 10:31:29 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
    "msg_subject": "RE: \"out of relcache_callback_list slots\" after multiple calls to pg_logical_slot_get_binary_changes"
},
{
"msg_contents": "At Tue, 21 Feb 2023 10:31:29 +0000, \"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com> wrote in \n> Thanks for your reply. I agree that's expensive. Attach a new patch which adds a\n> static boolean to avoid duplicate registration.\n\nThank you for the patch. It is exactly what I had in my mind. But now\nthat I've had a chance to mull it over, I came to think it might be\nbetter to register the callbacks at one place. I'm thinking we could\ncreate a new function called register_callbacks() or something and\nmove all the calls to CacheRegisterSyscacheCallback() into that. What\ndo you think about that refactoring?\n\nI guess you could say that that refactoring somewhat weakens the\nconnection or dependency between init_rel_sync_cache and\nrel_sync_cache_relation_cb, but anyway the callback works even if\nRelationSyncCache is not around.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 22 Feb 2023 10:03:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
    "msg_subject": "Re: \"out of relcache_callback_list slots\" after multiple calls to pg_logical_slot_get_binary_changes"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 12:03 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 21 Feb 2023 10:31:29 +0000, \"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com> wrote in\n> > Thanks for your reply. I agree that's expensive. Attach a new patch which adds a\n> > static boolean to avoid duplicate registration.\n>\n> Thank you for the patch. It is exactly what I had in my mind. But now\n> that I've had a chance to mull it over, I came to think it might be\n> better to register the callbacks at one place. I'm thinking we could\n> create a new function called register_callbacks() or something and\n> move all the calls to CacheRegisterSyscacheCallback() into that. What\n> do you think about that refactoring?\n>\n> I guess you could say that that refactoring somewhat weakens the\n> connection or dependency between init_rel_sync_cache and\n> rel_sync_cache_relation_cb, but anyway the callback works even if\n> RelationSyncCache is not around.\n>\n\nIf you are going to do that, then won't just copying the\nCacheRegisterSyscacheCallback(PUBLICATIONOID... into function\ninit_rel_sync_cache() be effectively the same as doing that?\n\nThen almost nothing else to do...e.g. no need for a new extra static\nboolean if static RelationSyncCache is acting as the one-time guard\nanyway.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 22 Feb 2023 12:29:59 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"out of relcache_callback_list slots\" after multiple calls to\n pg_logical_slot_get_binary_changes"
},
{
"msg_contents": "Thanks for the comment.\n\nAt Wed, 22 Feb 2023 12:29:59 +1100, Peter Smith <smithpb2250@gmail.com> wrote in \n> On Wed, Feb 22, 2023 at 12:03 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Tue, 21 Feb 2023 10:31:29 +0000, \"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com> wrote in\n> > > Thanks for your reply. I agree that's expensive. Attach a new patch which adds a\n> > > static boolean to avoid duplicate registration.\n> >\n> > Thank you for the patch. It is exactly what I had in my mind. But now\n> > that I've had a chance to mull it over, I came to think it might be\n> > better to register the callbacks at one place. I'm thinking we could\n> > create a new function called register_callbacks() or something and\n> > move all the calls to CacheRegisterSyscacheCallback() into that. What\n> > do you think about that refactoring?\n> >\n> > I guess you could say that that refactoring somewhat weakens the\n> > connection or dependency between init_rel_sync_cache and\n> > rel_sync_cache_relation_cb, but anyway the callback works even if\n> > RelationSyncCache is not around.\n> >\n> \n> If you are going to do that, then won't just copying the\n> CacheRegisterSyscacheCallback(PUBLICATIONOID... into function\n> init_rel_sync_cache() be effectively the same as doing that?\n\nI'm not sure if it has anything to do with the relation sync cache.\nOn the other hand, moving all the content of init_rel_sync_cache() up\nto pgoutput_startup() doesn't seem like a good idea.. Another option,\nas you see, was to separate callback registration code.\n\n> Then almost nothing else to do...e.g. no need for a new extra static\n> boolean if static RelationSyncCache is acting as the one-time guard\n> anyway.\n\nUnfortunately, RelationSyncCache doesn't work - it is set to NULL at\nplugin shutdown.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 22 Feb 2023 12:07:06 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"out of relcache_callback_list slots\" after multiple calls to\n pg_logical_slot_get_binary_changes"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 12:07:06PM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 22 Feb 2023 12:29:59 +1100, Peter Smith <smithpb2250@gmail.com> wrote in \n>> If you are going to do that, then won't just copying the\n>> CacheRegisterSyscacheCallback(PUBLICATIONOID... into function\n>> init_rel_sync_cache() be effectively the same as doing that?\n> \n> I'm not sure if it has anything to do with the relation sync cache.\n> On the other hand, moving all the content of init_rel_sync_cache() up\n> to pgoutput_startup() doesn't seem like a good idea.. Another option,\n> as you see, was to separate callback registration code.\n\nBoth are kept separate in the code, so keeping this separation makes\nsense to me.\n\n+ /* Register callbacks if we didn't do that. */\n+ if (!callback_registered)\n+ CacheRegisterSyscacheCallback(PUBLICATIONOID,\n+ publication_invalidation_cb,\n+ (Datum) 0);\n \n /* Initialize relation schema cache. */\n init_rel_sync_cache(CacheMemoryContext);\n+ callback_registered = true;\n[...]\n+ /* Register callbacks if we didn't do that. */\n+ if (!callback_registered)\n\nI am a bit confused by the use of one single flag called\ncallback_registered to track both the publication callback and the\nrelation callbacks. Wouldn't it be cleaner to use two flags? I don't\nthink that we'll have soon a second code path calling\ninit_rel_sync_cache(), but if we do then the callback load could again\nbe messed up.\n\n(FYI, we use this method of callback registration for everything\nthat's not a one-time code path, like hash tables for RI triggers,\nbase backup callbacks, etc.)\n--\nMichael",
"msg_date": "Wed, 22 Feb 2023 15:19:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \"out of relcache_callback_list slots\" after multiple calls to\n pg_logical_slot_get_binary_changes"
},
{
"msg_contents": "On Wed, Feb 22, 2023 2:20 PM Michael Paquier <michael@paquier.xyz> wrote:\r\n> \r\n> On Wed, Feb 22, 2023 at 12:07:06PM +0900, Kyotaro Horiguchi wrote:\r\n> > At Wed, 22 Feb 2023 12:29:59 +1100, Peter Smith <smithpb2250@gmail.com>\r\n> wrote in\r\n> >> If you are going to do that, then won't just copying the\r\n> >> CacheRegisterSyscacheCallback(PUBLICATIONOID... into function\r\n> >> init_rel_sync_cache() be effectively the same as doing that?\r\n> >\r\n> > I'm not sure if it has anything to do with the relation sync cache.\r\n> > On the other hand, moving all the content of init_rel_sync_cache() up\r\n> > to pgoutput_startup() doesn't seem like a good idea.. Another option,\r\n> > as you see, was to separate callback registration code.\r\n> \r\n> Both are kept separate in the code, so keeping this separation makes\r\n> sense to me.\r\n> \r\n> + /* Register callbacks if we didn't do that. */\r\n> + if (!callback_registered)\r\n> + CacheRegisterSyscacheCallback(PUBLICATIONOID,\r\n> + publication_invalidation_cb,\r\n> + (Datum) 0);\r\n> \r\n> /* Initialize relation schema cache. */\r\n> init_rel_sync_cache(CacheMemoryContext);\r\n> + callback_registered = true;\r\n> [...]\r\n> + /* Register callbacks if we didn't do that. */\r\n> + if (!callback_registered)\r\n> \r\n> I am a bit confused by the use of one single flag called\r\n> callback_registered to track both the publication callback and the\r\n> relation callbacks. Wouldn't it be cleaner to use two flags? I don't\r\n> think that we'll have soon a second code path calling\r\n> init_rel_sync_cache(), but if we do then the callback load could again\r\n> be messed up.\r\n> \r\n\r\nThanks for your reply. Using two flags makes sense to me.\r\nAttach the updated patch.\r\n\r\nRegards,\r\nShi Yu",
"msg_date": "Wed, 22 Feb 2023 10:21:51 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: \"out of relcache_callback_list slots\" after multiple calls to\n pg_logical_slot_get_binary_changes"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 10:21:51AM +0000, shiy.fnst@fujitsu.com wrote:\n> Thanks for your reply. Using two flags makes sense to me.\n> Attach the updated patch.\n\nFine by me as far as it goes. Any thoughts from others?\n--\nMichael",
"msg_date": "Thu, 23 Feb 2023 09:28:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \"out of relcache_callback_list slots\" after multiple calls to\n pg_logical_slot_get_binary_changes"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 11:28 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Feb 22, 2023 at 10:21:51AM +0000, shiy.fnst@fujitsu.com wrote:\n> > Thanks for your reply. Using two flags makes sense to me.\n> > Attach the updated patch.\n>\n> Fine by me as far as it goes. Any thoughts from others?\n> --\n\nShould the 'relation_callback_registered' variable name be plural?\n\nOtherwise, LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 23 Feb 2023 13:07:38 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"out of relcache_callback_list slots\" after multiple calls to\n pg_logical_slot_get_binary_changes"
},
{
"msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> Should the 'relation_callback_registered' variable name be plural?\n\nYeah, plural seems better to me too. I fixed that and did a little\ncomment-editing and pushed it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Feb 2023 15:42:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"out of relcache_callback_list slots\" after multiple calls to\n pg_logical_slot_get_binary_changes"
}
] |
[
{
"msg_contents": "On 2023-02-11, Andres Freund wrote 20230212004254.3lp22a7bpkcjo3y6@awork3.anarazel.de:\n> The windows test failure is a transient issue independent of the patch\n> (something went wrong with image permissions).\n\nThat's happening again since 3h ago.\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql\n\nI suggested in the past that cfbot should delay if (say) the last 5 or\n10 consecutive runs all failed (or maybe all failed on the same \"task\").\n\nMaybe that should only apply to re-tests but not to new patches. It\ncould inject 15min delays until the condition is resolved. Or it could\nrun retests on a longer interval like 96h instead of 24. And add a\nwarning or start beeping about the issue.\n\nThat would mitigate not only issues in the master branch but also issues\nwith CI infrastructure (cirrus/google/images).\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 19 Feb 2023 19:08:41 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "cfbot failures"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-19 19:08:41 -0600, Justin Pryzby wrote:\n> On 2023-02-11, Andres Freund wrote 20230212004254.3lp22a7bpkcjo3y6@awork3.anarazel.de:\n> > The windows test failure is a transient issue independent of the patch\n> > (something went wrong with image permissions).\n> \n> That's happening again since 3h ago.\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql\n\nFixed manually. This is some sort of gcp issue. Bilal tried to deploy a\nworkaround, but that didn't yet work.\n\n[21:39:06.006] 2023-02-19T21:39:06Z: ==> windows.googlecompute.windows-ci-vs-2019: Creating image...\n[21:44:08.025] 2023-02-19T21:44:08Z: ==> windows.googlecompute.windows-ci-vs-2019: Error waiting for image: time out while waiting for image to register\n...\n\n[21:44:10.990] gcloud compute images deprecate pg-ci-${CIRRUS_TASK_NAME}-${DATE} --state=DEPRECATED\n[21:44:33.834] ERROR: (gcloud.compute.images.deprecate) Could not fetch resource:\n[21:44:33.834] - Required 'compute.images.deprecate' permission for 'projects/cirrus-ci-community/global/images/pg-ci-windows-ci-vs-2019-2023-02-19t21-30-43'\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 19 Feb 2023 17:18:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cfbot failures"
}
] |
[
{
"msg_contents": "This patch adds support for the unit \"B\" to pg_size_pretty(). This \nmakes it consistent with the units support in GUC. (pg_size_pretty() \nonly supports \"bytes\", but GUC only supports \"B\". -- I opted against \nadding support for \"bytes\" to GUC.)",
"msg_date": "Mon, 20 Feb 2023 07:44:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Add support for unit \"B\" to pg_size_pretty()"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 07:44:15AM +0100, Peter Eisentraut wrote:\n> This patch adds support for the unit \"B\" to pg_size_pretty(). This makes it\n\nIt seems like what it actually does is to support \"B\" in pg_size_bytes()\n- is that what you meant ?\n\npg_size_pretty() already supports \"bytes\", so this doesn't actually make\nsizes any more pretty, or evidently change its output at all.\n\n> diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c\n> index dbd404101f..9ecd5428c3 100644\n> --- a/src/backend/utils/adt/dbsize.c\n> +++ b/src/backend/utils/adt/dbsize.c\n> @@ -49,6 +49,7 @@ struct size_pretty_unit\n> /* When adding units here also update the error message in pg_size_bytes */\n> static const struct size_pretty_unit size_pretty_units[] = {\n> \t{\"bytes\", 10 * 1024, false, 0},\n> +\t{\"B\", 10 * 1024, false, 0},\n\nThis adds a duplicate line (unitbits=0) where no other existing line\nuses duplicates. If that's intentional, I think it deserves a comment\nhighlighting that it's an /*alias*/, and about why that does the right\nthing, either here about or in the commit message.\n\n> \t{\"kB\", 20 * 1024 - 1, true, 10},\n> \t{\"MB\", 20 * 1024 - 1, true, 20},\n> \t{\"GB\", 20 * 1024 - 1, true, 30},\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 20 Feb 2023 08:34:52 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for unit \"B\" to pg_size_pretty()"
},
{
"msg_contents": "On 20.02.23 15:34, Justin Pryzby wrote:\n> On Mon, Feb 20, 2023 at 07:44:15AM +0100, Peter Eisentraut wrote:\n>> This patch adds support for the unit \"B\" to pg_size_pretty(). This makes it\n> \n> It seems like what it actually does is to support \"B\" in pg_size_bytes()\n> - is that what you meant ?\n\nyes\n\n> pg_size_pretty() already supports \"bytes\", so this doesn't actually make\n> sizes any more pretty, or evidently change its output at all.\n\nRight, this is for the input side.\n\n>> diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c\n>> index dbd404101f..9ecd5428c3 100644\n>> --- a/src/backend/utils/adt/dbsize.c\n>> +++ b/src/backend/utils/adt/dbsize.c\n>> @@ -49,6 +49,7 @@ struct size_pretty_unit\n>> /* When adding units here also update the error message in pg_size_bytes */\n>> static const struct size_pretty_unit size_pretty_units[] = {\n>> \t{\"bytes\", 10 * 1024, false, 0},\n>> +\t{\"B\", 10 * 1024, false, 0},\n> \n> This adds a duplicate line (unitbits=0) where no other existing line\n> uses duplicates. If that's intentional, I think it deserves a comment\n> highlighting that it's an /*alias*/, and about why that does the right\n> thing, either here about or in the commit message.\n\nI have added a comment about that.",
"msg_date": "Wed, 22 Feb 2023 00:47:04 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for unit \"B\" to pg_size_pretty()"
},
{
"msg_contents": "On Wed, 22 Feb 2023 at 12:47, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> >> diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c\n> >> index dbd404101f..9ecd5428c3 100644\n> >> --- a/src/backend/utils/adt/dbsize.c\n> >> +++ b/src/backend/utils/adt/dbsize.c\n> >> @@ -49,6 +49,7 @@ struct size_pretty_unit\n> >> /* When adding units here also update the error message in pg_size_bytes */\n> >> static const struct size_pretty_unit size_pretty_units[] = {\n> >> {\"bytes\", 10 * 1024, false, 0},\n> >> + {\"B\", 10 * 1024, false, 0},\n> >\n> > This adds a duplicate line (unitbits=0) where no other existing line\n> > uses duplicates. If that's intentional, I think it deserves a comment\n> > highlighting that it's an /*alias*/, and about why that does the right\n> > thing, either here about or in the commit message.\n>\n> I have added a comment about that.\n\nhmm. I didn't really code pg_size_pretty with aliases in mind. I don't\nthink you can do this. There's code in pg_size_pretty() and\npg_size_pretty_numeric() that'll not work correctly. We look ahead to\nthe next unit to check if there is one so we know we must use this\nunit if there are no other units to convert to.\n\nLet's assume someone in the future reads your comment about aliases\nand thinks we can just go and add an alias for any unit. 
Here we'll\nadd PiB for PB.\n\ndiff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c\nindex dbd404101f..8e22969a76 100644\n--- a/src/backend/utils/adt/dbsize.c\n+++ b/src/backend/utils/adt/dbsize.c\n@@ -54,6 +54,7 @@ static const struct size_pretty_unit size_pretty_units[] = {\n {\"GB\", 20 * 1024 - 1, true, 30},\n {\"TB\", 20 * 1024 - 1, true, 40},\n {\"PB\", 20 * 1024 - 1, true, 50},\n+ {\"PiB\", 20 * 1024 - 1, true, 50},\n {NULL, 0, false, 0}\n };\n\ntesting it, I see:\n\npostgres=# select pg_size_pretty(10000::numeric * 1024*1024*1024*1024*1024);\n pg_size_pretty\n----------------\n 10000 PB\n(1 row)\n\npostgres=# select pg_size_pretty(20000::numeric * 1024*1024*1024*1024*1024);\n pg_size_pretty\n----------------\n 20000 PiB\n(1 row)\n\nI think we'll likely get complaints about PB being used sometimes and\nPiB being used at other times.\n\nI think you'll need to find another way to make the aliases work.\nMaybe another array with the name and an int to reference the\ncorresponding index in size_pretty_units.\n\nDavid\n\n\n",
"msg_date": "Wed, 22 Feb 2023 15:39:15 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for unit \"B\" to pg_size_pretty()"
},
{
"msg_contents": "On 22.02.23 03:39, David Rowley wrote:\n> hmm. I didn't really code pg_size_pretty with aliases in mind. I don't\n> think you can do this. There's code in pg_size_pretty() and\n> pg_size_pretty_numeric() that'll not work correctly. We look ahead to\n> the next unit to check if there is one so we know we must use this\n> unit if there are no other units to convert to.\n\n> I think you'll need to find another way to make the aliases work.\n> Maybe another array with the name and an int to reference the\n> corresponding index in size_pretty_units.\n\nOk, here is a new patch with a separate table of aliases. (Might look \nlike overkill, but I think the \"PiB\" etc. example you had could actually \nbe a good use case for this as well.)",
"msg_date": "Mon, 27 Feb 2023 09:34:10 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for unit \"B\" to pg_size_pretty()"
},
{
"msg_contents": "On Mon, 27 Feb 2023 at 21:34, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 22.02.23 03:39, David Rowley wrote:\n> > I think you'll need to find another way to make the aliases work.\n> > Maybe another array with the name and an int to reference the\n> > corresponding index in size_pretty_units.\n>\n> Ok, here is a new patch with a separate table of aliases. (Might look\n> like overkill, but I think the \"PiB\" etc. example you had could actually\n> be a good use case for this as well.)\n\nI think I'd prefer to see the size_bytes_unit_alias struct have an\nindex into size_pretty_units[] array. i.e:\n\nstruct size_bytes_unit_alias\n{\n const char *alias; /* aliased unit name */\n const int unit_index; /* corresponding size_pretty_units element */\n};\n\nthen the pg_size_bytes code can be simplified to:\n\n/* If not found, look in the table of aliases */\nif (unit->name == NULL)\n{\n for (const struct size_bytes_unit_alias *a = size_bytes_aliases;\na->alias != NULL; a++)\n {\n if (pg_strcasecmp(strptr, a->alias) == 0)\n {\n unit = &size_pretty_units[a->unit_index];\n break;\n }\n }\n}\n\nwhich saves having to have the additional and slower nested loop code.\n\nApart from that, the patch looks fine.\n\nDavid\n\n\n",
"msg_date": "Fri, 3 Mar 2023 08:58:02 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for unit \"B\" to pg_size_pretty()"
},
{
"msg_contents": "On Thu, 2 Mar 2023 at 19:58, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I think I'd prefer to see the size_bytes_unit_alias struct have an\n> index into size_pretty_units[] array. i.e:\n>\n> struct size_bytes_unit_alias\n> {\n> const char *alias; /* aliased unit name */\n> const int unit_index; /* corresponding size_pretty_units element */\n> };\n>\n> then the pg_size_bytes code can be simplified to:\n>\n> /* If not found, look in the table of aliases */\n> if (unit->name == NULL)\n> {\n> for (const struct size_bytes_unit_alias *a = size_bytes_aliases;\n> a->alias != NULL; a++)\n> {\n> if (pg_strcasecmp(strptr, a->alias) == 0)\n> {\n> unit = &size_pretty_units[a->unit_index];\n> break;\n> }\n> }\n> }\n>\n> which saves having to have the additional and slower nested loop code.\n>\n\nHmm, I think it would be easier to just have a separate table for\npg_size_bytes(), rather than reusing pg_size_pretty()'s table. I.e.,\nsize_bytes_units[], which would only need name and multiplier columns\n(not round and limit). Done that way, it would be easier to add other\nunits later (e.g., non-base-2 units).\n\nAlso, it looks to me as though the doc change is for pg_size_pretty()\ninstead of pg_size_bytes().\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 2 Mar 2023 20:32:26 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for unit \"B\" to pg_size_pretty()"
},
{
"msg_contents": "On Fri, 3 Mar 2023 at 09:32, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> Hmm, I think it would be easier to just have a separate table for\n> pg_size_bytes(), rather than reusing pg_size_pretty()'s table. I.e.,\n> size_bytes_units[], which would only need name and multiplier columns\n> (not round and limit). Done that way, it would be easier to add other\n> units later (e.g., non-base-2 units).\n\nMaybe that's worthwhile if we were actually thinking of adding any\nnon-base 2 units in the future, but if we're not, perhaps it's better\njust to have the smaller alias array which for Peter's needs will just\nrequire 1 element + the NULL one instead of 6 + NULL.\n\nIn any case, I'm not really sure I see what the path forward would be\nto add something like base-10 units would be for pg_size_bytes(). If\nwe were to change MB to mean 10^6 rather than 2^20 I think many people\nwould get upset.\n\nDavid\n\n\n",
"msg_date": "Sat, 4 Mar 2023 00:23:21 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for unit \"B\" to pg_size_pretty()"
},
{
"msg_contents": "On Fri, 3 Mar 2023 at 11:23, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 3 Mar 2023 at 09:32, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> > Hmm, I think it would be easier to just have a separate table for\n> > pg_size_bytes(), rather than reusing pg_size_pretty()'s table.\n>\n> Maybe that's worthwhile if we were actually thinking of adding any\n> non-base 2 units in the future, but if we're not, perhaps it's better\n> just to have the smaller alias array which for Peter's needs will just\n> require 1 element + the NULL one instead of 6 + NULL.\n>\n\nMaybe. It's the tradeoff between having a smaller array and more code\n(2 loops) vs a larger array and less code (1 loop).\n\n> In any case, I'm not really sure I see what the path forward would be\n> to add something like base-10 units would be for pg_size_bytes(). If\n> we were to change MB to mean 10^6 rather than 2^20 I think many people\n> would get upset.\n>\n\nYeah, that's probably true. Given the way this and configuration\nparameters currently work, I think we're stuck with 1MB meaning 2^20\nbytes.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 3 Mar 2023 12:22:38 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for unit \"B\" to pg_size_pretty()"
},
{
"msg_contents": "On 02.03.23 20:58, David Rowley wrote:\n> On Mon, 27 Feb 2023 at 21:34, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 22.02.23 03:39, David Rowley wrote:\n>>> I think you'll need to find another way to make the aliases work.\n>>> Maybe another array with the name and an int to reference the\n>>> corresponding index in size_pretty_units.\n>>\n>> Ok, here is a new patch with a separate table of aliases. (Might look\n>> like overkill, but I think the \"PiB\" etc. example you had could actually\n>> be a good use case for this as well.)\n> \n> I think I'd prefer to see the size_bytes_unit_alias struct have an\n> index into size_pretty_units[] array. i.e:\n\nOk, done that way. (I had thought about that, but I was worried that \nthat would be too error-prone to maintain. But I suppose the tables \ndon't change that often, and test cases would easily catch mistakes.)\n\nI also updated the documentation a bit more.",
"msg_date": "Mon, 6 Mar 2023 09:13:41 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for unit \"B\" to pg_size_pretty()"
},
{
"msg_contents": "On Mon, 6 Mar 2023 at 21:13, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 02.03.23 20:58, David Rowley wrote:\n> > I think I'd prefer to see the size_bytes_unit_alias struct have an\n> > index into size_pretty_units[] array. i.e:\n>\n> Ok, done that way. (I had thought about that, but I was worried that\n> that would be too error-prone to maintain. But I suppose the tables\n> don't change that often, and test cases would easily catch mistakes.)\n\nPatch looks pretty good. I just see a small spelling mistake in:\n\n+/* Additional unit aliases acceted by pg_size_bytes */\n\n> I also updated the documentation a bit more.\n\nI see I must have forgotten to add PB to the docs when pg_size_pretty\nhad that unit added. I guess you added the \"etc\" to fix that? I'm\nwondering if that's the right choice. You modified the comment above\nsize_pretty_units[] to remind us to update the docs when adding units,\nbut the docs now say \"etc\", so do we need to? I'd likely have gone\nwith just adding \"PB\" to the docs, that way it's pretty clear that new\nunits need to be mentioned in the docs.\n\nDavid\n\n\n",
"msg_date": "Mon, 6 Mar 2023 21:27:36 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for unit \"B\" to pg_size_pretty()"
},
{
"msg_contents": "On 06.03.23 09:27, David Rowley wrote:\n> On Mon, 6 Mar 2023 at 21:13, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 02.03.23 20:58, David Rowley wrote:\n>>> I think I'd prefer to see the size_bytes_unit_alias struct have an\n>>> index into size_pretty_units[] array. i.e:\n>>\n>> Ok, done that way. (I had thought about that, but I was worried that\n>> that would be too error-prone to maintain. But I suppose the tables\n>> don't change that often, and test cases would easily catch mistakes.)\n> \n> Patch looks pretty good. I just see a small spelling mistake in:\n> \n> +/* Additional unit aliases acceted by pg_size_bytes */\n> \n>> I also updated the documentation a bit more.\n> \n> I see I must have forgotten to add PB to the docs when pg_size_pretty\n> had that unit added. I guess you added the \"etc\" to fix that? I'm\n> wondering if that's the right choice. You modified the comment above\n> size_pretty_units[] to remind us to update the docs when adding units,\n> but the docs now say \"etc\", so do we need to? I'd likely have gone\n> with just adding \"PB\" to the docs, that way it's pretty clear that new\n> units need to be mentioned in the docs.\n\nOk, I have fixed the original documentation to that effect and \nbackpatched it.\n\nThe remaining patch has been updated accordingly and committed also.\n\n\n\n",
"msg_date": "Tue, 7 Mar 2023 21:22:30 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for unit \"B\" to pg_size_pretty()"
},
{
"msg_contents": "On Wed, 8 Mar 2023 at 09:22, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> Ok, I have fixed the original documentation to that effect and\n> backpatched it.\n\nThanks for fixing that.\n\nDavid\n\n\n",
"msg_date": "Wed, 8 Mar 2023 10:25:34 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for unit \"B\" to pg_size_pretty()"
}
] |
[
{
"msg_contents": "Hi all,\n I have written an FDW, which is similar to the file_fdw. I need the\nsupport of WAL to perform logical and stream replication. I have knowledge\nabout custom WAL but do not have clarity about implementing WAL(writing,\nredo, desc, identify, etc..) and cases where WAL can be applied.\n\nkindly share some documents, and links regarding WAL implementation.\n\nReference of Custom WAL: -\nhttps://www.postgresql.org/docs/current/custom-rmgr.html\n\n--------\nKomal Habura\n\nHi all, I have written an FDW, which is similar to the file_fdw. I need the support of WAL to perform logical and stream replication. I have knowledge about custom WAL but do not have clarity about implementing WAL(writing, redo, desc, identify, etc..) and cases where WAL can be applied.kindly share some documents, and links regarding WAL implementation.Reference of Custom WAL: - https://www.postgresql.org/docs/current/custom-rmgr.html--------Komal Habura",
"msg_date": "Mon, 20 Feb 2023 15:23:46 +0530",
"msg_from": "Komal Habura <komalhabura2@gmail.com>",
"msg_from_op": true,
"msg_subject": "Seek for helper documents to implement WAL with an FDW"
},
{
"msg_contents": "On Tue, Feb 21, 2023 at 3:01 PM Komal Habura <komalhabura2@gmail.com> wrote:\n>\n> Hi all,\n> I have written an FDW, which is similar to the file_fdw. I need the support of WAL to perform logical and stream replication. I have knowledge about custom WAL but do not have clarity about implementing WAL(writing, redo, desc, identify, etc..) and cases where WAL can be applied.\n>\n> kindly share some documents, and links regarding WAL implementation.\n>\n> Reference of Custom WAL: - https://www.postgresql.org/docs/current/custom-rmgr.html\n\nYou can look at a sample extension called test_custom_rmgrs that\nimplements custom WAL rmgr -\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=src/test/modules/test_custom_rmgrs;h=2144037578b01e56cbc8bf80af4fbdaa94c07c17;hb=HEAD.\nBasically, custom WAL rmgrs allow one to write WAL records of their\nown choice and define what to do when the server is in recovery i.e.\nreplaying those WAL records or when the server is decoding (for\nlogical replication) those WAL records.\n\nComing to whether you need to write WAL at all in your FDW, it depends\non what the FDW does.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 21 Feb 2023 15:16:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Seek for helper documents to implement WAL with an FDW"
},
{
"msg_contents": "On 20.02.23 10:53, Komal Habura wrote:\n> I have written an FDW, which is similar to the file_fdw. I need \n> the support of WAL to perform logical and stream replication. I have \n> knowledge about custom WAL but do not have clarity about implementing \n> WAL(writing, redo, desc, identify, etc..) and cases where WAL can be \n> applied.\n\nA foreign-data wrapper manages *foreign* data, which almost by \ndefinition means that it does not participate in the transaction \nmanagement of the local PostgreSQL instance, including in the WAL. If \nyou want to build a custom storage format that does participate in the \nlocal transaction management, you should probably look at building \neither a table access method or a storage manager.\n\n\n",
"msg_date": "Wed, 22 Feb 2023 14:35:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Seek for helper documents to implement WAL with an FDW"
}
] |
[
{
"msg_contents": "I want to resolve the domain name asynchronously before connecting to the target;\r\none way is parse sql connect parameters string; like \r\nif (\"host=\" in conn_params) and (\"hostaddr=\" not in conn_params):\r\n get_domain_name_and_resolv_and_add_hostaddr_to_conn_params_string()\r\n\r\n\r\nAnother way is to add it to libq;\r\nI write a simple example:\r\n https://github.com/gamefunc/Aiolibpq_simple\r\nI added the code to libq the build for use:\r\n https://github.com/gamefunc/Aiolibpq_simple/blob/main/Modify_Libpq_Source_Code.py\r\nand use it in(awaitable<int> Aiolibpq::connect()):\r\n https://github.com/gamefunc/Aiolibpq_simple/blob/main/Aiolibpq_simple.cpp\r\n\r\n\r\n const char* host_name = nullptr;\r\n while((host_name = PQgetUnresolvHost(conn)) != nullptr){\r\n tcp::resolver resolver(loop);\r\n tcp::resolver::iterator ep_iter = \r\n co_await resolver.async_resolve(\r\n host_name, \"\", use_awaitable);\r\n tcp::endpoint ep = *ep_iter;\r\n PQSetUnresolvHost(\r\n conn, host_name,\r\n ep.address().to_string().data(),\r\n ep.address().to_string().size());\r\n }// while()\r\n\r\n\r\n\r\nOf course, there will be many situations in this code, \r\nfor example, when one of the domain names fails to resolve the ip, it will boom;",
"msg_date": "Mon, 20 Feb 2023 20:17:07 +0800",
"msg_from": "\"32686647\" <32686647@qq.com>",
"msg_from_op": true,
"msg_subject": "I wish libpq add get_unresolv_host() and set_unresolv_host_ip()\n function"
}
] |
[
{
"msg_contents": "I suddenly (?, IIRC a few days ago this worked fine) started getting the following error while trying to following while building from source on Apple M1\n\nThis is on REL_13_9 but happens to all other releases too.\n\nIn file included from pg_crc32c_armv8.c:17:\n/Library/Developer/CommandLineTools/usr/lib/clang/14.0.0/include/arm_acle.h:14:2: error: \"ACLE intrinsics support not enabled.\"\n#error \"ACLE intrinsics support not enabled.\"\n ^\npg_crc32c_armv8.c:35:9: error: implicit declaration of function '__crc32cb' is invalid in C99 [-Werror,-Wimplicit-function-declaration]\n crc = __crc32cb(crc, *p);\n\n\n\n./configure output below. \n\n\n\nchecking build system type... x86_64-apple-darwin22.3.0\nchecking host system type... x86_64-apple-darwin22.3.0\nchecking which template to use... darwin\nchecking whether NLS is wanted... no\nchecking for default port number... 5432\nchecking for block size... 8kB\nchecking for segment size... 1GB\nchecking for WAL block size... 8kB\nchecking for gcc... gcc\nchecking whether the C compiler works... yes\nchecking for C compiler default output file name... a.out\nchecking for suffix of executables... \nchecking whether we are cross compiling... no\nchecking for suffix of object files... o\nchecking whether we are using the GNU C compiler... yes\nchecking whether gcc accepts -g... yes\nchecking for gcc option to accept ISO C89... none needed\nchecking for gcc option to accept ISO C99... none needed\nchecking for g++... g++\nchecking whether we are using the GNU C++ compiler... yes\nchecking whether g++ accepts -g... yes\nchecking for gawk... gawk\nchecking whether gcc supports -Wdeclaration-after-statement, for CFLAGS... yes\nchecking whether gcc supports -Werror=vla, for CFLAGS... yes\nchecking whether gcc supports -Wendif-labels, for CFLAGS... yes\nchecking whether g++ supports -Wendif-labels, for CXXFLAGS... yes\nchecking whether gcc supports -Wmissing-format-attribute, for CFLAGS... 
yes\nchecking whether g++ supports -Wmissing-format-attribute, for CXXFLAGS... yes\nchecking whether gcc supports -Wimplicit-fallthrough=3, for CFLAGS... no\nchecking whether g++ supports -Wimplicit-fallthrough=3, for CXXFLAGS... no\nchecking whether gcc supports -Wformat-security, for CFLAGS... yes\nchecking whether g++ supports -Wformat-security, for CXXFLAGS... yes\nchecking whether gcc supports -fno-strict-aliasing, for CFLAGS... yes\nchecking whether g++ supports -fno-strict-aliasing, for CXXFLAGS... yes\nchecking whether gcc supports -fwrapv, for CFLAGS... yes\nchecking whether g++ supports -fwrapv, for CXXFLAGS... yes\nchecking whether gcc supports -fexcess-precision=standard, for CFLAGS... no\nchecking whether g++ supports -fexcess-precision=standard, for CXXFLAGS... no\nchecking whether gcc supports -funroll-loops, for CFLAGS_VECTOR... yes\nchecking whether gcc supports -ftree-vectorize, for CFLAGS_VECTOR... yes\nchecking whether gcc supports -Wunused-command-line-argument, for NOT_THE_CFLAGS... yes\nchecking whether gcc supports -Wcompound-token-split-by-macro, for NOT_THE_CFLAGS... yes\nchecking whether gcc supports -Wdeprecated-non-prototype, for NOT_THE_CFLAGS... no\nchecking whether gcc supports -Wformat-truncation, for NOT_THE_CFLAGS... no\nchecking whether gcc supports -Wstringop-truncation, for NOT_THE_CFLAGS... no\nchecking whether the C compiler still works... yes\nchecking how to run the C preprocessor... gcc -E\nchecking for pkg-config... /usr/local/bin/pkg-config\nchecking pkg-config is at least version 0.9.0... yes\nchecking allow thread-safe client libraries... yes\nchecking whether to build with ICU support... no\nchecking whether to build with Tcl... no\nchecking whether to build Perl modules... no\nchecking whether to build Python modules... no\nchecking whether to build with GSSAPI support... no\nchecking whether to build with PAM support... no\nchecking whether to build with BSD Authentication support... 
no\nchecking whether to build with LDAP support... no\nchecking whether to build with Bonjour support... no\nchecking whether to build with OpenSSL support... no\nchecking whether to build with SELinux support... no\nchecking whether to build with systemd support... no\nchecking whether to build with XML support... no\nchecking for ld used by GCC... /Library/Developer/CommandLineTools/usr/bin/ld\nchecking if the linker (/Library/Developer/CommandLineTools/usr/bin/ld) is GNU ld... no\nchecking for ranlib... ranlib\nchecking for strip... strip\nchecking whether it is possible to strip libraries... yes\nchecking for ar... ar\nchecking for a BSD-compatible install... /usr/local/opt/coreutils/libexec/gnubin/install -c\nchecking for tar... /usr/bin/tar\nchecking whether ln -s works... yes\nchecking for a thread-safe mkdir -p... /usr/local/opt/coreutils/libexec/gnubin/mkdir -p\nchecking for bison... /usr/local/bin/bison\nconfigure: using bison (GNU Bison) 3.8.2-dirty\nchecking for flex... /usr/bin/flex\nconfigure: using flex 2.6.4 Apple(flex-34)\nchecking for perl... /usr/local/bin/perl\nconfigure: using perl 5.36.0\nchecking for a sed that does not truncate output... /usr/local/opt/gnu-sed/libexec/gnubin/sed\nchecking for grep that handles long lines and -e... /usr/local/opt/grep/libexec/gnubin/grep\nchecking for egrep... /usr/local/opt/grep/libexec/gnubin/grep -E\nchecking for ANSI C header files... yes\nchecking for sys/types.h... yes\nchecking for sys/stat.h... yes\nchecking for stdlib.h... yes\nchecking for string.h... yes\nchecking for memory.h... yes\nchecking for strings.h... yes\nchecking for inttypes.h... yes\nchecking for stdint.h... yes\nchecking for unistd.h... yes\nchecking whether gcc is Clang... yes\nchecking whether Clang needs flag to prevent \"argument unused\" warning when linking with -pthread... no\nchecking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE\nchecking whether more special flags are required for pthreads... 
no\nchecking for PTHREAD_PRIO_INHERIT... yes\nchecking pthread.h usability... yes\nchecking pthread.h presence... yes\nchecking for pthread.h... yes\nchecking for strerror_r... yes\nchecking for getpwuid_r... yes\nchecking for gethostbyname_r... no\nchecking whether strerror_r returns int... yes\nchecking for main in -lm... yes\nchecking for library containing setproctitle... no\nchecking for library containing dlsym... none required\nchecking for library containing socket... none required\nchecking for library containing shl_load... no\nchecking for library containing getopt_long... none required\nchecking for library containing shm_open... none required\nchecking for library containing shm_unlink... none required\nchecking for library containing clock_gettime... none required\nchecking for library containing fdatasync... none required\nchecking for library containing sched_yield... none required\nchecking for library containing gethostbyname_r... no\nchecking for library containing shmget... none required\nchecking for library containing backtrace_symbols... none required\nchecking for library containing readline... -lreadline\nchecking for inflate in -lz... yes\nchecking for stdbool.h that conforms to C99... yes\nchecking for _Bool... yes\nchecking atomic.h usability... no\nchecking atomic.h presence... no\nchecking for atomic.h... no\nchecking copyfile.h usability... yes\nchecking copyfile.h presence... yes\nchecking for copyfile.h... yes\nchecking execinfo.h usability... yes\nchecking execinfo.h presence... yes\nchecking for execinfo.h... yes\nchecking getopt.h usability... yes\nchecking getopt.h presence... yes\nchecking for getopt.h... yes\nchecking ifaddrs.h usability... yes\nchecking ifaddrs.h presence... yes\nchecking for ifaddrs.h... yes\nchecking langinfo.h usability... yes\nchecking langinfo.h presence... yes\nchecking for langinfo.h... yes\nchecking mbarrier.h usability... no\nchecking mbarrier.h presence... no\nchecking for mbarrier.h... 
no\nchecking poll.h usability... yes\nchecking poll.h presence... yes\nchecking for poll.h... yes\nchecking sys/epoll.h usability... no\nchecking sys/epoll.h presence... no\nchecking for sys/epoll.h... no\nchecking sys/event.h usability... yes\nchecking sys/event.h presence... yes\nchecking for sys/event.h... yes\nchecking sys/ipc.h usability... yes\nchecking sys/ipc.h presence... yes\nchecking for sys/ipc.h... yes\nchecking sys/prctl.h usability... no\nchecking sys/prctl.h presence... no\nchecking for sys/prctl.h... no\nchecking sys/procctl.h usability... no\nchecking sys/procctl.h presence... no\nchecking for sys/procctl.h... no\nchecking sys/pstat.h usability... no\nchecking sys/pstat.h presence... no\nchecking for sys/pstat.h... no\nchecking sys/resource.h usability... yes\nchecking sys/resource.h presence... yes\nchecking for sys/resource.h... yes\nchecking sys/select.h usability... yes\nchecking sys/select.h presence... yes\nchecking for sys/select.h... yes\nchecking sys/sem.h usability... yes\nchecking sys/sem.h presence... yes\nchecking for sys/sem.h... yes\nchecking sys/shm.h usability... yes\nchecking sys/shm.h presence... yes\nchecking for sys/shm.h... yes\nchecking sys/sockio.h usability... yes\nchecking sys/sockio.h presence... yes\nchecking for sys/sockio.h... yes\nchecking sys/tas.h usability... no\nchecking sys/tas.h presence... no\nchecking for sys/tas.h... no\nchecking sys/un.h usability... yes\nchecking sys/un.h presence... yes\nchecking for sys/un.h... yes\nchecking termios.h usability... yes\nchecking termios.h presence... yes\nchecking for termios.h... yes\nchecking ucred.h usability... no\nchecking ucred.h presence... no\nchecking for ucred.h... no\nchecking wctype.h usability... yes\nchecking wctype.h presence... yes\nchecking for wctype.h... yes\nchecking for net/if.h... yes\nchecking for sys/ucred.h... yes\nchecking for netinet/tcp.h... yes\nchecking readline/readline.h usability... yes\nchecking readline/readline.h presence... 
yes\nchecking for readline/readline.h... yes\nchecking readline/history.h usability... yes\nchecking readline/history.h presence... yes\nchecking for readline/history.h... yes\nchecking zlib.h usability... yes\nchecking zlib.h presence... yes\nchecking for zlib.h... yes\nchecking whether byte ordering is bigendian... no\nchecking for inline... inline\nchecking for printf format archetype... printf\nchecking for __func__... yes\nchecking for _Static_assert... yes\nchecking for typeof... typeof\nchecking for __builtin_types_compatible_p... yes\nchecking for __builtin_constant_p... yes\nchecking for __builtin_unreachable... yes\nchecking for computed goto support... yes\nchecking for struct tm.tm_zone... yes\nchecking for union semun... yes\nchecking for struct sockaddr_un... yes\nchecking for struct sockaddr_storage... yes\nchecking for struct sockaddr_storage.ss_family... yes\nchecking for struct sockaddr_storage.__ss_family... no\nchecking for struct sockaddr_storage.ss_len... yes\nchecking for struct sockaddr_storage.__ss_len... no\nchecking for struct sockaddr.sa_len... yes\nchecking for struct addrinfo... yes\nchecking for locale_t... yes (in xlocale.h)\nchecking for C/C++ restrict keyword... __restrict\nchecking for struct cmsgcred... no\nchecking for struct option... yes\nchecking for z_streamp... yes\nchecking whether assembler supports x86_64 popcntq... no\nchecking for special C compiler options needed for large files... no\nchecking for _FILE_OFFSET_BITS value needed for large files... no\nchecking size of off_t... 8\nchecking size of bool... 1\nchecking for int timezone... yes\nchecking types of arguments for accept()... int, int, struct sockaddr *, socklen_t *\nchecking whether gettimeofday takes only one argument... no\nchecking for wcstombs_l declaration... yes (in xlocale.h)\nchecking for backtrace_symbols... yes\nchecking for clock_gettime... yes\nchecking for copyfile... yes\nchecking for fdatasync... yes\nchecking for getifaddrs... 
yes\nchecking for getpeerucred... no\nchecking for getrlimit... yes\nchecking for kqueue... yes\nchecking for mbstowcs_l... yes\nchecking for memset_s... yes\nchecking for poll... yes\nchecking for posix_fallocate... no\nchecking for ppoll... no\nchecking for pstat... no\nchecking for pthread_is_threaded_np... yes\nchecking for readlink... yes\nchecking for setproctitle... no\nchecking for setproctitle_fast... no\nchecking for setsid... yes\nchecking for shm_open... yes\nchecking for strchrnul... no\nchecking for strsignal... yes\nchecking for symlink... yes\nchecking for sync_file_range... no\nchecking for uselocale... yes\nchecking for wcstombs_l... yes\nchecking for __builtin_bswap16... yes\nchecking for __builtin_bswap32... yes\nchecking for __builtin_bswap64... yes\nchecking for __builtin_clz... yes\nchecking for __builtin_ctz... yes\nchecking for __builtin_popcount... yes\nchecking for __builtin_frame_address... yes\nchecking for _LARGEFILE_SOURCE value needed for large files... no\nchecking how gcc reports undeclared, standard C functions... error\nchecking for posix_fadvise... no\nchecking whether posix_fadvise is declared... no\nchecking whether fdatasync is declared... no\nchecking whether strlcat is declared... yes\nchecking whether strlcpy is declared... yes\nchecking whether strnlen is declared... yes\nchecking whether F_FULLFSYNC is declared... yes\nchecking whether RTLD_GLOBAL is declared... yes\nchecking whether RTLD_NOW is declared... yes\nchecking for struct sockaddr_in6... yes\nchecking for PS_STRINGS... no\nchecking for dlopen... yes\nchecking for explicit_bzero... no\nchecking for fls... yes\nchecking for getopt... yes\nchecking for getpeereid... yes\nchecking for getrusage... yes\nchecking for inet_aton... yes\nchecking for link... yes\nchecking for mkdtemp... yes\nchecking for pread... yes\nchecking for pwrite... yes\nchecking for random... yes\nchecking for srandom... yes\nchecking for strlcat... yes\nchecking for strlcpy... 
yes\nchecking for strnlen... yes\nchecking for strtof... yes\nchecking for setenv... yes\nchecking for unsetenv... yes\nchecking for getaddrinfo... yes\nchecking for getopt_long... yes\nchecking for syslog... yes\nchecking syslog.h usability... yes\nchecking syslog.h presence... yes\nchecking for syslog.h... yes\nchecking for opterr... yes\nchecking for optreset... yes\nchecking for strtoll... yes\nchecking for strtoull... yes\nchecking whether strtoll is declared... yes\nchecking whether strtoull is declared... yes\nchecking for rl_completion_append_character... yes\nchecking for rl_completion_suppress_quote... no\nchecking for rl_filename_quote_characters... no\nchecking for rl_filename_quoting_function... no\nchecking for rl_completion_matches... yes\nchecking for rl_filename_completion_function... yes\nchecking for rl_reset_screen_size... no\nchecking for append_history... no\nchecking for history_truncate_file... yes\nchecking test program... ok\nchecking whether long int is 64 bits... yes\nchecking for __builtin_mul_overflow... yes\nchecking size of void *... 8\nchecking size of size_t... 8\nchecking size of long... 8\nchecking alignment of short... 2\nchecking alignment of int... 4\nchecking alignment of long... 8\nchecking alignment of double... 8\nchecking for int8... no\nchecking for uint8... no\nchecking for int64... no\nchecking for uint64... no\nchecking for __int128... yes\nchecking for __int128 alignment bug... ok\nchecking alignment of PG_INT128_TYPE... 16\nchecking for builtin __sync char locking functions... yes\nchecking for builtin __sync int32 locking functions... yes\nchecking for builtin __sync int32 atomic operations... yes\nchecking for builtin __sync int64 atomic operations... yes\nchecking for builtin __atomic int32 atomic operations... yes\nchecking for builtin __atomic int64 atomic operations... yes\nchecking for __get_cpuid... no\nchecking for __cpuid... no\nchecking for _mm_crc32_u8 and _mm_crc32_u32 with CFLAGS=... 
no\nchecking for _mm_crc32_u8 and _mm_crc32_u32 with CFLAGS=-msse4.2... no\nchecking for __crc32cb, __crc32ch, __crc32cw, and __crc32cd with CFLAGS=... yes\nchecking which CRC-32C implementation to use... ARMv8 CRC instructions\nchecking which semaphore API to use... System V\nchecking for /dev/urandom... yes\nchecking which random number source to use... /dev/urandom\nchecking for fop... /usr/local/bin/fop\nchecking for dbtoepub... /opt/homebrew/bin/dbtoepub\nchecking thread safety of required library functions... yes\nchecking whether gcc supports -Wl,-dead_strip_dylibs... yes\nconfigure: using compiler=Apple clang version 14.0.0 (clang-1400.0.29.202)\nconfigure: using CFLAGS=-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -O2\nconfigure: using CPPFLAGS=-isysroot $(PG_SYSROOT) \nconfigure: using LDFLAGS=-isysroot $(PG_SYSROOT) -Wl,-dead_strip_dylibs\nconfigure: creating ./config.status\nconfig.status: creating GNUmakefile\nconfig.status: creating src/Makefile.global\nconfig.status: creating src/include/pg_config.h\nconfig.status: src/include/pg_config.h is unchanged\nconfig.status: creating src/include/pg_config_ext.h\nconfig.status: src/include/pg_config_ext.h is unchanged\nconfig.status: creating src/interfaces/ecpg/include/ecpg_config.h\nconfig.status: src/interfaces/ecpg/include/ecpg_config.h is unchanged\nconfig.status: linking src/backend/port/tas/dummy.s to src/backend/port/tas.s\nconfig.status: linking src/backend/port/sysv_sema.c to src/backend/port/pg_sema.c\nconfig.status: linking src/backend/port/sysv_shmem.c to src/backend/port/pg_shmem.c\nconfig.status: linking src/include/port/darwin.h to src/include/pg_config_os.h\nconfig.status: linking src/makefiles/Makefile.darwin to src/Makefile.port\n\n\n\n",
"msg_date": "Mon, 20 Feb 2023 17:25:23 +0200",
"msg_from": "Markur Sens <markursens@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_crc32c_armv8.c:35:9: error: implicit declaration of function\n '__crc32cb' is invalid in C99"
},
{
"msg_contents": "Markur Sens <markursens@gmail.com> writes:\n> I suddenly (?, IIRC a few days ago this worked fine) started getting the following error while trying to following while building from source on Apple M1\n> This is on REL_13_9 but happens to all other releases too.\n\n> In file included from pg_crc32c_armv8.c:17:\n> /Library/Developer/CommandLineTools/usr/lib/clang/14.0.0/include/arm_acle.h:14:2: error: \"ACLE intrinsics support not enabled.\"\n> #error \"ACLE intrinsics support not enabled.\"\n> ^\n> pg_crc32c_armv8.c:35:9: error: implicit declaration of function '__crc32cb' is invalid in C99 [-Werror,-Wimplicit-function-declaration]\n> crc = __crc32cb(crc, *p);\n\nHmph. Not seeing that here, on either my M1 laptop or sifaka's M1-mini\nhost (both running up-to-date Ventura). Nobody else has reported it\neither. What configure options are you using? Any non-default\nsoftware involved (e.g. from MacPorts or Homebrew)?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Feb 2023 10:47:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_crc32c_armv8.c:35:9: error: implicit declaration of function\n '__crc32cb' is invalid in C99"
},
{
"msg_contents": "> On 20 Feb 2023, at 5:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Markur Sens <markursens@gmail.com <mailto:markursens@gmail.com>> writes:\n>> I suddenly (?, IIRC a few days ago this worked fine) started getting the following error while trying to following while building from source on Apple M1\n>> This is on REL_13_9 but happens to all other releases too.\n> \n>> In file included from pg_crc32c_armv8.c:17:\n>> /Library/Developer/CommandLineTools/usr/lib/clang/14.0.0/include/arm_acle.h:14:2: error: \"ACLE intrinsics support not enabled.\"\n>> #error \"ACLE intrinsics support not enabled.\"\n>> ^\n>> pg_crc32c_armv8.c:35:9: error: implicit declaration of function '__crc32cb' is invalid in C99 [-Werror,-Wimplicit-function-declaration]\n>> crc = __crc32cb(crc, *p);\n> \n> Hmph. Not seeing that here, on either my M1 laptop or sifaka's M1-mini\n> host (both running up-to-date Ventura). Nobody else has reported it\n> either. What configure options are you using?\n\nThe output above is from plain ./configure\n\n> Any non-default\n> software involved (e.g. from MacPorts or Homebrew)?\n\nMost of the packages involved are installed through Homebrew , but I don’t see smoothing special\n\n> \n> \t\t\tregards, tom lane",
"msg_date": "Mon, 20 Feb 2023 18:05:40 +0200",
"msg_from": "Markur Sens <markursens@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_crc32c_armv8.c:35:9: error: implicit declaration of function\n '__crc32cb' is invalid in C99"
}
] |
[
{
"msg_contents": "Playing around with MERGE some more, I noticed that the command tag\nrow count is wrong if it does a cross-partition update:\n\nCREATE TABLE target (a int, b int) PARTITION BY LIST (b);\nCREATE TABLE target_p1 PARTITION OF target FOR VALUES IN (1);\nCREATE TABLE target_p2 PARTITION OF target FOR VALUES IN (2);\nINSERT INTO target VALUES (1,1);\n\nMERGE INTO target t USING (VALUES (1)) v(a) ON t.a = v.a\n WHEN MATCHED THEN UPDATE SET b = 2;\n\nwhich returns \"MERGE 2\" when only 1 row was updated, because\nExecUpdateAct() will update estate->es_processed for a cross-partition\nupdate (but not for a normal update), and then ExecMergeMatched() will\nupdate it again.\n\nI think the best fix is to have ExecMergeMatched() pass canSetTag =\nfalse to ExecUpdateAct(), so that ExecMergeMatched() takes\nresponsibility for updating estate->es_processed in all cases.\n\nRegards,\nDean",
"msg_date": "Mon, 20 Feb 2023 15:56:27 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Incorrect command tag row count for MERGE with a cross-partition\n update"
},
{
"msg_contents": "On 2023-Feb-20, Dean Rasheed wrote:\n\n> Playing around with MERGE some more, I noticed that the command tag\n> row count is wrong if it does a cross-partition update:\n> \n> CREATE TABLE target (a int, b int) PARTITION BY LIST (b);\n> CREATE TABLE target_p1 PARTITION OF target FOR VALUES IN (1);\n> CREATE TABLE target_p2 PARTITION OF target FOR VALUES IN (2);\n> INSERT INTO target VALUES (1,1);\n> \n> MERGE INTO target t USING (VALUES (1)) v(a) ON t.a = v.a\n> WHEN MATCHED THEN UPDATE SET b = 2;\n> \n> which returns \"MERGE 2\" when only 1 row was updated, because\n> ExecUpdateAct() will update estate->es_processed for a cross-partition\n> update (but not for a normal update), and then ExecMergeMatched() will\n> update it again.\n\nHah.\n\n> I think the best fix is to have ExecMergeMatched() pass canSetTag =\n> false to ExecUpdateAct(), so that ExecMergeMatched() takes\n> responsibility for updating estate->es_processed in all cases.\n\nSounds sensible.\n\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Las mujeres son como hondas: mientras más resistencia tienen,\n más lejos puedes llegar con ellas\" (Jonas Nightingale, Leap of Faith)\n\n\n",
"msg_date": "Tue, 21 Feb 2023 10:34:11 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect command tag row count for MERGE with a cross-partition\n update"
},
{
"msg_contents": "On Tue, 21 Feb 2023 at 09:34, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > I think the best fix is to have ExecMergeMatched() pass canSetTag =\n> > false to ExecUpdateAct(), so that ExecMergeMatched() takes\n> > responsibility for updating estate->es_processed in all cases.\n>\n> Sounds sensible.\n>\n\nI decided it was also probably worth having a regression test covering\nthis, since it would be quite easy to break if the code is ever\nrefactored.\n\nPushed and back-patched.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 22 Feb 2023 09:50:05 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect command tag row count for MERGE with a cross-partition\n update"
}
] |
[
{
"msg_contents": "As part of the MERGE RETURNING patch I noticed a suspicious Assert()\nin ExecInitPartitionInfo() that looked like it needed updating for\nMERGE.\n\nAfter more testing, I can confirm that this is indeed a pre-existing\nbug, that can be triggered using MERGE into a partitioned table that\nhas RLS enabled (and hence non-empty withCheckOptionLists to\ninitialise).\n\nSo I think we need something like the attached.\n\nRegards,\nDean",
"msg_date": "Mon, 20 Feb 2023 16:18:19 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Assert failure with MERGE into partitioned table with RLS"
},
{
"msg_contents": "On Mon, 20 Feb 2023 at 16:18, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> As part of the MERGE RETURNING patch I noticed a suspicious Assert()\n> in ExecInitPartitionInfo() that looked like it needed updating for\n> MERGE.\n>\n> After more testing, I can confirm that this is indeed a pre-existing\n> bug, that can be triggered using MERGE into a partitioned table that\n> has RLS enabled (and hence non-empty withCheckOptionLists to\n> initialise).\n>\n> So I think we need something like the attached.\n>\n\nPushed and back-patched.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 22 Feb 2023 11:01:51 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure with MERGE into partitioned table with RLS"
}
] |
[
{
"msg_contents": "Another one noticed from the MERGE RETURNING patch -- the switch\nstatement in SPI_result_code_string() is missing cases for\nSPI_OK_TD_REGISTER and SPI_OK_MERGE.\n\nThe SPI_OK_TD_REGISTER case goes back all the way, so I suppose it\nshould be back-patched to all supported branches, though evidently\nthis is not something anyone is likely to care about.\n\nThe SPI_OK_MERGE case is perhaps a little more visible (e.g., execute\nMERGE from PL/Perl using $rv = spi_exec_query() and then examine\n$rv->{status}). It's also missing from the docs for SPI_Execute().\nHaving tested that it now works as expected, I don't think there's\nmuch point in adding a regression test case for it though.\n\nRegards,\nDean",
"msg_date": "Mon, 20 Feb 2023 18:52:16 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Missing cases from SPI_result_code_string()"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> Another one noticed from the MERGE RETURNING patch -- the switch\n> statement in SPI_result_code_string() is missing cases for\n> SPI_OK_TD_REGISTER and SPI_OK_MERGE.\n\nUgh. Grepping around, it looks like pltcl_process_SPI_result\nis missing a case for SPI_OK_MERGE as well.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Feb 2023 14:39:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Missing cases from SPI_result_code_string()"
},
{
"msg_contents": "On Mon, 20 Feb 2023 at 19:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Ugh. Grepping around, it looks like pltcl_process_SPI_result\n> is missing a case for SPI_OK_MERGE as well.\n>\n\nYes, I was about to post a patch for that too. That's the last case\nthat I found, looking around.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 20 Feb 2023 19:49:23 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing cases from SPI_result_code_string()"
},
{
"msg_contents": "On Mon, 20 Feb 2023 at 19:49, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Mon, 20 Feb 2023 at 19:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Ugh. Grepping around, it looks like pltcl_process_SPI_result\n> > is missing a case for SPI_OK_MERGE as well.\n>\n> Yes, I was about to post a patch for that too. That's the last case\n> that I found, looking around.\n>\n\nOK, I've pushed fixes for those.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 22 Feb 2023 13:46:50 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing cases from SPI_result_code_string()"
}
] |
[
{
"msg_contents": "Another one noticed in the MERGE RETURNING patch -- this allows PL/Tcl\nto execute MERGE (i.e., don't fail when SPI returns SPI_OK_MERGE). I'm\nnot sure if anyone uses PL/Tcl anymore, but it's a trivial fix,\nprobably not worth a regression test case.\n\nRegards,\nDean",
"msg_date": "Mon, 20 Feb 2023 19:49:56 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Allow MERGE to be executed from PL/Tcl"
}
] |
[
{
"msg_contents": "Greetings,\n\nThe name canonicalization support for Kerberos is doing us more harm\nthan good in the regression tests, so I propose we disable it. Patch\nattached.\n\nThoughts?\n\nThanks,\n\nStephen",
"msg_date": "Mon, 20 Feb 2023 18:35:55 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Disable rdns for Kerberos tests"
},
{
"msg_contents": "On 21/02/2023 01:35, Stephen Frost wrote:\n> Greetings,\n> \n> The name canonicalization support for Kerberos is doing us more harm\n> than good in the regression tests, so I propose we disable it. Patch\n> attached.\n> \n> Thoughts?\n\nMakes sense. A brief comment in 001_auth.pl itself to mention why we \ndisable rdns would be nice.\n\n- Heikki\n\n\n\n",
"msg_date": "Wed, 22 Feb 2023 12:57:16 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Disable rdns for Kerberos tests"
},
{
"msg_contents": "Greetings,\n\n* Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n> On 21/02/2023 01:35, Stephen Frost wrote:\n> > The name canonicalization support for Kerberos is doing us more harm\n> > than good in the regression tests, so I propose we disable it. Patch\n> > attached.\n> > \n> > Thoughts?\n> \n> Makes sense. A brief comment in 001_auth.pl itself to mention why we disable\n> rdns would be nice.\n\nThanks for reviewing! Comments added and updated the commit message.\n\nUnless there's anything else, I'll push this early next week.\n\nThanks again!\n\nStephen",
"msg_date": "Fri, 24 Feb 2023 17:50:30 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Disable rdns for Kerberos tests"
},
{
"msg_contents": "On 25 February 2023 00:50:30 EET, Stephen Frost <sfrost@snowman.net> wrote:\n>Thanks for reviewing! Comments added and updated the commit message.\n>\n>Unless there's anything else, I'll push this early next week.\n\ns/capture portal/captive portal/. Other than that, looks good to me.\n\n- Heikki\n\n\n",
"msg_date": "Sat, 25 Feb 2023 12:36:26 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Disable rdns for Kerberos tests"
},
{
"msg_contents": "Greetings,\n\n* Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n> On 25 February 2023 00:50:30 EET, Stephen Frost <sfrost@snowman.net> wrote:\n> >Thanks for reviewing! Comments added and updated the commit message.\n> >\n> >Unless there's anything else, I'll push this early next week.\n> \n> s/capture portal/captive portal/. Other than that, looks good to me.\n\nPush, thanks again!\n\nStephen",
"msg_date": "Thu, 9 Mar 2023 10:35:02 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Disable rdns for Kerberos tests"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Push, thanks again!\n\nWhy'd you only change HEAD? Isn't the test equally fragile in the\nback branches?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Mar 2023 11:10:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Disable rdns for Kerberos tests"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Push, thanks again!\n> \n> Why'd you only change HEAD? Isn't the test equally fragile in the\n> back branches?\n\nWe hadn't had any complaints about it and so I wasn't sure if it was\nuseful to back-patch it. I'm happy to do so though.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 9 Mar 2023 14:48:09 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Disable rdns for Kerberos tests"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Push, thanks again!\n> \n> Why'd you only change HEAD? Isn't the test equally fragile in the\n> back branches?\n\nFollowing on from this after some additional cross-platform testing,\nturns out there's other options we should be disabling in these tests to\navoid depending on DNS for the test.\n\nAttached is another patch which, for me at least, seems to prevent the\ntests from causing any DNS requests to happen. This also means that the\ntests run in a reasonable time even in cases where DNS is entirely\nbroken (the resolver set in /etc/resolv.conf doesn't respond).\n\nBarring objections, my plan is to commit this change soon and to\nback-patch both patches to supported branches.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 5 Apr 2023 12:10:00 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Disable rdns for Kerberos tests"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Push, thanks again!\n> \n> Why'd you only change HEAD? Isn't the test equally fragile in the\n> back branches?\n\nBack-patched.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 7 Apr 2023 19:41:11 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Disable rdns for Kerberos tests"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > Stephen Frost <sfrost@snowman.net> writes:\n> > > Push, thanks again!\n> > \n> > Why'd you only change HEAD? Isn't the test equally fragile in the\n> > back branches?\n> \n> Following on from this after some additional cross-platform testing,\n> turns out there's other options we should be disabling in these tests to\n> avoid depending on DNS for the test.\n> \n> Attached is another patch which, for me at least, seems to prevent the\n> tests from causing any DNS requests to happen. This also means that the\n> tests run in a reasonable time even in cases where DNS is entirely\n> broken (the resolver set in /etc/resolv.conf doesn't respond).\n> \n> Barring objections, my plan is to commit this change soon and to\n> back-patch both patches to supported branches.\n\nDone.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 7 Apr 2023 19:41:48 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Disable rdns for Kerberos tests"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile playing with the SSL tests last week, I have noticed that we\ndon't have a way to regenerate the SSL files with meson for its TAP\nsuite. It seems to me that we had better transfer the rules currently\nstored in sslfiles.mk into something that meson can use?\n\nAnother approach may be to keep sslfiles.mk around, still allow meson\nto invoke it with a specific target? I am not exactly sure how this\nwould be handled, but it looks like a transfer would make sense in the\nlong-term if we target a removal of the dependency with make.\n\nThoughts are welcome.\n--\nMichael",
"msg_date": "Tue, 21 Feb 2023 08:54:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "meson and sslfiles.mk in src/test/ssl/"
},
{
"msg_contents": "On 21.02.23 00:54, Michael Paquier wrote:\n> While playing with the SSL tests last week, I have noticed that we\n> don't have a way to regenerate the SSL files with meson for its TAP\n> suite. It seems to me that we had better transfer the rules currently\n> stored in sslfiles.mk into something that meson can use?\n> \n> Another approach may be to keep sslfiles.mk around, still allow meson\n> to invoke it with a specific target? I am not exactly sure how this\n> would be handled, but it looks like a transfer would make sense in the\n> long-term if we target a removal of the dependency with make.\n\nI think the tradeoff here is, given how rarely those rules are used, is \nit worth maintaining duplicate implementations or complicated bridging?\n\nIt's clearly something to deal with eventually, but it's not high priority.\n\n\n\n",
"msg_date": "Wed, 22 Feb 2023 14:40:41 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: meson and sslfiles.mk in src/test/ssl/"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 5:40 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> I think the tradeoff here is, given how rarely those rules are used, is\n> it worth maintaining duplicate implementations or complicated bridging?\n>\n> It's clearly something to deal with eventually, but it's not high priority.\n\nYeah... in the same vein, I originally thought that I'd need to\nquickly add VPATH support to sslfiles.mk, but it seems like it just\nhasn't been a problem in practice and now I'm glad I didn't spend much\ntime on it.\n\nI'm happy to contribute cycles to a Meson port when you're ready for\nit. From a skim it seems like maybe in-source generation isn't a focus\nfor Meson [1]. They might encourage us to write custom Python for it\nanyway?\n\n--Jacob\n\n[1] https://github.com/mesonbuild/meson/issues/5434\n\n\n",
"msg_date": "Wed, 22 Feb 2023 09:42:54 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: meson and sslfiles.mk in src/test/ssl/"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-22 09:42:54 -0800, Jacob Champion wrote:\n> I'm happy to contribute cycles to a Meson port when you're ready for\n> it. From a skim it seems like maybe in-source generation isn't a focus for\n> Meson [1]. They might encourage us to write custom Python for it anyway?\n\nYou'd normally just invoke commands for updating sources as run_target()s, not\ncustom targets. Or generate them in the build tree via custom_target() and\nthen copy to the source tree via a run_target().\n\nFor this case I think it'd suffice to add a run target that does something\nlike\nmake -C ~/src/postgresql/src/test/ssl/ -f ~/src/postgresql/src/test/ssl/sslfiles.mk sslfiles OPENSSL=openssl\n\nobviously with the necessary things being replaced by the relevant variables.\n\n\nsslfiles.mk doesn't depend on the rest of the buildsystem, and is a rarely\nexecuted command, so I don't see a problem with using make to update the\nfiles. At least for a long while.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Feb 2023 11:33:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: meson and sslfiles.mk in src/test/ssl/"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 11:33:09AM -0800, Andres Freund wrote:\n> sslfiles.mk doesn't depend on the rest of the buildsystem, and is a rarely\n> executed command, so I don't see a problem with using make to update the\n> files. At least for a long while.\n\nAgreed to keep things simple for now, even if it means an implicit\ndependency to make for developers.\n--\nMichael",
"msg_date": "Thu, 23 Feb 2023 09:43:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: meson and sslfiles.mk in src/test/ssl/"
}
] |