[
{
"msg_contents": "Folks,\n\nI noticed that the API document for IMPORT FOREIGN SCHEMA states in\npart:\n\n It should return a list of C strings, each of which must contain a\n CREATE FOREIGN TABLE command. These strings will be parsed and\n executed by the core server.\n\nA reasonable reading of the above is that it disallows statements\nother than CREATE FOREIGN TABLE, which seems overly restrictive for no\nreason I can discern. The list of C strings seems reasonable as a\nrequirement, but I think it would be better to rephrase this along the\nlines of:\n\n It should return a list of C strings, each of which must contain a\n DDL command, for example CREATE FOREIGN TABLE. These strings will\n be parsed and executed by the core server in order to create the\n objects in the schema.\n\nas a foreign schema might need types (the case I ran across) or other\ndatabase objects like CREATE EXTERNAL ROUTINE, when we dust off the\nimplementation of that, to support it.\n\nI was unable to discern from my draft version of the spec whether\nstatements other than CREATE FOREIGN TABLE are specifically\ndisallowed, or whether it is intended to (be able to) contain CREATE\nROUTINE MAPPING statements.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 4 Aug 2020 05:07:51 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Clarifying the ImportForeignSchema API"
},
{
    "msg_contents": "On Tue, Aug 4, 2020 at 12:08 David Fetter <david@fetter.org> wrote:\n>\n> Folks,\n>\n> I noticed that the API document for IMPORT FOREIGN SCHEMA states in\n> part:\n>\n> It should return a list of C strings, each of which must contain a\n> CREATE FOREIGN TABLE command. These strings will be parsed and\n> executed by the core server.\n>\n> A reasonable reading of the above is that it disallows statements\n> other than CREATE FOREIGN TABLE, which seems overly restrictive for no\n> reason I can discern. The list of C strings seems reasonable as a\n> requirement, but I think it would be better to rephrase this along the\n> lines of:\n>\n> It should return a list of C strings, each of which must contain a\n> DDL command, for example CREATE FOREIGN TABLE. These strings will\n> be parsed and executed by the core server in order to create the\n> objects in the schema.\n>\n> as a foreign schema might need types (the case I ran across) or other\n> database objects like CREATE EXTERNAL ROUTINE, when we dust off the\n> implementation of that, to support it.\n\n+1\n\nA while back I was considering using IMPORT FOREIGN SCHEMA to import\nobject comments (which IMHO can be considered part of the schema) and was\npuzzled by the above. I never pursued that further due to lack of\ntime/priorities;\nIIRC technically it wouldn't have been an issue regardless of what the spec\nmay or may not say (I couldn't find anything at the time).\n\nRegards\n\nIan Barwick\n\n-- \nIan Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 4 Aug 2020 13:39:35 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Clarifying the ImportForeignSchema API"
}
]
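David's point in the thread above is that an `IMPORT FOREIGN SCHEMA` handler may need to emit DDL other than `CREATE FOREIGN TABLE`, e.g. a `CREATE TYPE` that a foreign table's column depends on. The real callback is C (an FDW's `ImportForeignSchema` routine returning a `List` of command strings); the Python below is only a stand-in sketch of that logic, and every name in it (`import_foreign_schema`, `remote_srv`, the column-metadata shape) is invented for illustration:

```python
# Hypothetical sketch (NOT the real C FDW API): build the list of DDL
# command strings that the core server would parse and execute, with any
# required CREATE TYPE emitted before the CREATE FOREIGN TABLE using it.

def import_foreign_schema(remote_tables, local_schema):
    """remote_tables maps table name -> list of (column, type, enum_labels)
    tuples, where enum_labels is a list for a custom enum type, else None.
    All names here are invented for illustration."""
    commands = []
    emitted_types = set()
    for table, columns in remote_tables.items():
        # Emit dependent type definitions before the table that uses them.
        for _col, typ, enum_labels in columns:
            if enum_labels and typ not in emitted_types:
                labels = ", ".join("'%s'" % l for l in enum_labels)
                commands.append("CREATE TYPE %s AS ENUM (%s)" % (typ, labels))
                emitted_types.add(typ)
        col_defs = ", ".join("%s %s" % (col, typ) for col, typ, _ in columns)
        commands.append(
            "CREATE FOREIGN TABLE %s.%s (%s) SERVER remote_srv"
            % (local_schema, table, col_defs)
        )
    return commands

cmds = import_foreign_schema(
    {"orders": [("id", "bigint", None),
                ("status", "order_status", ["new", "done"])]},
    "imported",
)
for c in cmds:
    print(c)
```

The point is ordering: the dependent `CREATE TYPE` has to appear in the returned list before the `CREATE FOREIGN TABLE` that references it, which a literal reading of "must contain a CREATE FOREIGN TABLE command" appears to rule out.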
[
{
    "msg_contents": "Hi hackers,\n\nWhen I define a view involving `character varying`, the type is cast to text. See:\n\ngpadmin=# CREATE TABLE foobar (a character varying);\nCREATE TABLE\ngpadmin=# CREATE VIEW fooview AS SELECT * FROM foobar WHERE a::character varying = ANY(ARRAY['foo'::character varying, 'bar'::character varying]);\nCREATE VIEW\ngpadmin=# \\d+ fooview\n View \"public.fooview\"\n Column | Type | Collation | Nullable | Default | Storage | Description\n--------+-------------------+-----------+----------+---------+----------+-------------\n a | character varying | | | | extended |\nView definition:\n SELECT foobar.a\n FROM foobar\n WHERE foobar.a::text = ANY (ARRAY['foo'::character varying, 'bar'::character varying]::text[]);\n\ngpadmin=# create view barview as select * from foobar where a=any(array['foo','bar']);\nCREATE VIEW\ngpadmin=# \\d+ barview\n View \"public.barview\"\n Column | Type | Collation | Nullable | Default | Storage | Description\n--------+-------------------+-----------+----------+---------+----------+-------------\n a | character varying | | | | extended |\nView definition:\n SELECT foobar.a\n FROM foobar\n WHERE foobar.a::text = ANY (ARRAY['foo'::text, 'bar'::text]);\n\n\nMy question is: is this expected behavior or not?\nThank you.\n\nRegards,\nHao Wu",
"msg_date": "Tue, 4 Aug 2020 03:55:43 +0000",
"msg_from": "Hao Wu <hawu@vmware.com>",
"msg_from_op": true,
"msg_subject": "Rewrite view?"
}
]
[
{
    "msg_contents": "Hi hackers,\n\nI want to share the results of my latest research on implementing an LSM \nindex in Postgres.\nMany modern databases (RocksDB, MongoDB, Tarantool,...) use an LSM \ntree instead of a classical B-Tree.\n\nOn the one hand, the capacity of RAM in modern servers makes it possible to keep the \nwhole database in memory.\nThis leads to the anti-caching approach proposed by Michael Stonebraker:\nhttps://15721.courses.cs.cmu.edu/spring2016/papers/hstore-anticaching.pdf\n\nOn the other hand, maintaining indexes is one of the most \nimportant factors limiting database performance.\nPostgres is able to insert records into a table without indexes at almost \nlinear disk write speed (hundreds of megabytes per second).\nBut if the table contains indexes, inserted keys have random values, and the \nindexes don't fit in memory, then we observe a dramatic degradation of \nperformance. Average HDD access time is about 10 msec, which corresponds \nto 100 random reads per second. If the table has several indexes\nand is large enough not to fit in memory, then insert speed can be as \nlow as tens of TPS. Certainly an SSD can provide much better random access time,\nbut random reads are still slow.\n\nThe LSM approach tries to address this problem.\nFirst of all I started my experiments with RocksDB (maybe the most popular \nLSM-based key-value storage, used in many databases).\nThere was an FDW for RocksDB from the VidarDB project: \nhttps://github.com/vidardb/pgrocks\nSince RocksDB is a multithreaded embedded database engine and Postgres \nis based on a multiprocess architecture,\nthey used an interesting \"server inside server\" approach: there is a bgworker \nprocess which works with RocksDB, and\nbackends send requests to it through a shared memory queue.\n\nI have significantly rewritten their FDW implementation: the original RocksDB \nserver implementation was single threaded.\nI have made it multithreaded, making it possible to run multiple RocksDB \nrequests in parallel.\nMy implementation can be found here:\nhttps://github.com/postgrespro/lsm\n\nSome benchmark results.\nThe benchmark is just insertion of 250 million records with random keys \ninto an inclusive index containing one bigint key and 8 bigint fields.\nThe size of the index is about 20Gb and the target system has 16GB of RAM:\n\n\nIndex \tClients \tTPS\nInclusive B-Tree \t1 \t9387\nInclusive B-Tree \t10 \t18761\nRocksDB FDW \t1 \t138350\nRocksDB FDW \t10 \t122369\nRocksDB \t1 \t166333\nRocksDB \t10 \t141482\n\n\nAs you can see, there is about a 10x difference.\nMaybe somebody will find this idea of using an IOT (index organized \ntable) based on RocksDB in Postgres useful.\nBut this approach violates all the ACID properties of Postgres:\nthere is no atomicity and consistency (in principle RocksDB supports \n2PC, but it is not used here),\nisolation corresponds to something like \"read uncommitted\",\nand concerning durability - it is all up to RocksDB, and I have serious \ndoubts that it will survive a failure, especially with sync write mode \ndisabled.\nSo I considered this project mostly as a prototype for estimating the \nefficiency of the LSM approach.\n\nThen I thought about implementing the ideas of LSM using the standard Postgres nbtree.\n\nWe need two indexes: one small one for fast inserts and another - the big (main) \nindex. The top index is small enough to fit in memory,\nso inserts into this index are very fast.\nPeriodically we merge data from the top index into the base index and \ntruncate the top index. To prevent blocking of inserts into the table\nwhile we are merging indexes, we can add one more index, which will be \nused during the merge.\n\nSo the final architecture of Lsm3 is the following:\ntwo top indexes used in a cyclic way, and one main index. When a top index \nreaches some threshold value,\nwe initiate a merge with the main index, done by a bgworker, and switch to \nthe other top index.\nSince merging of the indexes is done in the background, it doesn't affect \ninsert speed.\nUnfortunately the Postgres Index AM has no bulk insert operation, so we \nhave to perform normal inserts.\nBut the inserted data is already sorted by key, which should improve access \nlocality and partly solve the random reads problem for the base index.\n\nCertainly, to perform a search in Lsm3 we have to make lookups in all three \nindexes and merge the search results\n(in the case of a unique index we can avoid extra searches if the searched value \nis found in a top index).\nIt can happen that during a merge of the top and base indexes the same TID can \nbe found in both of them.\nBut such duplicates can be easily eliminated during the merge of search results.\n\nAs far as we are using standard nbtree indexes, there is no need to worry \nabout logging information in the WAL.\nThere is no need to use inefficient \"generic WAL records\" or patch the \nkernel by adding our own WAL records.\n\nAn implementation of the Lsm3 Index AM as a standard Postgres extension is \navailable here:\nhttps://github.com/postgrespro/lsm3\n\nI have performed the same benchmark with random inserts (described \nabove) for Lsm3:\n\nIndex \tClients \tTPS\nInclusive B-Tree \t1 \t9387\nInclusive B-Tree \t10 \t18761\nRocksDB FDW \t1 \t138350\nRocksDB FDW \t10 \t122369\nRocksDB \t1 \t166333\nRocksDB \t10 \t141482\nLsm3 \t1 \t151699\nLsm3 \t10 \t65997\n\n\nThe size of the nbtree is about 29Gb; single client performance is even higher \nthan that of the RocksDB FDW, but the parallel results are significantly worse.\nSo Lsm3 can provide a significant improvement in performance for large indexes \nnot fitting in main memory.\nAnd the larger the ratio between index size and RAM size is, the larger the \nbenefit in insertion speed you get.\nLsm3 is just a standard Postgres extension, fully integrated into the Postgres \ninfrastructure (MVCC, WAL, backups,...).\nSo I hope it can be useful when standard indexes become a bottleneck.\n\n\nI will be glad to receive any feedback, maybe some change requests or \nproposals.\n\nBest regards,\nKonstantin",
"msg_date": "Tue, 4 Aug 2020 11:22:13 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "LSM tree for Postgres"
},
{
    "msg_contents": "Hi!\n\nOn Tue, Aug 4, 2020 at 11:22 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> I want to share results of my last research of implementing LSM index in Postgres.\n> Most of modern databases (RocksDB, MongoDB, Tarantool,...) are using LSM tree instead of classical B-Tree.\n\nI wouldn't say it that way. I would say they are providing LSM in\naddition to the B-tree. For instance WiredTiger (which is the main\nengine for MongoDB) provides both B-tree and LSM. As far as I know, Tarantool\nprovides at least an in-memory B-tree. If RocksDB is used as an\nengine for MySQL, then it's also an addition to the B-tree, which is\nprovided by InnoDB. Also, the implementations of B-trees in the mentioned\nDBMSes are very different. I would say none of them is purely\nclassical.\n\n> LSM approach tries to address this problem.\n\nLSM has great use-cases for sure.\n\n> I have significantly rewriten their FDW implementation: original RocksDB server implementation was single threaded.\n> I have made it multitheaded making it possible to run multiple RocksDB requests in parallel.\n> My implementation can be found there:\n> https://github.com/postgrespro/lsm\n\nGreat, thank you for your work.\n\n> Some results of benchmarking.\n> Benchmark is just insertion of 250 millions of records with random key in inclusive index containing one bigint key and 8 bigint fields.\n> SIze of index is about 20Gb and target system has 16GB of RAM:\n\nWhat storage do you use?\n\n> As you can see there is about 10 times difference.\n> May be somebody will find useful this idea of using IOT (index organized table) based on RocksDB in Postgres.\n> But this approach violates all ACID properties of Postgres:\n> there is no atomicity and consistency (in principle RocksDB supports 2PC, but it is not used here),\n> isolation corresponds to something like \"read uncommitted\",\n> and concerning durability - it is all up to RocksDB and I have serious doubts that it will survive failure especially with sync write mode disabled.\n> So I considered this project mostly as prototype for estimating efficiency of LSM approach.\n\nYes, integration of WAL and snapshots between Postgres and RocksDB is\nproblematic. I also doubt that RocksDB can use the full power of\nthe Postgres extensible type system.\n\n> Then I think about implementing ideas of LSM using standard Postgres nbtree.\n>\n> We need two indexes: one small for fast inserts and another - big (main) index. This top index is small enough to fit in memory\n> so inserts in this index are very fast.\n> Periodically we will merge data from top index to base index and truncate the top index. To prevent blocking of inserts in the table\n> while we are merging indexes we can add one more index, which will be used during merge.\n>\n> So final architecture of Lsm3 is the following:\n> two top indexes used in cyclic way and one main index. When top index reaches some threshold value\n> we initiate merge with main index, done by bgworker and switch to another top index.\n> As far as merging indexes is done in background, it doesn't affect insert speed.\n> Unfortunately Postgres Index AM has not bulk insert operation, so we have to perform normal inserts.\n> But inserted data is already sorted by key which should improve access locality and partly solve random reads problem for base index.\n>\n> Certainly to perform search in Lsm3 we have to make lookups in all three indexes and merge search results.\n> (in case of unique index we can avoid extra searches if searched value is found in top index).\n> It can happen that during merge of top and base indexes the same TID can be found in both of them.\n> But such duplicates can be easily eliminated during merge of search results.\n\nYou use a fixed number of trees. Is this a limitation of the prototype, or\nan intentional design decision? I guess if the base index is orders of magnitude\nbigger than RAM, this scheme can degrade greatly.\n\n> As far as we are using standard nbtree indexes there is no need to worry about logging information in WAL.\n> There is no need to use inefficient \"generic WAL records\" or patch kernel by adding own WAL records.\n\nAs I understand it, the merge operations are logged in the same way as ordinary\ninserts. This seems acceptable, but a single insert operation would\neventually generate several times more WAL than it does in a B-tree (especially\nif we have an implementation with a flexible number of trees). In\nprinciple that could be avoided if recovery could replay the logic of the tree\nmerge on its side. But this approach can hardly fit Postgres in many\nways.\n\n> I have performed the same benchmark with random inserts (described above) for Lsm3:\n>\n> Index Clients TPS\n> Inclusive B-Tree 1 9387\n> Inclusive B-Tree 10 18761\n> RocksDB FDW 1 138350\n> RocksDB FDW 10 122369\n> RocksDB 1 166333\n> RocksDB 10 141482\n> Lsm3 1 151699\n> Lsm3 10 65997\n>\n> Size of nbtree is about 29Gb, single client performance is even higher than of RocksDB FDW, but parallel results are signficantly worser.\n\nDid you investigate the source of the degradation? Such degradation\ndoesn't yet look inevitable to me. Probably, things could be\nimproved.\n\n> I will be glad to receive any feedback, may be some change requests or proposals.\n\nAs I understand it, you benchmarked just inserts. But what about vacuum? I\nthink the way Postgres vacuum works now isn't optimal for an LSM.\nPostgres vacuum requires a full scan of the index, because it provides a\nbitmap of tids to be deleted without information about index keys. For\nan LSM, it would be better if vacuum pushed delete requests to the\ntop level of the LSM (as specially marked tuples or something). Thanks to\nthat, index deletes could be as efficient as inserts. This is\nespecially important for an LSM with many levels and/or aggressive\nvacuum.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 4 Aug 2020 18:04:40 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LSM tree for Postgres"
},
{
"msg_contents": "On Tue, Aug 04, 2020 at 11:22:13AM +0300, Konstantin Knizhnik wrote:\n>Hi hackers,\n>\n>I want to share results of my last research of implementing LSM index \n>in Postgres.\n>Most of modern databases (RocksDB, MongoDB, Tarantool,...) are using \n>LSM tree instead of classical B-Tree.\n>\n\nI was under the impression that LSM is more an alternative primary\nstorage, not for indexes. Or am I wrong / confused?\n\n>From one side, capacity of RAM at modern servers allows to keep the \n>whole database in memory.\n>It leads to the anti-caching approach proposed by Michael Stonebraker\n>https://15721.courses.cs.cmu.edu/spring2016/papers/hstore-anticaching.pdf\n>\n>From the other side: maintaining if indexes is one of the most \n>important factor limiting database performance.\n>Postgres is able to insert records in table without indexes almost \n>with linear disk write speed(hundred megabytes per second).\n>But if table contains indexes, inserted keys have random values and \n>indexes don't fill in memory then we observer dramatic degradation of \n>performance. Average HDD access time is about 10msec, which \n>corresponds to 100 random reads per second. If table has several \n>indexes\n>and is large enough not to fit in memory, then insert speed can be as \n>low as tens TPS. Certainly SSD can provide much better random access \n>time,\n>but still random reads are slow.\n>\n\nTrue. 
Indexes (the way we do them) almost inevitably cause random I/O.\n\n>LSM approach tries to address this problem.\n>First of all I started my experiments with RocksDB (may be most \n>popular LSM-based key-value storage, used in many databases).\n>There was FDW for RocksDB from VidarDB project: \n>https://github.com/vidardb/pgrocks\n>As far as RocksDB is multuthreaded embedded database engine and \n>Postgres is based on multiprocess architecture,\n>them used interesting approach \"server inside server\": there is \n>bgworker process which works with RocksDB and\n>backends sendind requests to it through shared memory queue.\n>\n>I have significantly rewriten their FDW implementation: original \n>RocksDB server implementation was single threaded.\n>I have made it multitheaded making it possible to run multiple RocksDB \n>requests in parallel.\n>My implementation can be found there:\n>https://github.com/postgrespro/lsm\n>\n>Some results of benchmarking.\n>Benchmark is just insertion of 250 millions of records with random key \n>in inclusive index containing one bigint key and 8 bigint fields.\n>SIze of index is about 20Gb and target system has 16GB of RAM:\n>\n>\n>Index \tClients \tTPS\n>Inclusive B-Tree \t1 \t9387\n>Inclusive B-Tree \t10 \t18761\n>RocksDB FDW \t1 \t138350\n>RocksDB FDW \t10 \t122369\n>RocksDB \t1 \t166333\n>RocksDB \t10 \t141482\n>\n\nInteresting, although those are just writes, right? Do you have any\nnumbers for read? Also, what are the numbers when you start with \"larger\nthan RAM\" data (i.e. 
ignoring the initial period when the index fits\ninto memory)?\n\n>\n>As you can see there is about 10 times difference.\n>May be somebody will find useful this idea of using IOT (index \n>organized table) based on RocksDB in Postgres.\n>But this approach violates all ACID properties of Postgres:\n>there is no atomicity and consistency (in principle RocksDB supports \n>2PC, but it is not used here),\n>isolation corresponds to something like \"read uncommitted\",\n>and concerning durability - it is all up to RocksDB and I have \n>serious doubts that it will survive failure especially with sync write \n>mode disabled.\n>So I considered this project mostly as prototype for estimating \n>efficiency of LSM approach.\n>\n\nYeah, I think in general an FDW to a database with a different consistency\nmodel is not going to get us very far ... Good for PoC experiments, but\nnot really designed for stuff like this. Also, in my experience FDW has\nsignificant per-row overhead.\n\n>Then I think about implementing ideas of LSM using standard Postgres nbtree.\n>\n>We need two indexes: one small for fast inserts and another - big \n>(main) index. This top index is small enough to fit in memory\n>so inserts in this index are very fast.\n>Periodically we will merge data from top index to base index and \n>truncate the top index. To prevent blocking of inserts in the table\n>while we are merging indexes we can add one more index, which will \n>be used during merge.\n>\n>So final architecture of Lsm3 is the following:\n>two top indexes used in cyclic way and one main index. When top index \n>reaches some threshold value\n>we initiate merge with main index, done by bgworker and switch to \n>another top index.\n>As far as merging indexes is done in background, it doesn't affect \n>insert speed.\n>Unfortunately Postgres Index AM has not bulk insert operation, so we \n>have to perform normal inserts.\n>But inserted data is already sorted by key which should improve access \n>locality and partly solve random reads problem for base index.\n>\n>Certainly to perform search in Lsm3 we have to make lookups in all \n>three indexes and merge search results.\n>(in case of unique index we can avoid extra searches if searched value \n>is found in top index).\n>It can happen that during merge of top and base indexes the same TID \n>can be found in both of them.\n>But such duplicates can be easily eliminated during merge of search results.\n>\n>As far as we are using standard nbtree indexes there is no need to \n>worry about logging information in WAL.\n>There is no need to use inefficient \"generic WAL records\" or patch \n>kernel by adding own WAL records.\n>\n\nMakes sense, I guess. I always imagined we could do something like this\nby adding \"buffers\" into the btree directly, and instead of pushing them\nall the way down to the leaf pages we'd only insert them into the first\nbuffer (and full buffers would get \"propagated\" in the background).\n\nHow many such \"buffers\" we'd need / in which places in the btree is an\nopen question - I suppose we could have a buffer for every internal page.\nTypical indexes have ~1% in internal pages, so even if each 8kB internal\npage has an associated 8kB buffer page, it's not going to increase the\nsize significantly. Of course, it's going to make lookups more expensive\nbecause you have to search all the buffers on the way to the leaf page\n(I wonder if we could improve this by keeping a tiny bloom filter for\nthose buffers, representing data in the subtree).\n\nNot sure if this would be simpler/cheaper than maintaining multiple\nseparate indexes, which is what you propose.\n\nBTW how would your approach work with unique indexes, speculative\ninserts etc.?\n\n>Implementation of Lsm3 Index AM as standart Postgres extension is \n>available here:\n>https://github.com/postgrespro/lsm3\n>\n>I have performed the same benchmark with random inserts (described \n>above) for Lsm3:\n>\n>Index \tClients \tTPS\n>Inclusive B-Tree \t1 \t9387\n>Inclusive B-Tree \t10 \t18761\n>RocksDB FDW \t1 \t138350\n>RocksDB FDW \t10 \t122369\n>RocksDB \t1 \t166333\n>RocksDB \t10 \t141482\n>Lsm3 \t1 \t151699\n>Lsm3 \t10 \t65997\n>\n>\n>Size of nbtree is about 29Gb, single client performance is even higher \n>than of RocksDB FDW, but parallel results are signficantly worser.\n>So Lsm3 can provide significant improve of performance for large \n>indexes not fitting in main memory.\n>And the larger ratio between index size and RAM size is, the larger \n>benefit in insertion speed you get.\n>Lsm3 is just standard postgres extension, fully integrated in Postgres \n>infrastructure (MVCC, WAL, backups,...).\n>SO I hope it can be useful when standard indexes becomes bottleneck.\n>\n\nIsn't it a bit suspicious that with more clients the throughput actually\ndrops significantly? Is this merely due to PoC stage, or is there some\ninherent concurrency bottleneck?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 4 Aug 2020 17:11:36 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: LSM tree for Postgres"
},
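The buffered-btree variant Tomas sketches above (a buffer attached to each internal page, with a tiny Bloom filter so lookups can usually skip buffers that cannot contain the key) could look roughly like this. All class names, sizes, and hash choices here are illustrative, not taken from any PostgreSQL code:

```python
import hashlib

class BufferedPage:
    """Tiny Bloom filter guarding an internal page's insert buffer.

    A lookup consults the filter first; only on a possible hit does it
    scan the buffer itself, so an empty/irrelevant buffer costs only a
    few bit tests. Purely a conceptual sketch.
    """
    def __init__(self, nbits=1024, nhashes=3):
        self.nbits = nbits
        self.nhashes = nhashes
        self.bits = 0
        self.buffer = []  # pending (key, tid) pairs for this subtree

    def _positions(self, key):
        for i in range(self.nhashes):
            h = hashlib.blake2b(f"{i}:{key}".encode(), digest_size=8).digest()
            yield int.from_bytes(h, "big") % self.nbits

    def insert(self, key, tid):
        self.buffer.append((key, tid))
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        # Bloom filter: no false negatives, occasional false positives.
        return all(self.bits >> pos & 1 for pos in self._positions(key))

    def lookup(self, key):
        if not self.might_contain(key):
            return []  # filter rules the key out, skip the buffer scan
        return [tid for k, tid in self.buffer if k == key]
```

On a descent to a leaf page, each internal page's filter is probed this way; the extra read cost per level is a handful of bit tests unless the buffer may actually hold the key.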
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Tue, Aug 04, 2020 at 11:22:13AM +0300, Konstantin Knizhnik wrote:\n> >two top indexes used in cyclic way and one main index. When top index\n> >reaches some threshold value\n> >we initiate merge with main index, done by bgworker and switch to another\n> >top index.\n> >As far as merging indexes is done in background, it doesn't affect insert\n> >speed.\n> >Unfortunately Postgres Index AM has not bulk insert operation, so we have\n> >to perform normal inserts.\n> >But inserted data is already sorted by key which should improve access\n> >locality and partly solve random reads problem for base index.\n> >\n> >Certainly to perform search in Lsm3 we have to make lookups in all three\n> >indexes and merge search results.\n> >(in case of unique index we can avoid extra searches if searched value is\n> >found in top index).\n> >It can happen that during merge of top and base indexes the same TID can\n> >be found in both of them.\n> >But such duplicates can be easily eliminated during merge of search results.\n> >\n> >As far as we are using standard nbtree indexes there is no need to worry\n> >about logging information in WAL.\n> >There is no need to use inefficient \"generic WAL records\" or patch kernel\n> >by adding own WAL records.\n> \n> Makes sense, I guess. 
I always imagined we could do something like this\n> by adding \"buffers\" into the btree directly, and instead of pushing them\n> all the way down to the leaf pages we'd only insert them into the first\n> buffer (and full buffers would get \"propagated\" in the background).\n\nI get that it's not quite the same, but this all is reminding me of the\nGIN pending list and making me wonder if there's some way to generalize\nthat (or come up with something new that would work for GIN too).\n\nIndependently while considering this, I don't think the issues around\nhow to deal with unique btrees properly have really been considered - you\ncertainly can't stop your search on the first tuple you find even if the\nindex is unique, since the \"unique\" btree could certainly have multiple\nentries for a given key and you might need to find a different one.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 4 Aug 2020 11:18:37 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: LSM tree for Postgres"
},
{
"msg_contents": "On Tue, Aug 4, 2020 at 6:11 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Tue, Aug 04, 2020 at 11:22:13AM +0300, Konstantin Knizhnik wrote:\n> >Hi hackers,\n> >\n> >I want to share results of my last research of implementing LSM index\n> >in Postgres.\n> >Most of modern databases (RocksDB, MongoDB, Tarantool,...) are using\n> >LSM tree instead of classical B-Tree.\n> >\n>\n> I was under the impression that LSM is more an alternative primary\n> storage, not for indexes. Or am I wrong / confused?\n\nAs I understand, there are different use-cases. We can use LSM for\nindex, and this is good already. Such indexes would be faster for\ninsertions and probably even vacuum if we redesign it (see my previous\nmessage), but slower for search. But for updates/deletes you still\nhave to do random access to the heap. And you also need to find a\nheap record to update/delete, probably using the LSM index (and it's\nslower for search than B-tree).\n\nLSM as a primary storage can do more advanced tricks. For instance,\nsome updates/inserts_on_conflict could be also just pushed to the top\nlevel of LSM without fetching the affected record before.\n\nSo, in my point of view LSM as an index AM is far not a full power LSM\nfor PostgreSQL, but it's still useful. Large insert-only tables can\nbenefit from LSM. Large tables with many indexes could also benefit,\nbecause non-HOT updates will become cheaper.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 4 Aug 2020 18:24:39 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LSM tree for Postgres"
},
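The "blind write" trick Alexander mentions (pushing updates or inserts-on-conflict into the LSM top level without fetching the affected record first) reduces to an append into the in-memory level, with the newest version winning at read time. A minimal conceptual sketch, not RocksDB's actual API:

```python
class LsmMemtable:
    """Illustrative 'blind write' into an LSM top level.

    An update can be appended without first reading the old version;
    at read (or merge) time the entry with the highest sequence number
    for a key shadows all older ones.
    """
    def __init__(self):
        self.entries = []  # append-only list of (seqno, key, value)
        self.seqno = 0

    def upsert(self, key, value):
        # No read of the previous version: O(1) append, no random I/O.
        self.seqno += 1
        self.entries.append((self.seqno, key, value))

    def get(self, key):
        # Scan newest-first; the most recent entry for the key wins.
        for seqno, k, v in reversed(self.entries):
            if k == key:
                return v
        return None
```

This is exactly what an index AM cannot do for heap updates: the heap record still has to be located, which is why LSM-as-index is weaker than LSM-as-primary-storage.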
{
"msg_contents": "\n\nOn 04.08.2020 18:04, Alexander Korotkov wrote:\n> Hi!\n>\n> On Tue, Aug 4, 2020 at 11:22 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> I want to share results of my last research of implementing LSM index in Postgres.\n>> Most of modern databases (RocksDB, MongoDB, Tarantool,...) are using LSM tree instead of classical B-Tree.\n> I wouldn't say it that way. I would say they are providing LSM in\n> addition to the B-tree. For instance WiredTiger (which is the main\n> engine for MongoDB) provides both B-tree and LSM. As I know Tarantool\n> provides at least an in-memory B-tree. If RocksDB is used as an\n> engine for MySQL, then it's also an addition to B-tree, which is\n> provided by InnoDB. Also, implementation of B-tree's in mentioned\n> DBMSes are very different. I would say none of them is purely\n> classical.\nI am not suggesting to completely replace B-Tree with LSM.\nMy experiments shows tah Postgres nbtree is faster than RocksDB when \nindex is small and fits in memory.\nDefinitely I have suggest to use LSM only for huge tables which indexes \nare much larger than size of available memory.\n\n\n> Some results of benchmarking.\n>> Benchmark is just insertion of 250 millions of records with random key in inclusive index containing one bigint key and 8 bigint fields.\n>> SIze of index is about 20Gb and target system has 16GB of RAM:\n> What storage do you use?\n\nIt is my notebook with 16GB of RAM and SSD.\nCertainly it should be tested at more serious hardware.\nBut there is a \"problem\": powerful servers now have hundreds of \ngigabytes of memory.\nTo let LSM index demonstrates it advantages we need to create index not \nfitting in memory.\nAnd CPU speed of very expensive servers is not significantly faster than \nof my notebook.\nPerforming may inserts in parallel also will not significantly increase \npopulation of table with data: multiple bottlenecks in Postgres\ndo not allow to reach liner scalability even thoughwe 
have hundreds of \nCPU cores.\n\n> Yes, integration of WAL and snapshots between Postgres and RocksDB is\n> problematic. I also doubt that RocksDB can use the full power of\n> Postgres extendable type system.\nThis implementation supports only basic scalar Postgres types.\nActually RocksDB is dealing only with string key-value pairs.\nSo we need to serialize Postgres types into arrays of bytes (and provide \nright ordering)!\n\n\n>\n> You use a fixed number of trees. Is this a limitation of prototype or\n> intention of design? I guess if the base index is orders of magnitude\n> bigger than RAM, this scheme can degrade greatly.\nI do not understand why we need multiple indexes.\nWe need one \"hot\" index which fits in memory to perform fast inserts.\nBut it should not be too small to be able to accumulate substantial \namount of records to provide efficient bulk insert.\nI expect that top index can be efficiently merged with the base index \nbecause of better access locality.\nI.e. we will insert multiple entries into one B-Tree page and so \nminimize slowdown of random reads.\n\nA third index is needed to perform parallel merge (while merge is in \nprogress top index will be blocked and we can not perform inserts in it).\nI do not understand benefits of performing more than one merge in \nparallel: it will only increase fraction of random reads.\n\nDegradation certainly takes place. 
But it is not so critical as in case \nof standard nbtree.\nIt is possible to tune threshold for top index size to make merge most \nefficient.\nBut we can not truncate and swap index before we complete the merge.\nSo if the merge operation takes a long time, then it will cause uncontrolled growth of \nthe top index and it will not fit in memory any more.\nIt will lead to further slowdown (so we have negative feedback here).\n\nCertainly it is possible to create more top indexes, keeping their size \nsmall enough to fit in memory.\nBut in this case search will require merge not of 3, but of N indexes.\nI think that it may cause unacceptable slowdown of search operations.\nAnd it is highly undesirable, taking into account that most applications \nsend more select-only queries than updates.\n\n\n>\n>> As far as we are using standard nbtree indexes there is no need to worry about logging information in WAL.\n>> There is no need to use inefficient \"generic WAL records\" or patch kernel by adding own WAL records.\n> As I get the merge operations are logged in the same way as ordinal\n> inserts. This seems acceptable, but a single insert operation would\n> eventually cause in times more WAL than it does in B-tree (especially\n> if we'll have an implementation of a flexible number of trees). In\n> principle that could be evaded if recovery could replay logic of trees\n> merge on its side. 
But this approach hardly can fit Postgres in many\n> ways.\n\nYes, this approach increases the amount of logged data twice:\nwe need to write WAL for inserts into both top and base indexes.\nAnd it will certainly have negative influence on performance.\nUnfortunately I have no idea how to avoid it without patching Postgres core.\n>\n>> I have performed the same benchmark with random inserts (described above) for Lsm3:\n>>\n>> Index Clients TPS\n>> Inclusive B-Tree 1 9387\n>> Inclusive B-Tree 10 18761\n>> RocksDB FDW 1 138350\n>> RocksDB FDW 10 122369\n>> RocksDB 1 166333\n>> RocksDB 10 141482\n>> Lsm3 1 151699\n>> Lsm3 10 65997\n>>\n>> Size of nbtree is about 29Gb, single client performance is even higher than of RocksDB FDW, but parallel results are signficantly worser.\n> Did you investigate the source of degradation? Such degradation\n> doesn't yet look inevitable for me. Probably, things could be\n> improved.\n\nAs I explained above, insertion in Lsm3 now is a process with negative \nfeedback.\nIf insertion rate is higher than merge speed then top index is blown and \ndoesn't fit in memory any more which in turn causes\nmore slowdown of merge and as a result - further increase of top index size.\nSeveral parallel clients performing inserts can fill top index faster \nthan single client does and faster than it can be merged with main index.\n\nI have tried several approaches which try to slow down inserts to prevent \nundesired growth of top index\n(I have considered two criteria: size of top index and time of merge \noperation).\nBut none of my attempts was successful: it leads to even worse performance.\n\n>\n>> I will be glad to receive any feedback, may be some change requests or proposals.\n> As I get you benchmarked just inserts. But what about vacuum? I\n> think the way Postgres vacuum works for now isn't optimal for lsm.\n> Postgres vacuum requires full scan of index, because it provides a\n> bitmap of tids to be deleted without information of index keys. 
For\n> lsm, it would be better if vacuum would push delete requests to the\n> top level of lsm (as specially marked tuples of something). Thanks to\n> that index deletes could be as efficient as inserts. This is\n> especially important for lsm with many levels and/or aggressive\n> vacuum.\n\nRight now vacuum processes Lsm3 indexes in the usual way.\nRemoving records from top indexes may be not needed at all (just because \nthese indexes will be truncated in any case).\nBut as far as size of top index is expected to be small enough \nvacuuming it should not take a long time,\nso I didn't try to avoid it (although it should not be difficult - just \ndisable ambulkdelete for the corresponding nbtree wrappers).\nConcerning deletes from main index - I do not understand how it can be \noptimized.\n\n\n\n",
"msg_date": "Tue, 4 Aug 2020 19:56:55 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: LSM tree for Postgres"
},
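The scheme Konstantin describes above (two small top indexes used cyclically, with a merge into the base index once the active one reaches a threshold) can be modelled in a few lines. Dicts stand in for nbtree indexes and the merge runs synchronously here, whereas lsm3 hands it to a background worker; everything else is illustrative:

```python
class Lsm3Sketch:
    """Conceptual model of the lsm3 layout: two small 'top' indexes
    used in a cycle plus one large sorted 'base' index."""

    def __init__(self, threshold=4):
        self.threshold = threshold
        self.tops = [{}, {}]   # two top indexes, used cyclically
        self.active = 0        # which top index receives inserts
        self.base = {}         # the big base index

    def insert(self, key, tid):
        self.tops[self.active][key] = tid
        if len(self.tops[self.active]) >= self.threshold:
            # Switch to the other top index; the full one gets merged
            # (in lsm3 this is done asynchronously by a bgworker).
            frozen = self.active
            self.active = 1 - self.active
            self._merge(frozen)

    def _merge(self, idx):
        # Keys are merged in sorted order, which is what gives the
        # base B-tree better access locality than random inserts.
        for key in sorted(self.tops[idx]):
            self.base[key] = self.tops[idx][key]
        self.tops[idx].clear()

    def search(self, key):
        # A lookup must consult all three indexes.
        for top in self.tops:
            if key in top:
                return top[key]
        return self.base.get(key)
```

The negative-feedback problem discussed later in the thread shows up here too: if `insert` outpaces `_merge`, the frozen top index cannot be truncated and the next active one keeps growing.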
{
"msg_contents": "\n\nOn 04.08.2020 18:11, Tomas Vondra wrote:\n> On Tue, Aug 04, 2020 at 11:22:13AM +0300, Konstantin Knizhnik wrote:\n>> Hi hackers,\n>>\n>> I want to share results of my last research of implementing LSM index \n>> in Postgres.\n>> Most of modern databases (RocksDB, MongoDB, Tarantool,...) are using \n>> LSM tree instead of classical B-Tree.\n>>\n>\n> I was under the impression that LSM is more an alternative primary\n> storage, not for indexes. Or am I wrong / confused?\n\nYes, originally I considered LSM for IOT (index organized table).\nAnd RocksDB FDW is actually such implementation of IOT.\nBut implement IOT using existed nbtree indexes is more challenging task:\nI have thought about it but have not tried it yet.\n\n> Interesting, although those are just writes, right? Do you have any\n> numbers for read? Also, what are the numbers when you start with \"larger\n> than RAM\" data (i.e. ignoring the initial period when the index fits\n> into memory)?\n\nWhen benchmark starts, insertion speed is almost the same for Lsm3 and \nstandard nbtree\n(we have to insert record twice, but second insertion s done in background).\nAt the end of benchmark - when we close to 250 million records, Lsm3 \nshows TPS about 20 times faster than nbtree.\nFinally it gives about 6 times difference in elapsed time.\n\n\n> Yeah, I think in general FDW to a database with different consistency\n> model is not going to get us very far ... Good for PoC experiments, but\n> not really designed for stuff like this. 
Also, in my experience FDW has\n> siginficant per-row overhead.\n\nFrom my experience the main drawback of FDW is lack of support of \nparallel operations.\nBut it is important mostly for OLAP, not for OLTP.\n\n> BTW how would your approach work with unique indexes, speculative\n> inserts etc.?\n\nUnique indexes are not supported now.\nAnd I do not see any acceptable solution here.\nIf we have to check presence of a duplicate at the time of insert \nthen it will eliminate all advantages of LSM approach.\nAnd if we postpone to the moment of merge, then... I am afraid that it will \nbe too late.\n\n>\n> Isn't it a bit suspicious that with more clients the throughput actually\n> drops significantly? Is this merely due to PoC stage, or is there some\n> inherent concurrency bottleneck?\n>\nMy explanation is the following (I am not 100% sure that it is true): \nmultiple clients insert records faster than merge bgworker is able to \nmerge them to main index. It causes bloat of the top index and as a result it \ndoesn't fit in memory any more.\nSo we lose advantages of fast inserts. If we have N top indexes instead \nof just 2, we can keep size of each top index small enough.\nBut in this case search operations will have to merge N indexes and so \nsearch is almost N times slower (the fact that each top index fits in memory\ndoesn't mean that all of them fit in memory at the same time, so we \nstill have to read pages from disk during lookups in top indexes).\n\n\n\n\n",
"msg_date": "Tue, 4 Aug 2020 20:18:01 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: LSM tree for Postgres"
},
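Merging the per-index search results, including dropping a TID that shows up in both a top index and the base index mid-merge (the duplicate case mentioned earlier in the thread), is a plain k-way merge. A sketch under the assumption that each scan yields (key, tid) pairs in key order:

```python
import heapq

def merge_index_scans(*scans):
    """Merge several sorted index scans into one sorted stream,
    eliminating duplicate TIDs that may temporarily exist in both a
    top index and the base index while a merge is in progress.
    Illustrative only; real scans would be nbtree cursors."""
    seen_tids = set()
    for key, tid in heapq.merge(*scans):
        if tid in seen_tids:
            continue  # same TID already returned from another index
        seen_tids.add(tid)
        yield key, tid
```

With N top indexes this is an N+1-way merge, which is why the thread worries about search slowing down roughly in proportion to the number of top indexes.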
{
"msg_contents": "\n\nOn 04.08.2020 18:18, Stephen Frost wrote:\n>\n> Independently while considering this, I don't think the issues around\n> how to deal with unique btrees properly has really been considered- you\n> certainly can't stop your search on the first tuple you find even if the\n> index is unique, since the \"unique\" btree could certainly have multiple\n> entries for a given key and you might need to find a different one.\nBut search locates not ANY record with specified key in top index but record\nwhich satisfies snapshot of the transaction. Why do we need more records \nif we know that\nthere are no duplicates?\n\n\n",
"msg_date": "Tue, 4 Aug 2020 20:21:18 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: LSM tree for Postgres"
},
{
"msg_contents": "On Tue, Aug 04, 2020 at 08:18:01PM +0300, Konstantin Knizhnik wrote:\n>\n>\n>On 04.08.2020 18:11, Tomas Vondra wrote:\n>>On Tue, Aug 04, 2020 at 11:22:13AM +0300, Konstantin Knizhnik wrote:\n>>>Hi hackers,\n>>>\n>>>I want to share results of my last research of implementing LSM \n>>>index in Postgres.\n>>>Most of modern databases (RocksDB, MongoDB, Tarantool,...) are \n>>>using LSM tree instead of classical B-Tree.\n>>>\n>>\n>>I was under the impression that LSM is more an alternative primary\n>>storage, not for indexes. Or am I wrong / confused?\n>\n>Yes, originally I considered LSM for IOT (index organized table).\n>And RocksDB FDW is actually such implementation of IOT.\n>But implement IOT using existed nbtree indexes is more challenging task:\n>I have thought about it but have not tried it yet.\n>\n>>Interesting, although those are just writes, right? Do you have any\n>>numbers for read? Also, what are the numbers when you start with \"larger\n>>than RAM\" data (i.e. ignoring the initial period when the index fits\n>>into memory)?\n>\n>When benchmark starts, insertion speed is almost the same for Lsm3 and \n>standard nbtree\n>(we have to insert record twice, but second insertion s done in background).\n>At the end of benchmark - when we close to 250 million records, Lsm3 \n>shows TPS about 20 times faster than nbtree.\n>Finally it gives about 6 times difference in elapsed time.\n>\n\nIMO the 6x difference is rather misleading, as it very much depends on\nthe duration of the benchmark and how much data it ends up with. I think\nit's better to test 'stable states' i.e. with small data set that does\nnot exceed RAM during the whole test, and large ones that already starts\nlarger than RAM. 
Not sure if it makes sense to make a difference between\ncases that fit into shared buffers and those that exceed shared buffers\nbut still fit into RAM.\n\n>\n>>Yeah, I think in general FDW to a database with different consistency\n>>model is not going to get us very far ... Good for PoC experiments, but\n>>not really designed for stuff like this. Also, in my experience FDW has\n>>siginficant per-row overhead.\n>\n>From my experience the main drawback of FDW is lack of support of \n>parallel operations.\n>But it is important mostly for OLAP, not for OLTP.\n>\n\nTrue. There are other overheads, though - having to format/copy the\ntuples is not particularly cheap.\n\n>>BTW how would your approach work with unique indexes, speculative\n>>inserts etc.?\n>\n>Unique indexes are not supported now.\n>And I do not see some acceptable solution here.\n>If we will have to check presence of duplicate at the time of insert \n>then it will eliminate all advantages of LSM approach.\n>And if we postpone to the moment of merge, then... I afraid that it \n>will be too late.\n>\n\nUmmm, but in your response to Stephen you said:\n\n But search locates not ANY record with specified key in top index\n but record which satisfies snapshot of the transaction. Why do we\n need more records if we know that there are no duplicates?\n\nSo how do you know there are no duplicates, if unique indexes are not\nsupported (and may not be for LSM)?\n\n>>\n>>Isn't it a bit suspicious that with more clients the throughput actually\n>>drops significantly? Is this merely due to PoC stage, or is there some\n>>inherent concurrency bottleneck?\n>>\n>My explaination is the following (I am not 100% sure that it is true): \n>multiple clients insert records faster than merge bgworker is able to \n>merge them to main index. It cause blown of top index and as a result \n>it doesn't fir in memory any more.\n>So we loose advantages of fast inserts. 
If we have N top indexes \n>instead of just 2, we can keep size of each top index small enough.\n>But in this case search operations will have to merge N indexes and so \n>search is almost N times slow (the fact that each top index fits in \n>memory\n>doesn't mean that all of the fits in memory at the same time, so we \n>still have to read pages from disk during lookups in top indexes).\n>\n\nHmmm, maybe. Should be easy to verify by monitoring the size of the top\nindex, and limiting it to some reasonable value to keep good\nperformance. Something like gin_pending_list_size I guess.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 4 Aug 2020 19:44:08 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: LSM tree for Postgres"
},
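Tomas's suggestion of capping the top index size (in the spirit of GIN's pending-list limit) amounts to back-pressure on inserters: once the cap is hit, the inserting backend pays for the merge itself instead of letting the top index outgrow memory. A toy model of that policy, with a dict standing in for the base index and a synchronous flush standing in for waiting on the bgworker:

```python
class BoundedTopIndex:
    """Top index with a hard size cap, analogous to the pending-list
    limit idea mentioned above. When the cap is hit, the inserting
    client performs the merge itself rather than letting the top
    index outgrow memory. Purely illustrative."""

    def __init__(self, base, cap):
        self.base = base      # dict standing in for the base index
        self.cap = cap
        self.pending = {}     # the in-memory top index

    def insert(self, key, tid):
        if len(self.pending) >= self.cap:
            self.flush()      # back-pressure: insert pays the merge cost
        self.pending[key] = tid

    def flush(self):
        # Merge in key order for base-index access locality.
        for key in sorted(self.pending):
            self.base[key] = self.pending[key]
        self.pending.clear()

    def lookup(self, key):
        return self.pending.get(key, self.base.get(key))
```

This trades peak insert throughput for a bounded top index, which is exactly the trade-off Konstantin reports being hard to tune: his throttling attempts made overall performance worse.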
{
"msg_contents": "\n\nOn 04.08.2020 20:44, Tomas Vondra wrote:\n> Unique indexes are not supported now.\n>> And I do not see some acceptable solution here.\n>> If we will have to check presence of duplicate at the time of insert \n>> then it will eliminate all advantages of LSM approach.\n>> And if we postpone to the moment of merge, then... I afraid that it \n>> will be too late.\n>>\n>\n> Ummm, but in your response to Stephen you said:\n>\n> But search locates not ANY record with specified key in top index\n> but record which satisfies snapshot of the transaction. Why do we\n> need more records if we know that there are no duplicates?\n>\n> So how do you know there are no duplicates, if unique indexes are not\n> supported (and may not be for LSM)?\n>\n\nIn index AM I marked Lsm3 index as not supporting unique constraint.\nSo it can not be used to enforce unique contraint.\nBut it is possible to specify \"unique\" in index properties.\nIn this case it is responsibility of programmer to guarantee that there \nare no duplicates in the index.\nThis option allows to use this search optimization - locate first record \nsatisfying snapshot and not touch other indexes.\n\n>>>\n>>> Isn't it a bit suspicious that with more clients the throughput \n>>> actually\n>>> drops significantly? Is this merely due to PoC stage, or is there some\n>>> inherent concurrency bottleneck?\n>>>\n>> My explaination is the following (I am not 100% sure that it is \n>> true): multiple clients insert records faster than merge bgworker is \n>> able to merge them to main index. It cause blown of top index and as \n>> a result it doesn't fir in memory any more.\n>> So we loose advantages of fast inserts. 
If we have N top indexes \n>> instead of just 2, we can keep size of each top index small enough.\n>> But in this case search operations will have to merge N indexes and \n>> so search is almost N times slow (the fact that each top index fits \n>> in memory\n>> doesn't mean that all of the fits in memory at the same time, so we \n>> still have to read pages from disk during lookups in top indexes).\n>>\n>\n> Hmmm, maybe. Should be easy to verify by monitoring the size of the top\n> index, and limiting it to some reasonable value to keep good\n> performance. Something like gin_pending_list_size I guess.\n>\n\nLsm3 provides functions for getting the size of the active top index, explicitly \nforcing merge of the top index and\nwaiting for completion of the merge operation.\nOne of the use cases of Lsm3 may be delayed update of indexes.\nFor some applications insert speed is very critical: they can not lose \ndata which is received at a high rate.\nIn this case in working hours we insert data into a small index and at night \ninitiate merge of this index with main index.\n\n\n\n",
"msg_date": "Tue, 4 Aug 2020 21:55:30 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: LSM tree for Postgres"
},
{
"msg_contents": "On Tue, Aug 4, 2020 at 7:56 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> I do not understand why do we need multiple indexes.\n> We need one \"hot\" index which fits in memory to perform fast inserts.\n> But it should not be too small to be able to accumulate substantial\n> amount of records to provide efficient bulk insert.\n> I expect that top index can be efficiently merger with based index\n> because of better access locality.\n> I.e. we will insert multiple entries into one B-Tree page and so\n> minimize slowdown of random reads.\n>\n> Third index is needed to perform parallel merge (while merge is in\n> progress top index will be blocked and we can not perform inserts in it).\n> I do not understand benefits of performing more than one merge in\n> parallel: it will only increase fraction of random reads.\n>\n> Degradation certainly takes place. But it is not so critical as in case\n> of standard nbtree.\n> It is possible to tune threshold for top index size to make merge most\n> efficient.\n> But we can not truncate and swap index before we complete the merge.\n> So if merge operation takes long time, then it will cause exhaustion of\n> top index and it will not fit in memory any more.\n> It will lead to further slowdown (so we have negative feedback here).\n>\n> Certainly it is possible to create more top indexes, keeping their size\n> small enough to fit in memory.\n> But in this case search will require merge not of 3, but of N indexes.\n> I think that it may cause unacceptable slowdown of search operations.\n> And it is highly undesirable, taken in account that most of application\n> send more select-only queries than updates.\n\nThe things you're writing makes me uneasy. 
I initially understood\nlsm3 as a quick and dirty prototype, while you're probably keeping\nsome design in your mind (for instance, original design of LSM).\nHowever, your message makes me think you're trying to defend the\napproach currently implemented in lsm3 extension. Therefore, I've to\ncriticise this approach.\n\n1) The base index can degrade. At first, since merge can cause page\nsplits. Therefore logical ordering of pages will become less\ncorrelated with their physical ordering with each merge.\n2) If your workload will include updates and/or deletes, page\nutilization may also degrade.\n3) While base index degrades, merge performance also degrades.\nTraverse of base index in logical order will require more and more\nrandom reads (at some point almost every page read will be random).\nWhile the base index becomes large and/or bloated, you push fewer top\nindex tuples to a single base index page (at some point you will push\none tuple per page).\n\nOriginal LSM design implies strict guarantees over average resources\nspent per index operation. Your design doesn't. Moreover, I bet lsm3\nwill degrade significantly even on insert-only workload. It should\ndegrade to the performance level of B-tree once you insert enough\ndata. Try something like number_of_merges =\nnumber_of_tuples_per_index_page * 2 and you should see this\ndegradation. 
Real LSM doesn't degrade that way.\n\n> Yes, this approach increase mount of logged data twice:\n> we need to write in WAL inserts in top and base indexes.\n> And it will certainly have negative influence on performance.\n> Unfortunately I have no idea how to avoid it without patching Postgres core.\n\nHuh, I didn't mean \"without patching Postgres core\" :) I mean it's\ndifficult in principle, assuming PostgreSQL recovery is single-process\nand doesn't have access to system catalog (because it might be\ninconsistent).\n\n> Right now vacuum process Lsm3 indexes in usual way.\n> Removing records from top indexes may be not needed at all (just because\n> this indexes will be truncated in any case).\n> But as far as size of top index is expected to be small enough\n> vacuumming it should not take a long time,\n> so I didn't to avoid it (although it should not be difficult - just\n> disable ambulkdelete for correspondent nbtree wrappers).\n\nIt doesn't seem important, but I don't get your point here. Postgres\nexpects ambulkdelete to delete TIDs from index. If you don't delete\nit from the top index, this TID will be merged to the base index. And\nthat could lead to wrong query answers unless you eliminate those TIDs in\na different way (during the merge stage or something).\n\n> Concerning deletes from main index - I do not understand how it can be\n> optimized.\n\nThis is a trick you can learn from almost every LSM implementation.\nFor instance, check docs for leveldb [1] about \"delete marker\". For\nsure, that requires some redesign of the vacuum and can't be done in\nextension (at least in the reasonable way). But, frankly speaking, I\nthink core modifications are inevitable to utilize the power of LSM in\nPostgreSQL.\n\nLinks\n1. https://github.com/google/leveldb/blob/master/doc/impl.md\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 5 Aug 2020 02:59:14 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LSM tree for Postgres"
},
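The leveldb-style "delete marker" Alexander points to makes a delete as cheap as an insert: vacuum would only write a tombstone into the top level, and the tombstone cancels older versions of the key during compaction. A conceptual sketch (not leveldb's or lsm3's actual code; levels are plain dicts here):

```python
TOMBSTONE = object()  # sentinel marking "this key was deleted"

def delete(top_level, key):
    """A delete is just a write of a tombstone into the top level:
    no lookup in lower levels, no full index scan."""
    top_level[key] = TOMBSTONE

def compact(newer, older):
    """Merge two levels; an entry (or tombstone) in the newer level
    shadows the matching entry in the older one. Dropping the
    tombstone itself is only safe when compacting into the oldest
    level, where no older version of the key can still exist, which
    is what this sketch assumes."""
    merged = dict(older)
    merged.update(newer)  # newer level wins on key conflicts
    return {k: v for k, v in merged.items() if v is not TOMBSTONE}
```

This is the part that cannot be done from an index AM extension: Postgres vacuum hands ambulkdelete a TID bitmap without keys, so there is nothing to write a keyed tombstone for, which is why the thread concludes core changes would be needed.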
{
"msg_contents": "On Tue, Aug 4, 2020 at 8:24 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> So, in my point of view LSM as an index AM is far not a full power LSM\n> for PostgreSQL, but it's still useful. Large insert-only tables can\n> benefit from LSM. Large tables with many indexes could also benefit,\n> because non-HOT updates will become cheaper.\n\nRight -- this is why you usually have to choose one or the other. An\nLSM design typically subsumes not just indexing and table storage, but\nalso checkpointing -- you cannot really compare an LSM to a B-Tree\nbecause you really have to talk about other components to make a\nsensible comparison (at which point you're actually comparing two\ntotally different *storage engines*). Roughly speaking, the compaction\nprocess is the equivalent of checkpointing. So you either use (say)\nInnoDB or RocksDB everywhere -- you usually can't have it both ways.\nWell, maybe you can kind of get the benefits of both, but in practice\nLSMs are usually highly optimized for the things that they're good at,\nat the expense of other things. So in practice you kind of have to\nmake an up-front choice. An LSM is definitely not a natural fit for\nthe index access method interface in Postgres.\n\nOne thing that I don't think anyone else made reference to on the\nthread (which is surprising) is that the performance of an LSM is\nusually not measured using any of the conventional metrics that we\ncare about. For example, consider the Facebook MyRocks paper\n\"Optimizing Space Amplification in RocksDB\" [1]. The reported RocksDB\nthroughput for an LSM-sympathetic workload is not really any faster\nthan InnoDB, and sometimes slower. That's not the point, though; the\nmain advantages of using an LSM are reductions in space amplification\nand write amplification, particularly the latter. 
This isn't so much\nabout performance as it is about efficiency -- it enabled Facebook to\nget a lot more out of the inexpensive flash storage that they use. It\nlowered the total cost of ownership by *a lot*.\n\nI personally think that we should care about efficiency in this sense\na lot more than we do now, but the fact remains that it hasn't really\nbeen considered an independent problem that could be addressed by\naccepting a tradeoff similar to the tradeoff LSMs make quite explicit\n(apparently you can tune LSMs to get less write amplification but more\nread amplification, or vice versa). In general, we almost always just\ntalk about throughout and latency without considering efficiency\nspecifically. I'm not suggesting that we need an LSM, but an\nappreciation of LSMs could be helpful -- it could lead to better\ndesigns elsewhere.\n\nMark Callaghan's blog is a pretty good resource for learning about\nLSMs [2] (perhaps you've heard of him?). He wrote a bunch of stuff\nabout Postgres recently, which I enjoyed.\n\n[1] http://cidrdb.org/cidr2017/papers/p82-dong-cidr17.pdf\n[2] https://smalldatum.blogspot.com/\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 4 Aug 2020 20:12:13 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: LSM tree for Postgres"
},
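The write-amplification point can be made concrete with the usual back-of-envelope estimate for leveled compaction. The ~size_ratio/2 rewrites per level is the common textbook approximation, not a measurement of RocksDB, and the B-tree comparison in the comment is likewise a rough worst case:

```python
def leveled_lsm_write_amp(size_ratio, levels):
    """Rough write amplification for leveled compaction: on its way
    down, each key is rewritten about size_ratio/2 times per level
    (a standard approximation, not an exact model)."""
    return levels * size_ratio / 2

# For example, 4 levels with the common fanout of 10 gives ~20x write
# amplification. Compare a B-tree that rewrites a whole 8kB page to
# change a single ~100-byte tuple: roughly 8192/100 ~ 80x in the
# worst case, which is the kind of gap the MyRocks paper is about.
```

The same style of estimate explains why LSM tuning is a read-vs-write trade: a larger size ratio lowers the level count (better reads) but raises the per-level rewrite cost.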
{
"msg_contents": "\n\nOn 05.08.2020 02:59, Alexander Korotkov wrote:\n>\n> The things you're writing makes me uneasy. I initially understood\n> lsm3 as a quick and dirty prototype, while you're probably keeping\n> some design in your mind (for instance, original design of LSM).\n> However, your message makes me think you're trying to defend the\n> approach currently implemented in lsm3 extension. Therefore, I've to\n> criticise this approach.\n>\n> 1) The base index can degrade. At first, since merge can cause page\n> splits. Therefore logical ordering of pages will become less\n> correlated with their physical ordering with each merge.\n> 2) If your workload will include updates and/or deletes, page\n> utilization may also degrade.\n> 3) While base index degrades, merge performance also degrades.\n> Traverse of base index in logical order will require more and more\n> random reads (at some point almost every page read will be random).\n> While the base index becomes large and/or bloat, you push less top\n> index tuples to a single base index page (at some point you will push\n> one tuple per page).\n>\n> Original LSM design implies strict guarantees over average resources\n> spent per index operation. Your design doesn't. Moreover, I bet lsm3\n> will degrade significantly even on insert-only workload. It should\n> degrade to the performance level of B-tree once you insert enough\n> data. Try something like number_of_merges =\n> numer_of_tuples_per_index_page * 2 and you should see this\n> degradation. Real LSM doesn't degrade that way.\n\nI mostly agree with your critices.\nMy Lsm3 is not true LSM, but from my point of view it preserves basic \nprinciples of LSM: fast and small top index and bulk updates of main index.\nMy experiments with RocksDB shows that degradation also takes place in \nthis case. More experiments are needed to compare two approaches.\n\nConcerning degrade of basic index - B-Tree itself is balanced tree. 
Yes, \ninsertion of random keys can cause split of B-Tree page.\nIn the worst case half of B-Tree page will be empty. So B-Tree size will \nbe two times larger than ideal tree.\nIt may cause degrade up to two times. But that is all. There should not \nbe infinite degrade of speed tending to zero.\n\n>\n> Right now vacuum process Lsm3 indexes in usual way.\n> Removing records from top indexes may be not needed at all (just because\n> this indexes will be truncated in any case).\n> But as far as size of top index is expected to be small enough\n> vacuumming it should not take a long time,\n> so I didn't to avoid it (although it should not be difficult - just\n> disable ambulkdelete for correspondent nbtree wrappers).\n> It doesn't seem important, but I don't get your point here. Postgres\n> expects ambulkdelete to delete TIDs from index. If you don't delete\n> it from the top index, this TID will be merged to the base index. And\n> that could lead wrong query answers unless you eliminate those TIDs in\n> a different way (during the merge stage or something).\n\nYes, your are right. It is not possible to avoid delete TIDs from top \nindexes.\n>> Concerning deletes from main index - I do not understand how it can be\n>> optimized.\n> This is a trick you can learn from almost every LSM implementation.\n> For instance, check docs for leveldb [1] about \"delete marker\". For\n> sure, that requires some redesign of the vacuum and can't be done in\n> extension (at least in the reasonable way). But, frankly speaking, I\n> think core modifications are inevitable to utilize the power of LSM in\n> PostgreSQL.\n\nThe main idea of Lsm3 was to investigate whether it is possible to \nachieve the same result as with \"classical\" LSM\nusing standard Postgres nbtree indexes. Right now it seems t me that \nanswer is positive, but I have not performed\nexhaustive measurements. 
For example I have not measured vacuum overhead \n(it was enabled, so vacuumming takes place\nin my benchmark, but I have not tries to separate its overhead and \ninfluence on performance), index search speed,...\n\n\n\n\n> Links\n> 1. https://github.com/google/leveldb/blob/master/doc/impl.md\n>\n> ------\n> Regards,\n> Alexander Korotkov\n\n\n\n",
"msg_date": "Wed, 5 Aug 2020 09:13:12 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: LSM tree for Postgres"
},
{
"msg_contents": "On 04.08.2020 20:44, Tomas Vondra wrote:\n>\n> IMO the 6x difference is rather misleading, as it very much depends on\n> the duration of the benchmark and how much data it ends up with. I think\n> it's better to test 'stable states' i.e. with small data set that does\n> not exceed RAM during the whole test, and large ones that already starts\n> larger than RAM. Not sure if it makes sense to make a difference between\n> cases that fit into shared buffers and those that exceed shared buffers\n> but still fit into RAM.\n\nI have changed benchmark scenario.\nNow I inserted 200 million records with sequential key: it is fast \nenough and makes index size about 19Gb.\nThen I perform 1 million random inserts.\n\n-- init schema\ncreate table t(k bigint, v1 bigint, v2 bigint, v3 bigint, v4 bigint, v5 \nbigint, v6 bigint, v7 bigint, v8 bigint);\ncreate index lsm_index on t using lsm3(k) include (v1,v2,v3,v4,v5,v6,v7,v8);\ncreate table t2(k bigint, v1 bigint, v2 bigint, v3 bigint, v4 bigint, v5 \nbigint, v6 bigint, v7 bigint, v8 bigint);\ncreate index on t2(k) include (v1,v2,v3,v4,v5,v6,v7,v8);\n\n-- fill with sequential data\ninsert into t values (generate_series(1,200000000),0,0,0,0,0,0,0,0);\nTime: 520655,635 ms (08:40,656)\n\ninsert into t2 values (generate_series(1,200000000),0,0,0,0,0,0,0,0);\nTime: 372245,093 ms (06:12,245)\n\n-- random inserts\ninsert into t (v1,k,v2,v3,v4,v5,v6,v7,v8) values \n(generate_series(1,1000000),(random()*1000000000)::bigint,0,0,0,0,0,0,0);\nTime: 3781,614 ms (00:03,782)\n\ninsert into t2 (v1,k,v2,v3,v4,v5,v6,v7,v8) values \n(generate_series(1,1000000),(random()*1000000000)::bigint,0,0,0,0,0,0,0);\nTime: 39034,574 ms (00:39,035)\n\nThe I perform random selects\n\nselect.sql:\n\\set k random(1, 1000000000)\nselect * from t where k=:k;\n\nselect2.sql:\n\\set k random(1, 1000000000)\nselect * from t2 where k=:k;\n\npgbench -n -T 100 -P 10 -M prepared -f select.sql postgres\ntps = 11372.821006 (including connections 
establishing)\n\npgbench -n -T 100 -P 10 -M prepared -f select2.sql postgres\ntps = 10392.729026 (including connections establishing)\n\n\nSo as you can see - insertion speed of Lsm3 is ten times higher and \nselect speed is the same as of nbtree.\n\n\n\n\n\n\n\n\n\nOn 04.08.2020 20:44, Tomas Vondra\n wrote:\n\n\n IMO the 6x difference is rather misleading, as it very much\n depends on\n \n the duration of the benchmark and how much data it ends up with. I\n think\n \n it's better to test 'stable states' i.e. with small data set that\n does\n \n not exceed RAM during the whole test, and large ones that already\n starts\n \n larger than RAM. Not sure if it makes sense to make a difference\n between\n \n cases that fit into shared buffers and those that exceed shared\n buffers\n \n but still fit into RAM.\n \n\n\n I have changed benchmark scenario.\n Now I inserted 200 million records with sequential key: it is fast\n enough and makes index size about 19Gb.\n Then I perform 1 million random inserts.\n\n -- init schema\n create table t(k bigint, v1 bigint, v2 bigint, v3 bigint, v4 bigint,\n v5 bigint, v6 bigint, v7 bigint, v8 bigint);\n create index lsm_index on t using lsm3(k) include\n (v1,v2,v3,v4,v5,v6,v7,v8);\n create table t2(k bigint, v1 bigint, v2 bigint, v3 bigint, v4\n bigint, v5 bigint, v6 bigint, v7 bigint, v8 bigint);\n create index on t2(k) include (v1,v2,v3,v4,v5,v6,v7,v8);\n\n -- fill with sequential data\n insert into t values (generate_series(1,200000000),0,0,0,0,0,0,0,0);\n Time: 520655,635 ms (08:40,656)\n\n insert into t2 values\n (generate_series(1,200000000),0,0,0,0,0,0,0,0);\n Time: 372245,093 ms (06:12,245)\n\n -- random inserts\n insert into t (v1,k,v2,v3,v4,v5,v6,v7,v8) values\n(generate_series(1,1000000),(random()*1000000000)::bigint,0,0,0,0,0,0,0);\n Time: 3781,614 ms (00:03,782)\n\n insert into t2 (v1,k,v2,v3,v4,v5,v6,v7,v8) values\n(generate_series(1,1000000),(random()*1000000000)::bigint,0,0,0,0,0,0,0);\n Time: 39034,574 ms 
(00:39,035)\n\n The I perform random selects\n\n select.sql:\n \\set k random(1, 1000000000)\n select * from t where k=:k;\n\n select2.sql:\n \\set k random(1, 1000000000)\n select * from t2 where k=:k;\n\n pgbench -n -T 100 -P 10 -M prepared -f select.sql postgres\n tps = 11372.821006 (including\n connections establishing)\n\n pgbench -n -T 100 -P 10 -M prepared -f select2.sql postgres\n tps = 10392.729026 (including\n connections establishing)\n\n\n So as you can see - insertion speed of Lsm3 is ten times higher and\n select speed is the same as of nbtree.",
"msg_date": "Wed, 5 Aug 2020 10:08:45 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: LSM tree for Postgres"
},
{
"msg_contents": "> On Tue, Aug 04, 2020 at 11:22:13AM +0300, Konstantin Knizhnik wrote:\n>\n> Then I think about implementing ideas of LSM using standard Postgres\n> nbtree.\n>\n> We need two indexes: one small for fast inserts and another - big\n> (main) index. This top index is small enough to fit in memory so\n> inserts in this index are very fast. Periodically we will merge data\n> from top index to base index and truncate the top index. To prevent\n> blocking of inserts in the table while we are merging indexes we can\n> add ... on more index, which will be used during merge.\n>\n> So final architecture of Lsm3 is the following: two top indexes used\n> in cyclic way and one main index. When top index reaches some\n> threshold value we initiate merge with main index, done by bgworker\n> and switch to another top index. As far as merging indexes is done in\n> background, it doesn't affect insert speed. Unfortunately Postgres\n> Index AM has not bulk insert operation, so we have to perform normal\n> inserts. But inserted data is already sorted by key which should\n> improve access locality and partly solve random reads problem for base\n> index.\n>\n> Certainly to perform search in Lsm3 we have to make lookups in all\n> three indexes and merge search results.\n\nThanks for sharing this! In fact this reminds me more of partitioned\nb-trees [1] (and more older [2]) rather than LSM as it is (although\ncould be that the former was influenced by the latter). What could be\ninteresting is that quite often in these and many other whitepapers\n(e.g. [3]) to address the lookup overhead the design includes bloom\nfilters in one or another way to avoid searching not relevant part of an\nindex. Tomas mentioned them in this thread as well (in the different\ncontext), probably the design suggested here could also benefit from it?\n\n[1]: Riegger Christian, Vincon Tobias, Petrov Ilia. Write-optimized\nindexing with partitioned b-trees. (2017). 296-300. 
10.1145/3151759.3151814.\n[2]: Graefe Goetz. Write-Optimized B-Trees. (2004). 672-683.\n10.1016/B978-012088469-8/50060-7.\n[3]: Huanchen Zhang, David G. Andersen, Andrew Pavlo, Michael Kaminsky,\nLin Ma, and Rui Shen. Reducing the Storage Overhead of Main-Memory OLTP\nDatabases with Hybrid Indexes. (2016). 1567–1581. 10.1145/2882903.2915222.\n\n\n",
"msg_date": "Wed, 5 Aug 2020 13:09:38 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LSM tree for Postgres"
},
{
"msg_contents": "ср, 5 авг. 2020 г., 09:13 Konstantin Knizhnik <k.knizhnik@postgrespro.ru>:\n> Concerning degrade of basic index - B-Tree itself is balanced tree. Yes,\n> insertion of random keys can cause split of B-Tree page.\n> In the worst case half of B-Tree page will be empty. So B-Tree size will\n> be two times larger than ideal tree.\n> It may cause degrade up to two times. But that is all. There should not\n> be infinite degrade of speed tending to zero.\n\nMy concerns are not just about space utilization. My main concern is\nabout the order of the pages. After the first merge the base index\nwill be filled in key order. So physical page ordering perfectly\nmatches their logical ordering. After the second merge some pages of\nbase index splits, and new pages are added to the end of the index.\nSplits also happen in key order. So, now physical and logical\norderings match within two extents corresponding to first and second\nmerges, but not within the whole tree. While there are only few such\nextents, disk page reads may in fact be mostly sequential, thanks to\nOS cache and readahead. But finally, after many merges, we can end up\nwith mostly random page reads. For instance, leveldb doesn't have a\nproblem of ordering degradation, because it stores levels in sorted\nfiles.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 7 Aug 2020 15:31:35 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LSM tree for Postgres"
},
{
"msg_contents": "\n\nOn 07.08.2020 15:31, Alexander Korotkov wrote:\n> ср, 5 авг. 2020 г., 09:13 Konstantin Knizhnik <k.knizhnik@postgrespro.ru>:\n>> Concerning degrade of basic index - B-Tree itself is balanced tree. Yes,\n>> insertion of random keys can cause split of B-Tree page.\n>> In the worst case half of B-Tree page will be empty. So B-Tree size will\n>> be two times larger than ideal tree.\n>> It may cause degrade up to two times. But that is all. There should not\n>> be infinite degrade of speed tending to zero.\n> My concerns are not just about space utilization. My main concern is\n> about the order of the pages. After the first merge the base index\n> will be filled in key order. So physical page ordering perfectly\n> matches their logical ordering. After the second merge some pages of\n> base index splits, and new pages are added to the end of the index.\n> Splits also happen in key order. So, now physical and logical\n> orderings match within two extents corresponding to first and second\n> merges, but not within the whole tree. While there are only few such\n> extents, disk page reads may in fact be mostly sequential, thanks to\n> OS cache and readahead. But finally, after many merges, we can end up\n> with mostly random page reads. For instance, leveldb doesn't have a\n> problem of ordering degradation, because it stores levels in sorted\n> files.\n>\nI agree with your that loosing sequential order of B-Tree pages may have \nnegative impact on performance.\nBut it first of all critical for order-by and range queries, when we \nshould traverse several subsequent leave pages.\nIt is less critical for exact-search or delete/insert operations. \nEfficiency of merge operations mostly depends on how much keys\nwill be stored at the same B-Tree page. And it is first of all \ndetermined by size of top index and key distribution.\n\n\n\n\n",
"msg_date": "Sat, 8 Aug 2020 17:07:31 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: LSM tree for Postgres"
},
{
"msg_contents": "On Sat, Aug 8, 2020 at 5:07 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> I agree with your that loosing sequential order of B-Tree pages may have\n> negative impact on performance.\n> But it first of all critical for order-by and range queries, when we\n> should traverse several subsequent leave pages.\n> It is less critical for exact-search or delete/insert operations.\n> Efficiency of merge operations mostly depends on how much keys\n> will be stored at the same B-Tree page.\n\nWhat do you mean by \"mostly\"? Given PostgreSQL has quite small (8k)\npages, sequential read in times faster than random read on SSDs\n(dozens of times on HDDs). I don't think this is something to\nneglect.\n\n> And it is first of all\n> determined by size of top index and key distribution.\n\nHow can you be sure that the top index can fit memory? On production\nsystems, typically there are multiple consumers of memory: other\ntables, indexes, other LSMs. This is one of reasons why LSM\nimplementations have multiple levels: they don't know in advance which\nlevels fit memory. Another reason is dealing with very large\ndatasets. And I believe there is a quite strong reason to keep page\norder sequential within level.\n\nI'm OK with your design for a third-party extension. It's very cool\nto have. But I'm -1 for something like this to get into core\nPostgreSQL, assuming it's feasible to push some effort and get\nstate-of-art LSM there.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 8 Aug 2020 21:18:29 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LSM tree for Postgres"
},
{
"msg_contents": "\n\nOn 08.08.2020 21:18, Alexander Korotkov wrote:\n> On Sat, Aug 8, 2020 at 5:07 PM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> I agree with your that loosing sequential order of B-Tree pages may have\n>> negative impact on performance.\n>> But it first of all critical for order-by and range queries, when we\n>> should traverse several subsequent leave pages.\n>> It is less critical for exact-search or delete/insert operations.\n>> Efficiency of merge operations mostly depends on how much keys\n>> will be stored at the same B-Tree page.\n> What do you mean by \"mostly\"? Given PostgreSQL has quite small (8k)\n> pages, sequential read in times faster than random read on SSDs\n> (dozens of times on HDDs). I don't think this is something to\n> neglect.\n\nWhen yo insert one record in B-Tree, the order of pages doesn't matter \nat all.\nIf you insert ten records at one leaf page then order is also not so \nimportant.\nIf you insert 100 records, 50 got to one page and 50 to the next page,\nthen insertion may be faster if second page follows on the disk first one.\nBut such insertion may cause page split and so allocation of new page,\nso sequential write order can still be violated.\n\n>> And it is first of all\n>> determined by size of top index and key distribution.\n> How can you be sure that the top index can fit memory? On production\n> systems, typically there are multiple consumers of memory: other\n> tables, indexes, other LSMs. This is one of reasons why LSM\n> implementations have multiple levels: they don't know in advance which\n> levels fit memory. Another reason is dealing with very large\n> datasets. 
And I believe there is a quite strong reason to keep page\n> order sequential within level.\n\nThere is no any warranty that top index is kept in memory.\nBut as far top index pages are frequently accessed, I hope that buffer \nmanagement cache replacement\nalgorithm does it best to keep them in memory.\n\n> I'm OK with your design for a third-party extension. It's very cool\n> to have. But I'm -1 for something like this to get into core\n> PostgreSQL, assuming it's feasible to push some effort and get\n> state-of-art LSM there.\nI realize that it is not true LSM.\nBut still I wan to notice that it is able to provide ~10 times increase \nof insert speed when size of index is comparable with RAM size.\nAnd \"true LSM\" from RocksDB shows similar results. May be if size of \nindex will be 100 times larger then\nsize of RAM, RocksDB will be significantly faster than Lsm3. But modern \nservers has 0.5-1Tb of RAM.\nCan't believe that there are databases with 100Tb indexes.\n\n\n\n",
"msg_date": "Sat, 8 Aug 2020 23:49:17 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: LSM tree for Postgres"
},
{
"msg_contents": "On Sat, Aug 8, 2020 at 11:49 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> On 08.08.2020 21:18, Alexander Korotkov wrote:\n> > On Sat, Aug 8, 2020 at 5:07 PM Konstantin Knizhnik\n> > <k.knizhnik@postgrespro.ru> wrote:\n> >> I agree with your that loosing sequential order of B-Tree pages may have\n> >> negative impact on performance.\n> >> But it first of all critical for order-by and range queries, when we\n> >> should traverse several subsequent leave pages.\n> >> It is less critical for exact-search or delete/insert operations.\n> >> Efficiency of merge operations mostly depends on how much keys\n> >> will be stored at the same B-Tree page.\n> > What do you mean by \"mostly\"? Given PostgreSQL has quite small (8k)\n> > pages, sequential read in times faster than random read on SSDs\n> > (dozens of times on HDDs). I don't think this is something to\n> > neglect.\n>\n> When yo insert one record in B-Tree, the order of pages doesn't matter\n> at all.\n> If you insert ten records at one leaf page then order is also not so\n> important.\n> If you insert 100 records, 50 got to one page and 50 to the next page,\n> then insertion may be faster if second page follows on the disk first one.\n> But such insertion may cause page split and so allocation of new page,\n> so sequential write order can still be violated.\n\nSorry, I've no idea of what you're getting at.\n\n> >> And it is first of all\n> >> determined by size of top index and key distribution.\n> > How can you be sure that the top index can fit memory? On production\n> > systems, typically there are multiple consumers of memory: other\n> > tables, indexes, other LSMs. This is one of reasons why LSM\n> > implementations have multiple levels: they don't know in advance which\n> > levels fit memory. Another reason is dealing with very large\n> > datasets. 
And I believe there is a quite strong reason to keep page\n> > order sequential within level.\n>\n> There is no any warranty that top index is kept in memory.\n> But as far top index pages are frequently accessed, I hope that buffer\n> management cache replacement\n> algorithm does it best to keep them in memory.\n\nSo, the top index should be small enough that we can safely assume it\nwouldn't be evicted from cache on a heavily loaded production system.\nI think it's evident that it should be in orders of magnitude less\nthan the total amount of server RAM.\n\n> > I'm OK with your design for a third-party extension. It's very cool\n> > to have. But I'm -1 for something like this to get into core\n> > PostgreSQL, assuming it's feasible to push some effort and get\n> > state-of-art LSM there.\n> I realize that it is not true LSM.\n> But still I wan to notice that it is able to provide ~10 times increase\n> of insert speed when size of index is comparable with RAM size.\n> And \"true LSM\" from RocksDB shows similar results.\n\nIt's very far from being shown. All the things you've shown is a\nnaive benchmark. I don't object that your design can work out some\ncases. And it's great that we have the lsm3 extension now. But I\nthink for PostgreSQL core we should think about better design.\n\n> May be if size of\n> index will be 100 times larger then\n> size of RAM, RocksDB will be significantly faster than Lsm3. But modern\n> servers has 0.5-1Tb of RAM.\n> Can't believe that there are databases with 100Tb indexes.\n\nComparison of whole RAM size to single index size looks plain wrong\nfor me. I think we can roughly compare whole RAM size to whole\ndatabase size. But also not the whole RAM size is always available\nfor caching data. Let's assume half of RAM is used for caching data.\nSo, a modern server with 0.5-1Tb of RAM, which suffers from random\nB-tree insertions and badly needs LSM-like data-structure, runs a\ndatabase of 25-50Tb. 
Frankly speaking, there is nothing\ncounterintuitive for me.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 9 Aug 2020 04:53:00 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LSM tree for Postgres"
},
{
"msg_contents": "\n\nOn 09.08.2020 04:53, Alexander Korotkov wrote:\n>>\n>> I realize that it is not true LSM.\n>> But still I wan to notice that it is able to provide ~10 times increase\n>> of insert speed when size of index is comparable with RAM size.\n>> And \"true LSM\" from RocksDB shows similar results.\n> It's very far from being shown. All the things you've shown is a\n> naive benchmark. I don't object that your design can work out some\n> cases. And it's great that we have the lsm3 extension now. But I\n> think for PostgreSQL core we should think about better design.\n\nSorry, I mean that at particular benchmark and hardware Lsm3 and RocksDB \nshows similar performance.\nIt definitely doesn't mean that it will be true in all other cases.\nThis is one of the reasons why I have published this Lsm3 and RockDB FDW \nextensions:\nanybody can try to test them at their workload.\nIt will be very interesting to me to know this results, because I \ncertainly understand\nthat measuring of random insert performance in dummy table is not enough \nto make some\nconclusions.\n\nAnd I certainly do not want to say that we do not need \"right\" LSM \nimplementation inside Postgres core.\nIt just requires an order of magnitude more efforts.\nAnd there are many questions and challenges. For example Postgres buffer \nsize (8kb) seems to be too small for LSM.\nShould LSM implementation bypass Postgres buffer cache? There pros and \ncontras...\n\nAnother issue is logging. Should we just log all operations with LSM in \nWAL in usual way (as it is done for nbtree and Lsm3)?\nIt seems to me that for LSM alternative and more efficient solutions may \nbe proposed.\nFor example we may not log inserts in top index at all and just replay \nthem during recovery, assuming that this operation with\nsmall index is fast enough. 
And merge of top index with base index can \nbe done in atomic way and so also doesn't require WAL.\n\nAs far as I know Anastasia Lubennikova several years ago has implemented \nLSM for Postgres.\nThere was some performance issues (with concurrent access?).\nThis is why the first thing I want to clarify for myself is what are the \nbottlenecks of LSM architecture\nand are them caused by LSM itself or its integration in Postgres \ninfrastructure.\n\nI any case, before thinking about details of in-core LSM implementation \nfor Postgres, I think that\nit is necessary to demonstrate workloads at which RocksDB (or any other \nexisted DBMS with LSM)\nshows significant performance advantages comparing with Postgres with \nnbtree/Lsm3.\n\n>> May be if size of\n>> index will be 100 times larger then\n>> size of RAM, RocksDB will be significantly faster than Lsm3. But modern\n>> servers has 0.5-1Tb of RAM.\n>> Can't believe that there are databases with 100Tb indexes.\n> Comparison of whole RAM size to single index size looks plain wrong\n> for me. I think we can roughly compare whole RAM size to whole\n> database size. But also not the whole RAM size is always available\n> for caching data. Let's assume half of RAM is used for caching data.\n> So, a modern server with 0.5-1Tb of RAM, which suffers from random\n> B-tree insertions and badly needs LSM-like data-structure, runs a\n> database of 25-50Tb. Frankly speaking, there is nothing\n> counterintuitive for me.\n\nThere is actually nothing counterintuitive.\nI just mean that there are not so much 25-50Tb OLTP databases.\n\n\n\n",
"msg_date": "Sun, 9 Aug 2020 10:26:17 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: LSM tree for Postgres"
},
{
"msg_contents": "Dmitry Dolgov wrote\n>> On Tue, Aug 04, 2020 at 11:22:13AM +0300, Konstantin Knizhnik wrote:\n>>\n>> Then I think about implementing ideas of LSM using standard Postgres\n>> nbtree.\n>>\n>> We need two indexes: one small for fast inserts and another - big\n>> (main) index. This top index is small enough to fit in memory so\n>> inserts in this index are very fast. Periodically we will merge data\n>> from top index to base index and truncate the top index. To prevent\n>> blocking of inserts in the table while we are merging indexes we can\n>> add ... on more index, which will be used during merge.\n>>\n>> So final architecture of Lsm3 is the following: two top indexes used\n>> in cyclic way and one main index. When top index reaches some\n>> threshold value we initiate merge with main index, done by bgworker\n>> and switch to another top index. As far as merging indexes is done in\n>> background, it doesn't affect insert speed. Unfortunately Postgres\n>> Index AM has not bulk insert operation, so we have to perform normal\n>> inserts. But inserted data is already sorted by key which should\n>> improve access locality and partly solve random reads problem for base\n>> index.\n>>\n>> Certainly to perform search in Lsm3 we have to make lookups in all\n>> three indexes and merge search results.\n> \n> Thanks for sharing this! In fact this reminds me more of partitioned\n> b-trees [1] (and more older [2]) rather than LSM as it is (although\n> could be that the former was influenced by the latter). What could be\n> interesting is that quite often in these and many other whitepapers\n> (e.g. [3]) to address the lookup overhead the design includes bloom\n> filters in one or another way to avoid searching not relevant part of an\n> index. Tomas mentioned them in this thread as well (in the different\n> context), probably the design suggested here could also benefit from it?\n> \n> [1]: Riegger Christian, Vincon Tobias, Petrov Ilia. 
Write-optimized\n> indexing with partitioned b-trees. (2017). 296-300.\n> 10.1145/3151759.3151814.\n> [2]: Graefe Goetz. Write-Optimized B-Trees. (2004). 672-683.\n> 10.1016/B978-012088469-8/50060-7.\n> [3]: Huanchen Zhang, David G. Andersen, Andrew Pavlo, Michael Kaminsky,\n> Lin Ma, and Rui Shen. Reducing the Storage Overhead of Main-Memory OLTP\n> Databases with Hybrid Indexes. (2016). 1567–1581. 10.1145/2882903.2915222.\n\n\nI found this 2019 paper recently, might be worth a skim read for some\ndifferent ideas. Too technical for me :)\n\"Jungle: Towards Dynamically Adjustable Key-Value Store by Combining LSM-Tree\nand Copy-On-Write B+-Tree\"\nhttps://www.usenix.org/system/files/hotstorage19-paper-ahn.pdf\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Fri, 14 Aug 2020 16:14:20 -0700 (MST)",
"msg_from": "AJG <ayden@gera.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: LSM tree for Postgres"
}
] |
[
{
"msg_contents": "Commit 13838740f fixed some issues with step generation in partition\npruning, but as I mentioned in [1], I noticed that there is yet\nanother issue: get_steps_using_prefix() assumes that clauses in the\npassed-in prefix list are sorted in ascending order of their partition\nkey numbers, but the caller (i.e., gen_prune_steps_from_opexps())\ndoesn’t ensure that in the case of range partitioning, leading to an\nassertion failure. Here is an example causing such a failure, which\nwould happen with/without that commit:\n\ncreate table rp_prefix_test2 (a int, b int, c int) partition by range (a, b, c);\ncreate table rp_prefix_test2_p1 partition of rp_prefix_test2 for\nvalues from (1, 1, 0) to (1, 1, 10);\ncreate table rp_prefix_test2_p2 partition of rp_prefix_test2 for\nvalues from (2, 2, 0) to (2, 2, 10);\nselect * from rp_prefix_test2 where a <= 1 and b <= 1 and b = 1 and c <= 0;\n\nI don't think we write queries like this, but for this query, the\ncaller would create the prefix list for the last partition key “c”\n{b=1, a<=1, b<=1} (the clauses are not sorted properly!), then calling\nget_steps_using_prefix(), which leads to an assertion failure.\nAttached is a patch for fixing this issue.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK15%3Dc8Q-Ac3ogzZp_d6VsfRYSL2tD8zLwy_WYdrMXQhiCQ%40mail.gmail.com",
"msg_date": "Tue, 4 Aug 2020 21:45:31 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Yet another issue with step generation in partition pruning"
},
{
"msg_contents": "Fujita-san,\n\nThanks a lot for your time on fixing these multi-column range\npartition pruning issues. I'm sorry that I failed to notice the\nprevious two reports on -bugs for which you committed a fix last week.\n\nOn Tue, Aug 4, 2020 at 9:46 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Commit 13838740f fixed some issues with step generation in partition\n> pruning, but as I mentioned in [1], I noticed that there is yet\n> another issue: get_steps_using_prefix() assumes that clauses in the\n> passed-in prefix list are sorted in ascending order of their partition\n> key numbers, but the caller (i.e., gen_prune_steps_from_opexps())\n> doesn’t ensure that in the case of range partitioning, leading to an\n> assertion failure. Here is an example causing such a failure, which\n> would happen with/without that commit:\n>\n> create table rp_prefix_test2 (a int, b int, c int) partition by range (a, b, c);\n> create table rp_prefix_test2_p1 partition of rp_prefix_test2 for\n> values from (1, 1, 0) to (1, 1, 10);\n> create table rp_prefix_test2_p2 partition of rp_prefix_test2 for\n> values from (2, 2, 0) to (2, 2, 10);\n> select * from rp_prefix_test2 where a <= 1 and b <= 1 and b = 1 and c <= 0;\n>\n> I don't think we write queries like this, but for this query, the\n> caller would create the prefix list for the last partition key “c”\n> {b=1, a<=1, b<=1} (the clauses are not sorted properly!), then calling\n> get_steps_using_prefix(), which leads to an assertion failure.\n\nThat analysis is spot on.\n\n> Attached is a patch for fixing this issue.\n\nI have looked at the patch and played around with it using the\nregression tests you've added recently. I was not able to find any\nresults that looked surprising.\n\nThanks again.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Aug 2020 17:12:54 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Yet another issue with step generation in partition pruning"
},
{
"msg_contents": "Amit-san,\n\nOn Wed, Aug 5, 2020 at 5:13 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Thanks a lot for your time on fixing these multi-column range\n> partition pruning issues. I'm sorry that I failed to notice the\n> previous two reports on -bugs for which you committed a fix last week.\n\nNo problem.\n\n> On Tue, Aug 4, 2020 at 9:46 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > Attached is a patch for fixing this issue.\n>\n> I have looked at the patch and played around with it using the\n> regression tests you've added recently. I was not able to find any\n> results that looked surprising.\n\nThat's good to hear! Thanks for reviewing! Will push the patch tomorrow.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 6 Aug 2020 00:20:50 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Yet another issue with step generation in partition pruning"
},
{
"msg_contents": "On Thu, Aug 6, 2020 at 12:20 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Will push the patch tomorrow.\n\nDone. (I didn't have time for this, because I was terribly busy with\nother stuff.)\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 7 Aug 2020 14:55:40 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Yet another issue with step generation in partition pruning"
},
{
"msg_contents": "On Fri, Aug 7, 2020 at 2:55 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Thu, Aug 6, 2020 at 12:20 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > Will push the patch tomorrow.\n>\n> Done. (I didn't have time for this, because I was terribly busy with\n> other stuff.)\n\nI mean I didn't have time for this *yesterday*.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 7 Aug 2020 22:46:39 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Yet another issue with step generation in partition pruning"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI've attached a small patch to add information to rm_redo_error_callback().\n\nThe changes attached in this patch came while working on the \"Add \ninformation during standby recovery conflicts\" patch (See [1]).\n\nThe goal is to add more information during the callback (if doable), so \nthat something like:\n\n2020-08-04 14:42:57.545 UTC [15459] CONTEXT: WAL redo at 0/4A3B0DE0 for \nHeap2/CLEAN: remxid 1168\n\nwould get extra information that way:\n\n2020-08-04 14:42:57.545 UTC [15459] CONTEXT: WAL redo at 0/4A3B0DE0 for \nHeap2/CLEAN: remxid 1168, blkref #0: rel 1663/13586/16850 fork main blk 0\n\nAs this could be useful outside of [1], a dedicated \"sub\" patch has been \ncreated (thanks Sawada for the suggestion).\n\nI will add this patch to the next commitfest. I look forward to your \nfeedback about the idea and/or implementation.\n\nRegards,\n\nBertrand\n\n[1]: https://commitfest.postgresql.org/29/2604",
"msg_date": "Tue, 4 Aug 2020 17:37:05 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Add information to rm_redo_error_callback()"
},
{
"msg_contents": "On Wed, 5 Aug 2020 at 00:37, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi hackers,\n>\n> I've attached a small patch to add information to rm_redo_error_callback().\n>\n> The changes attached in this patch came while working on the \"Add\n> information during standby recovery conflicts\" patch (See [1]).\n>\n> The goal is to add more information during the callback (if doable), so\n> that something like:\n>\n> 2020-08-04 14:42:57.545 UTC [15459] CONTEXT: WAL redo at 0/4A3B0DE0 for\n> Heap2/CLEAN: remxid 1168\n>\n> would get extra information that way:\n>\n> 2020-08-04 14:42:57.545 UTC [15459] CONTEXT: WAL redo at 0/4A3B0DE0 for\n> Heap2/CLEAN: remxid 1168, blkref #0: rel 1663/13586/16850 fork main blk 0\n>\n> As this could be useful outside of [1], a dedicated \"sub\" patch has been\n> created (thanks Sawada for the suggestion).\n>\n> I will add this patch to the next commitfest. I look forward to your\n> feedback about the idea and/or implementation.\n>\n\nThank you for starting the new thread for this patch!\n\nI think this patch is simple enough and improves information shown in\nerrcontext.\n\nI have two comments on the patch:\n\ndiff --git a/src/backend/access/transam/xlog.c\nb/src/backend/access/transam/xlog.c\nindex 756b838e6a..8b2024e9e9 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -11749,10 +11749,22 @@ rm_redo_error_callback(void *arg)\n {\n XLogReaderState *record = (XLogReaderState *) arg;\n StringInfoData buf;\n+ int block_id;\n+ RelFileNode rnode;\n+ ForkNumber forknum;\n+ BlockNumber blknum;\n\n initStringInfo(&buf);\n xlog_outdesc(&buf, record);\n\n+ for (block_id = 0; block_id <= record->max_block_id; block_id++)\n+ {\n+ if (XLogRecGetBlockTag(record, block_id, &rnode, &forknum, &blknum))\n+ appendStringInfo(&buf,\", blkref #%d: rel %u/%u/%u fork %s blk %u\",\n+ block_id, rnode.spcNode, rnode.dbNode,\n+ rnode.relNode, forkNames[forknum],\n+ blknum);\n+ }\n /* translator: %s is a WAL record description */\n errcontext(\"WAL redo at %X/%X for %s\",\n (uint32) (record->ReadRecPtr >> 32),\n\nrnode, forknum and blknum can be declared within the for loop.\n\nI think it's better to put a new line just before the comment starting\nfrom \"translator:\".\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 10 Aug 2020 14:10:24 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add information to rm_redo_error_callback()"
},
{
"msg_contents": "Hi,\n\nOn 8/10/20 7:10 AM, Masahiko Sawada wrote:\n> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>\n>\n>\n> On Wed, 5 Aug 2020 at 00:37, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hi hackers,\n>>\n>> I've attached a small patch to add information to rm_redo_error_callback().\n>>\n>> The changes attached in this patch came while working on the \"Add\n>> information during standby recovery conflicts\" patch (See [1]).\n>>\n>> The goal is to add more information during the callback (if doable), so\n>> that something like:\n>>\n>> 2020-08-04 14:42:57.545 UTC [15459] CONTEXT: WAL redo at 0/4A3B0DE0 for\n>> Heap2/CLEAN: remxid 1168\n>>\n>> would get extra information that way:\n>>\n>> 2020-08-04 14:42:57.545 UTC [15459] CONTEXT: WAL redo at 0/4A3B0DE0 for\n>> Heap2/CLEAN: remxid 1168, blkref #0: rel 1663/13586/16850 fork main blk 0\n>>\n>> As this could be useful outside of [1], a dedicated \"sub\" patch has been\n>> created (thanks Sawada for the suggestion).\n>>\n>> I will add this patch to the next commitfest. I look forward to your\n>> feedback about the idea and/or implementation.\n>>\n> Thank you for starting the new thread for this patch!\n>\n> I think this patch is simple enough and improves information shown in\n> errcontext.\n>\n> I have two comments on the patch:\n>\n> diff --git a/src/backend/access/transam/xlog.c\n> b/src/backend/access/transam/xlog.c\n> index 756b838e6a..8b2024e9e9 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -11749,10 +11749,22 @@ rm_redo_error_callback(void *arg)\n> {\n> XLogReaderState *record = (XLogReaderState *) arg;\n> StringInfoData buf;\n> + int block_id;\n> + RelFileNode rnode;\n> + ForkNumber forknum;\n> + BlockNumber blknum;\n>\n> initStringInfo(&buf);\n> xlog_outdesc(&buf, record);\n>\n> + for (block_id = 0; block_id <= record->max_block_id; block_id++)\n> + {\n> + if (XLogRecGetBlockTag(record, block_id, &rnode, &forknum, &blknum))\n> + appendStringInfo(&buf,\", blkref #%d: rel %u/%u/%u fork %s blk %u\",\n> + block_id, rnode.spcNode, rnode.dbNode,\n> + rnode.relNode, forkNames[forknum],\n> + blknum);\n> + }\n> /* translator: %s is a WAL record description */\n> errcontext(\"WAL redo at %X/%X for %s\",\n> (uint32) (record->ReadRecPtr >> 32),\n>\n> rnode, forknum and blknum can be declared within the for loop.\n>\n> I think it's better to put a new line just before the comment starting\n> from \"translator:\".\n\nThanks for looking at it!\n\nI've attached a new version as per your comments.\n\nThanks,\n\nBertrand",
"msg_date": "Mon, 10 Aug 2020 17:07:02 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add information to rm_redo_error_callback()"
},
{
"msg_contents": "On Tue, 11 Aug 2020 at 00:07, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 8/10/20 7:10 AM, Masahiko Sawada wrote:\n> > CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> >\n> >\n> >\n> > On Wed, 5 Aug 2020 at 00:37, Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> >> Hi hackers,\n> >>\n> >> I've attached a small patch to add information to rm_redo_error_callback().\n> >>\n> >> The changes attached in this patch came while working on the \"Add\n> >> information during standby recovery conflicts\" patch (See [1]).\n> >>\n> >> The goal is to add more information during the callback (if doable), so\n> >> that something like:\n> >>\n> >> 2020-08-04 14:42:57.545 UTC [15459] CONTEXT: WAL redo at 0/4A3B0DE0 for\n> >> Heap2/CLEAN: remxid 1168\n> >>\n> >> would get extra information that way:\n> >>\n> >> 2020-08-04 14:42:57.545 UTC [15459] CONTEXT: WAL redo at 0/4A3B0DE0 for\n> >> Heap2/CLEAN: remxid 1168, blkref #0: rel 1663/13586/16850 fork main blk 0\n> >>\n> >> As this could be useful outside of [1], a dedicated \"sub\" patch has been\n> >> created (thanks Sawada for the suggestion).\n> >>\n> >> I will add this patch to the next commitfest. I look forward to your\n> >> feedback about the idea and/or implementation.\n> >>\n> > Thank you for starting the new thread for this patch!\n> >\n> > I think this patch is simple enough and improves information shown in\n> > errcontext.\n> >\n> > I have two comments on the patch:\n> >\n> > diff --git a/src/backend/access/transam/xlog.c\n> > b/src/backend/access/transam/xlog.c\n> > index 756b838e6a..8b2024e9e9 100644\n> > --- a/src/backend/access/transam/xlog.c\n> > +++ b/src/backend/access/transam/xlog.c\n> > @@ -11749,10 +11749,22 @@ rm_redo_error_callback(void *arg)\n> > {\n> > XLogReaderState *record = (XLogReaderState *) arg;\n> > StringInfoData buf;\n> > + int block_id;\n> > + RelFileNode rnode;\n> > + ForkNumber forknum;\n> > + BlockNumber blknum;\n> >\n> > initStringInfo(&buf);\n> > xlog_outdesc(&buf, record);\n> >\n> > + for (block_id = 0; block_id <= record->max_block_id; block_id++)\n> > + {\n> > + if (XLogRecGetBlockTag(record, block_id, &rnode, &forknum, &blknum))\n> > + appendStringInfo(&buf,\", blkref #%d: rel %u/%u/%u fork %s blk %u\",\n> > + block_id, rnode.spcNode, rnode.dbNode,\n> > + rnode.relNode, forkNames[forknum],\n> > + blknum);\n> > + }\n> > /* translator: %s is a WAL record description */\n> > errcontext(\"WAL redo at %X/%X for %s\",\n> > (uint32) (record->ReadRecPtr >> 32),\n> >\n> > rnode, forknum and blknum can be declared within the for loop.\n> >\n> > I think it's better to put a new line just before the comment starting\n> > from \"translator:\".\n>\n> Thanks for looking at it!\n>\n> I've attached a new version as per your comments.\n\nThank you for updating the patch!\n\nThe patch looks good to me. I've set this patch as Ready for Committer.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 11 Aug 2020 14:45:50 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add information to rm_redo_error_callback()"
},
{
"msg_contents": "On Tue, Aug 11, 2020 at 02:45:50PM +0900, Masahiko Sawada wrote:\n> Thank you for updating the patch!\n> \n> The patch looks good to me. I've set this patch as Ready for Committer.\n\n+ for (block_id = 0; block_id <= record->max_block_id; block_id++)\n+ {\n+ RelFileNode rnode;\n+ ForkNumber forknum;\n+ BlockNumber blknum;\n\nDoesn't this potentially create duplicate information in some of the\nRM's desc() callbacks, and are we sure that the information provided\nis worth having for all the RMs? As one example, gin_desc() looks at\nsome of the block information, so there are overlaps. It may be\nworth thinking about showing more information for has_image and\napply_image if a block is in_use?\n--\nMichael",
"msg_date": "Tue, 11 Aug 2020 15:29:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add information to rm_redo_error_callback()"
},
{
"msg_contents": "On Tue, 11 Aug 2020 at 15:30, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Aug 11, 2020 at 02:45:50PM +0900, Masahiko Sawada wrote:\n> > Thank you for updating the patch!\n> >\n> > The patch looks good to me. I've set this patch as Ready for Committer.\n>\n> + for (block_id = 0; block_id <= record->max_block_id; block_id++)\n> + {\n> + RelFileNode rnode;\n> + ForkNumber forknum;\n> + BlockNumber blknum;\n>\n> Doesn't this potentially create duplicate information in some of the\n> RM's desc() callbacks, and are we sure that the information provided\n> is worth having for all the RMs? As one example, gin_desc() looks at\n> some of the block information, so there are overlaps.\n\nYeah, there is duplicate information in some RMs. I thought that we\ncan change individual RM’s desc() functions to output the block\ninformation but as long as I see the pg_waldump outputs these are not\nannoying to me and many of RM’s desc() doesn’t show the block\ninformation.\n\n> It may be\n> worth thinking about showing more information for has_image and\n> apply_image if a block is in_use?\n\nYes. I’m okay with adding information for has_image and apply_image\nbut IMHO I'm not sure how these shown in errcontext would help. If an\nerror related to has_image or apply_image happens, errmsg should show\nsomething detailed information about FPI.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 11 Aug 2020 19:03:19 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add information to rm_redo_error_callback()"
},
{
"msg_contents": "Hi,\n\nThanks for the feedback.\n\nOn 8/11/20 12:03 PM, Masahiko Sawada wrote:\n> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>\n>\n>\n> On Tue, 11 Aug 2020 at 15:30, Michael Paquier <michael@paquier.xyz> wrote:\n>> On Tue, Aug 11, 2020 at 02:45:50PM +0900, Masahiko Sawada wrote:\n>>> Thank you for updating the patch!\n>>>\n>>> The patch looks good to me. I've set this patch as Ready for Committer.\n>> + for (block_id = 0; block_id <= record->max_block_id; block_id++)\n>> + {\n>> + RelFileNode rnode;\n>> + ForkNumber forknum;\n>> + BlockNumber blknum;\n>>\n>> Doesn't this potentially create duplicate information in some of the\n>> RM's desc() callbacks, and are we sure that the information provided\n>> is worth having for all the RMs? As one example, gin_desc() looks at\n>> some of the block information, so there are overlaps.\n> Yeah, there is duplicate information in some RMs. I thought that we\n> can change individual RM’s desc() functions to output the block\n> information but as long as I see the pg_waldump outputs these are not\n> annoying to me and many of RM’s desc() doesn’t show the block\n> information.\n\nHaving this \"pg_waldump\" kind of format in this place \n(rm_redo_error_callback()) ensures that we'll always see the same piece \nof information during rm_redo.\n\nI think it's good to guarantee that we'll always see the same piece of \ninformation (should a new RM desc() be created in the future for \nexample), even if it could lead to some information overlap in some cases.\n\n>> It may be\n>> worth thinking about showing more information for has_image and\n>> apply_image if a block is in_use?\n> Yes. I’m okay with adding information for has_image and apply_image\n> but IMHO I'm not sure how these shown in errcontext would help. If an\n> error related to has_image or apply_image happens, errmsg should show\n> something detailed information about FPI.\n\nI am ok too, but I am also not sure that errcontext is the right place \nfor that.\n\nThanks\n\nBertrand\n\n>\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 17 Aug 2020 17:47:13 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add information to rm_redo_error_callback()"
},
{
"msg_contents": "On 2020-Aug-17, Drouvot, Bertrand wrote:\n\n> Having this \"pg_waldump\" kind of format in this place\n> (rm_redo_error_callback()) ensures that we'll always see the same piece of\n> information during rm_redo.\n> \n> I think it's good to guarantee that we'll always see the same piece of\n> information (should a new RM desc() be created in the future for example),\n> even if it could lead to some information overlap in some cases.\n\nI agree.\n\nI think we should treat the changes to remove rm_desc-specific info\nitems that are redundant as separate improvements that don't need to\nblock this patch. They would be, at worst, only minor annoyances.\nAnd the removal, as was said, can affect other things that we might want\nto think about separately.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 17 Aug 2020 13:39:26 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add information to rm_redo_error_callback()"
},
{
"msg_contents": "On Mon, Aug 17, 2020 at 05:47:13PM +0200, Drouvot, Bertrand wrote:\n> I think it's good to guarantee that we'll always see the same piece of\n> information (should a new RM desc() be created in the future for example),\n> even if it could lead to some information overlap in some cases.\n\n> I am ok too, but I am also not sure that errcontext is the right place for\n> that.\n\nHmm. I still think that knowing at least about a FPW could be an\ninteresting piece of information even here. Anyway, instead of\ncopying a logic that exists already in xlog_outrec(), why not moving\nthe block information print into a separate routine out of the\nWAL_DEBUG section, and just reuse the same format for the context of\nthe redo error callback? That would also be more consistent with what\nwe do in pg_waldump where we don't show the fork name of a block when\nit is on a MAIN_FORKNUM. And this would avoid a third copy of the\nsame logic. If we add the XID, previous LSN and the record length\non the stack of what is printed, we could just reuse the existing\nroutine, still that's perhaps too much information displayed.\n--\nMichael",
"msg_date": "Thu, 24 Sep 2020 15:03:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add information to rm_redo_error_callback()"
},
{
"msg_contents": "On Thu, Sep 24, 2020 at 03:03:46PM +0900, Michael Paquier wrote:\n> Hmm. I still think that knowing at least about a FPW could be an\n> interesting piece of information even here. Anyway, instead of\n> copying a logic that exists already in xlog_outrec(), why not moving\n> the block information print into a separate routine out of the\n> WAL_DEBUG section, and just reuse the same format for the context of\n> the redo error callback? That would also be more consistent with what\n> we do in pg_waldump where we don't show the fork name of a block when\n> it is on a MAIN_FORKNUM. And this would avoid a third copy of the\n> same logic. If we add the XID, previous LSN and the record length\n> on the stack of what is printed, we could just reuse the existing\n> routine, still that's perhaps too much information displayed.\n\nSeeing nothing, I took a swing at that, and finished with the\nattached that refactors the logic and prints the block information as\nwanted. Any objections to that?\n--\nMichael",
"msg_date": "Thu, 1 Oct 2020 16:41:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add information to rm_redo_error_callback()"
},
{
"msg_contents": "Hi,\n\nOn 10/1/20 9:41 AM, Michael Paquier wrote:\n> On Thu, Sep 24, 2020 at 03:03:46PM +0900, Michael Paquier wrote:\n>> Hmm. I still think that knowing at least about a FPW could be an\n>> interesting piece of information even here. Anyway, instead of\n>> copying a logic that exists already in xlog_outrec(), why not moving\n>> the block information print into a separate routine out of the\n>> WAL_DEBUG section, and just reuse the same format for the context of\n>> the redo error callback? That would also be more consistent with what\n>> we do in pg_waldump where we don't show the fork name of a block when\n>> it is on a MAIN_FORKNUM. And this would avoid a third copy of the\n>> same logic. If we add the XID, previous LSN and the record length\n>> on the stack of what is printed, we could just reuse the existing\n>> routine, still that's perhaps too much information displayed.\n> Seeing nothing, I took a swing at that, and finished with the\n> attached that refactors the logic and prints the block information as\n> wanted. Any objections to that?\n\nSorry for the late reply and thanks for looking at it!\n\nHad a look at it and did a few tests: looks all good to me.\n\nNo objections at all, thanks!\n\nBertrand\n\n> --\n> Michael\n\n\n",
"msg_date": "Thu, 1 Oct 2020 11:18:30 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add information to rm_redo_error_callback()"
},
{
"msg_contents": "On Thu, Oct 01, 2020 at 11:18:30AM +0200, Drouvot, Bertrand wrote:\n> Had a look at it and did a few tests: looks all good to me.\n> \n> No objections at all, thanks!\n\nThanks for double-checking. Applied, then.\n--\nMichael",
"msg_date": "Fri, 2 Oct 2020 09:47:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add information to rm_redo_error_callback()"
},
{
"msg_contents": "\nOn 10/2/20 2:47 AM, Michael Paquier wrote:\n> On Thu, Oct 01, 2020 at 11:18:30AM +0200, Drouvot, Bertrand wrote:\n>> Had a look at it and did a few tests: looks all good to me.\n>>\n>> No objections at all, thanks!\n> Thanks for double-checking. Applied, then.\n\nThanks!\n\nBertrand\n\n\n\n",
"msg_date": "Fri, 2 Oct 2020 07:33:39 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Add information to rm_redo_error_callback()"
}
] |
[
{
"msg_contents": "If shutdown (non hot enabled) standby and promote the standby using\npromote_trigger_file via pg_ctl start with -w (wait), currently pg_ctl\nreturns as soon as recovery is started. Instead would be helpful if\npg_ctl can wait till PM_STATUS_READY for this case, given promotion is\nrequested.\n\npg_ctl -w returns as soon as recovery is started for non hot enabled\nstandby because PM_STATUS_STANDBY is written\non PMSIGNAL_RECOVERY_STARTED. Given the intent to promote the standby\nusing promote_trigger_file, it would be better to not write\nPM_STATUS_STANDBY, instead let promotion complete and return only\nafter connections can be actually accepted.\n\nSeems helpful behavior for users, though I am not sure about how much\npromote_trigger_file is used with non hot enabled standbys. This is\nsomething which will help to solidify some of the tests in Greenplum\nhence checking interest for the same here.\n\nIt's doable via below patch:\n\ndiff --git a/src/backend/postmaster/postmaster.c\nb/src/backend/postmaster/postmaster.c\nindex 5b5fc97c72..c49010aa5a 100644\n--- a/src/backend/postmaster/postmaster.c\n+++ b/src/backend/postmaster/postmaster.c\n@@ -5197,6 +5197,7 @@ sigusr1_handler(SIGNAL_ARGS)\n if (CheckPostmasterSignal(PMSIGNAL_RECOVERY_STARTED) &&\n pmState == PM_STARTUP && Shutdown == NoShutdown)\n {\n+ bool promote_trigger_file_exist = false;\n /* WAL redo has started. We're out of reinitialization. */\n FatalError = false;\n AbortStartTime = 0;\n@@ -5218,12 +5219,25 @@ sigusr1_handler(SIGNAL_ARGS)\n if (XLogArchivingAlways())\n PgArchPID = pgarch_start();\n\n+ {\n+ /*\n+ * if promote trigger file exist we don't wish to\nconvey\n+ * PM_STATUS_STANDBY, instead wish pg_ctl -w to\nwait till\n+ * connections can be actually accepted by the\ndatabase.\n+ */\n+ struct stat stat_buf;\n+ if (PromoteTriggerFile != NULL &&\n+ strcmp(PromoteTriggerFile, \"\") != 0 &&\n+ stat(PromoteTriggerFile, &stat_buf) == 0)\n+ promote_trigger_file_exist = true;\n+ }\n+\n /*\n * If we aren't planning to enter hot standby mode later,\ntreat\n * RECOVERY_STARTED as meaning we're out of startup, and\nreport status\n * accordingly.\n */\n- if (!EnableHotStandby)\n+ if (!EnableHotStandby && !promote_trigger_file_exist)\n {\n AddToDataDirLockFile(LOCK_FILE_LINE_PM_STATUS,\nPM_STATUS_STANDBY);\n #ifdef USE_SYSTEMD\n\n\n-- \n*Ashwin Agrawal (VMware)*",
"msg_date": "Tue, 4 Aug 2020 12:01:45 -0700",
"msg_from": "Ashwin Agrawal <ashwinstar@gmail.com>",
"msg_from_op": true,
"msg_subject": "For standby pg_ctl doesn't wait for PM_STATUS_READY in presence of\n promote_trigger_file"
},
{
"msg_contents": "Hello.\n\nAt Tue, 4 Aug 2020 12:01:45 -0700, Ashwin Agrawal <ashwinstar@gmail.com> wrote in \n> If shutdown (non hot enabled) standby and promote the standby using\n> promote_trigger_file via pg_ctl start with -w (wait), currently pg_ctl\n> returns as soon as recovery is started. Instead would be helpful if\n> pg_ctl can wait till PM_STATUS_READY for this case, given promotion is\n> requested.\n> \n> pg_ctl -w returns as soon as recovery is started for non hot enabled\n> standby because PM_STATUS_STANDBY is written\n> on PMSIGNAL_RECOVERY_STARTED. Given the intent to promote the standby\n> using promote_trigger_file, it would be better to not write\n> PM_STATUS_STANDBY, instead let promotion complete and return only\n> after connections can be actually accepted.\n> \n> Seems helpful behavior for users, though I am not sure about how much\n> promote_trigger_file is used with non hot enabled standbys. This is\n> something which will help to solidify some of the tests in Greenplum\n> hence checking interest for the same here.\n> \n> It's doable via below patch:\n\nIt is apparently strange that \"pg_ctl start\" waits for a server to\npromote. Is there any reason you use that way instead of pg_ctl start\nthen pg_ctl promote?\n\n> diff --git a/src/backend/postmaster/postmaster.c\n> b/src/backend/postmaster/postmaster.c\n> index 5b5fc97c72..c49010aa5a 100644\n> --- a/src/backend/postmaster/postmaster.c\n> +++ b/src/backend/postmaster/postmaster.c\n> @@ -5197,6 +5197,7 @@ sigusr1_handler(SIGNAL_ARGS)\n> if (CheckPostmasterSignal(PMSIGNAL_RECOVERY_STARTED) &&\n> pmState == PM_STARTUP && Shutdown == NoShutdown)\n> {\n> + bool promote_trigger_file_exist = false;\n> /* WAL redo has started. We're out of reinitialization. */\n> FatalError = false;\n> AbortStartTime = 0;\n> @@ -5218,12 +5219,25 @@ sigusr1_handler(SIGNAL_ARGS)\n> if (XLogArchivingAlways())\n> PgArchPID = pgarch_start();\n> \n> + {\n> + /*\n> + * if promote trigger file exist we don't wish to\n> convey\n> + * PM_STATUS_STANDBY, instead wish pg_ctl -w to\n> wait till\n> + * connections can be actually accepted by the\n> database.\n> + */\n> + struct stat stat_buf;\n> + if (PromoteTriggerFile != NULL &&\n> + strcmp(PromoteTriggerFile, \"\") != 0 &&\n> + stat(PromoteTriggerFile, &stat_buf) == 0)\n> + promote_trigger_file_exist = true;\n> + }\n> +\n> /*\n> * If we aren't planning to enter hot standby mode later,\n> treat\n> * RECOVERY_STARTED as meaning we're out of startup, and\n> report status\n> * accordingly.\n> */\n> - if (!EnableHotStandby)\n> + if (!EnableHotStandby && !promote_trigger_file_exist)\n> {\n> AddToDataDirLockFile(LOCK_FILE_LINE_PM_STATUS,\n> PM_STATUS_STANDBY);\n> #ifdef USE_SYSTEMD\n\nAddition the above, in regards to the patch, I'm not sure it's good\nthing that postmaster process gets conscious of\nPromoteTriggerFile.\n\nMaybe we could change the behavior of \"pg_ctl start\" to wait for\nconsistecy point when archive recovery runs (slightly similarly to the\ncase of standbys) by adding a PM-signal, say,\nPMSIGNAL_CONSISTENCY_REACHED?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 05 Aug 2020 14:46:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: For standby pg_ctl doesn't wait for PM_STATUS_READY in\n presence of promote_trigger_file"
}
] |
[
{
"msg_contents": "I'm testing with a customer's data on pg13dev and got output for which Peak\nMemory doesn't look right/useful. I reproduced it on 565f16902.\n\nCREATE TABLE p(i int) PARTITION BY RANGE(i);\nCREATE TABLE p1 PARTITION OF p FOR VALUES FROM (0)TO(1000);\nCREATE TABLE p2 PARTITION OF p FOR VALUES FROM (1000)TO(2000);\nCREATE TABLE p3 PARTITION OF p FOR VALUES FROM (2000)TO(3000);\nINSERT INTO p SELECT i%3000 FROM generate_series(1,999999)i;\nVACUUM ANALYZE p;\n\npostgres=# explain(analyze,settings) SELECT i, COUNT(1) FROM p GROUP BY 1;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=7469.00..14214.45 rows=2502 width=12) (actual time=489.409..514.209 rows=3000 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Append (cost=6469.00..12964.25 rows=1251 width=12) (actual time=476.291..477.179 rows=1000 loops=3)\n -> HashAggregate (cost=6487.99..6497.99 rows=1000 width=12) (actual time=474.454..475.203 rows=1000 loops=1)\n Group Key: p.i\n Batches: 1 Memory Usage: 0kB\n Worker 0: Batches: 1 Memory Usage: 193kB\n Worker 1: Batches: 1 Memory Usage: 0kB\n -> Seq Scan on p1 p (cost=0.00..4817.99 rows=333999 width=4) (actual time=0.084..100.677 rows=333999 loops=1)\n -> HashAggregate (cost=6469.00..6479.00 rows=1000 width=12) (actual time=468.517..469.272 rows=1000 loops=1)\n Group Key: p_1.i\n Batches: 1 Memory Usage: 0kB\n Worker 0: Batches: 1 Memory Usage: 0kB\n Worker 1: Batches: 1 Memory Usage: 193kB\n -> Seq Scan on p2 p_1 (cost=0.00..4804.00 rows=333000 width=4) (actual time=0.082..102.154 rows=333000 loops=1)\n -> HashAggregate (cost=6469.00..6479.00 rows=1000 width=12) (actual time=485.887..486.509 rows=1000 loops=1)\n Group Key: p_2.i\n Batches: 1 Memory Usage: 193kB\n Worker 0: Batches: 1 Memory Usage: 0kB\n Worker 1: Batches: 1 Memory Usage: 0kB\n -> Seq Scan on p3 p_2 (cost=0.00..4804.00 rows=333000 width=4) (actual time=0.043..104.631 rows=333000 loops=1)\n Settings: effective_io_concurrency = '0', enable_partitionwise_aggregate = 'on', enable_partitionwise_join = 'on', work_mem = '127MB'\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 4 Aug 2020 20:21:05 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg13dev: explain partial, parallel hashagg, and memory use"
},
{
"msg_contents": "On Wed, 5 Aug 2020 at 13:21, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> I'm testing with a customer's data on pg13dev and got output for which Peak\n> Memory doesn't look right/useful. I reproduced it on 565f16902.\n\nLikely the sanity of those results depends on whether you think that\nthe Memory Usage reported outside of the workers is meant to be the\nsum of all processes or the memory usage for the leader backend.\n\nAll that's going on here is that the Parallel Append is using some\nparallel safe paths and giving one to each worker. The 2 workers take\nthe first 2 subpaths and the leader takes the third. The memory usage\nreported helps confirm that's the case.\n\nCan you explain what you'd want to see changed about this? Or do you\nwant to see the non-parallel worker memory be the sum of all workers?\nSort does not seem to do that, so I'm not sure if we should consider\nhash agg as an exception to that.\n\nOne thing I did notice from playing with this table is that Sort does\nnot show the memory used by the leader process when it didn't do any\nof the work itself.\n\npostgres=# set parallel_leader_participation =off;\nSET\npostgres=# explain analyze select i from p group by i;\n\n -> Sort (cost=59436.92..60686.92 rows=500000 width=4)\n(actual time=246.836..280.985 rows=500000 loops=2)\n Sort Key: p.i\n Worker 0: Sort Method: quicksort Memory: 27898kB\n Worker 1: Sort Method: quicksort Memory: 55842kB\n\nWhereas with the leader helping out we get:\n\n-> Sort (cost=51284.39..52326.05 rows=416666 width=4) (actual\ntime=191.814..213.418 rows=333333 loops=3)\n Sort Key: p.i\n Sort Method: quicksort Memory: 33009kB\n Worker 0: Sort Method: quicksort Memory: 25287kB\n Worker 1: Sort Method: quicksort Memory: 25445kB\n\nMaybe we should do the same for hash agg when the leader didn't assist?\n\nDavid\n\n\n",
"msg_date": "Wed, 5 Aug 2020 13:44:17 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg13dev: explain partial, parallel hashagg, and memory use"
},
{
"msg_contents": "On Tue, Aug 4, 2020 at 9:44 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 5 Aug 2020 at 13:21, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > I'm testing with a customer's data on pg13dev and got output for which Peak\n> > Memory doesn't look right/useful. I reproduced it on 565f16902.\n>\n> Likely the sanity of those results depends on whether you think that\n> the Memory Usage reported outside of the workers is meant to be the\n> sum of all processes or the memory usage for the leader backend.\n>\n> All that's going on here is that the Parallel Append is using some\n> parallel safe paths and giving one to each worker. The 2 workers take\n> the first 2 subpaths and the leader takes the third. The memory usage\n> reported helps confirm that's the case.\n>\n> Can you explain what you'd want to see changed about this? Or do you\n> want to see the non-parallel worker memory be the sum of all workers?\n> Sort does not seem to do that, so I'm not sure if we should consider\n> hash agg as an exception to that.\n\nI've always found the way we report parallel workers in EXPLAIN quite\nconfusing. I realize it matches the actual implementation model (the\nleader often is also \"another worker\"), but I think the natural\nexpectation from a user perspective would be that you'd show as\nworkers all backends (including the leader) that did work, and then\naggregate into a summary line (where the leader is displayed now).\n\nIn the current output there's nothing really to hint to the user that\nthe model is leader + workers and that the \"summary\" line is really\nthe leader. If I were to design this from scratch, I'd want to propose\ndoing what I said above (summary aggregate line + treat leader as a\nworker line, likely with a \"leader\" tag), but that seems like a big\nchange to make now. On the other hand, perhaps designating what looks\nlike a summary line as the \"leader\" or some such would help clear up\nthe confusion? 
Perhaps it could also say \"Participating\" or\n\"Non-participating\"?\n\nJames\n\n\n",
"msg_date": "Tue, 4 Aug 2020 22:01:18 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg13dev: explain partial, parallel hashagg, and memory use"
},
{
"msg_contents": "On Wed, Aug 05, 2020 at 01:44:17PM +1200, David Rowley wrote:\n> On Wed, 5 Aug 2020 at 13:21, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > I'm testing with a customer's data on pg13dev and got output for which Peak\n> > Memory doesn't look right/useful. I reproduced it on 565f16902.\n> \n> Likely the sanity of those results depends on whether you think that\n> the Memory Usage reported outside of the workers is meant to be the\n> sum of all processes or the memory usage for the leader backend.\n> \n> All that's going on here is that the Parallel Append is using some\n> parallel safe paths and giving one to each worker. The 2 workers take\n> the first 2 subpaths and the leader takes the third. The memory usage\n> reported helps confirm that's the case.\n\nI'm not sure there's a problem, but all the 0kB were suspicious to me. \n\nI think you're saying that one worker alone handled each HashAgg, and the other\nworker (and leader) show 0kB. I guess in my naive thinking it's odd to show a\nworker which wasn't active for that subpath (at least in text output). But I\ndon't know the expected behavior of parallel hashagg, so that explains most of\nmy confusion. \n\nOn Tue, Aug 04, 2020 at 10:01:18PM -0400, James Coleman wrote:\n> Perhaps it could also say \"Participating\" or \"Non-participating\"?\n\nYes, that'd help me a lot :)\n\nAlso odd (to me). 
If I encourage more workers, there are \"slots\" for each\n\"planned\" worker, even though fewer were launched:\n\npostgres=# ALTER TABLE p3 SET (parallel_workers=11);\npostgres=# SET max_parallel_workers_per_gather=11;\n Finalize HashAggregate (cost=10299.64..10329.64 rows=3000 width=12) (actual time=297.793..299.933 rows=3000 loops=1)\n Group Key: p.i\n Batches: 1 Memory Usage: 625kB\n -> Gather (cost=2928.09..10134.64 rows=33000 width=12) (actual time=233.398..282.429 rows=13000 loops=1)\n Workers Planned: 11\n Workers Launched: 7\n -> Parallel Append (cost=1928.09..5834.64 rows=3000 width=12) (actual time=214.358..232.980 rows=1625 loops=8)\n -> Partial HashAggregate (cost=1933.46..1943.46 rows=1000 width=12) (actual time=167.936..171.345 rows=1000 loops=4)\n Group Key: p.i\n Batches: 1 Memory Usage: 0kB\n Worker 0: Batches: 1 Memory Usage: 193kB\n Worker 1: Batches: 1 Memory Usage: 193kB\n Worker 2: Batches: 1 Memory Usage: 0kB\n Worker 3: Batches: 1 Memory Usage: 0kB\n Worker 4: Batches: 1 Memory Usage: 193kB\n Worker 5: Batches: 1 Memory Usage: 0kB\n Worker 6: Batches: 1 Memory Usage: 193kB\n Worker 7: Batches: 0 Memory Usage: 0kB\n Worker 8: Batches: 0 Memory Usage: 0kB\n Worker 9: Batches: 0 Memory Usage: 0kB\n Worker 10: Batches: 0 Memory Usage: 0kB\n\n\nThanks,\n-- \nJustin\n\n\n",
"msg_date": "Tue, 4 Aug 2020 21:13:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg13dev: explain partial, parallel hashagg, and memory use"
},
{
"msg_contents": "On Wed, 5 Aug 2020 at 14:13, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Also odd (to me). If I encourage more workers, there are \"slots\" for each\n> \"planned\" worker, even though fewer were launched:\n\nLooking at explain.c for \"num_workers; \" (including the final space at\nthe end), looking at each for loop that loops over each worker, quite\na number of those locations have a condition that skips the worker.\n\nFor example, show_sort_info() does\n\nif (sinstrument->sortMethod == SORT_TYPE_STILL_IN_PROGRESS)\ncontinue; /* ignore any unfilled slots */\n\nSo maybe Hash Agg should be doing something similar. Additionally,\nmaybe it should not show the leader details if the leader didn't help.\n\nDavid\n\n\n",
"msg_date": "Wed, 5 Aug 2020 14:27:26 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg13dev: explain partial, parallel hashagg, and memory use"
},
{
"msg_contents": "On Wed, 5 Aug 2020 at 14:27, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 5 Aug 2020 at 14:13, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Also odd (to me). If I encourage more workers, there are \"slots\" for each\n> > \"planned\" worker, even though fewer were launched:\n>\n> Looking at explain.c for \"num_workers; \" (including the final space at\n> the end), looking at each for loop that loops over each worker, quite\n> a number of those locations have a condition that skips the worker.\n>\n> For example, show_sort_info() does\n>\n> if (sinstrument->sortMethod == SORT_TYPE_STILL_IN_PROGRESS)\n> continue; /* ignore any unfilled slots */\n>\n> So maybe Hash Agg should be doing something similar. Additionally,\n> maybe it should not show the leader details if the leader didn't help.\n\nHere's what I had in mind.\n\nThe unpatched format got even more broken with EXPLAIN (ANALYZE,\nVERBOSE), so this is certainly a bug fix.\n\nDavid",
"msg_date": "Wed, 5 Aug 2020 17:25:25 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg13dev: explain partial, parallel hashagg, and memory use"
},
{
"msg_contents": "On Wed, 5 Aug 2020 at 17:25, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 5 Aug 2020 at 14:27, David Rowley <dgrowleyml@gmail.com> wrote:\n> > So maybe Hash Agg should be doing something similar. Additionally,\n> > maybe it should not show the leader details if the leader didn't help.\n>\n> Here's what I had in mind.\n\nJust coming back to this. I'd like to push it soon, but it's currently\nlate here. I'll look at pushing it in my morning in about 8 hours\ntime.\n\nIf anyone has any comments please let me know before then.\n\nDavid\n\n\n",
"msg_date": "Fri, 7 Aug 2020 00:44:19 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg13dev: explain partial, parallel hashagg, and memory use"
},
{
"msg_contents": "On Fri, 7 Aug 2020 at 00:44, David Rowley <dgrowleyml@gmail.com> wrote:\n> Just coming back to this. I'd like to push it soon, but it's currently\n> late here. I'll look at pushing it in my morning in about 8 hours\n> time.\n\nPushed.\n\nDavid\n\n\n",
"msg_date": "Fri, 7 Aug 2020 10:24:09 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg13dev: explain partial, parallel hashagg, and memory use"
}
] |
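The slot-skipping fix discussed in the thread above (the `continue; /* ignore any unfilled slots */` pattern quoted from show_sort_info(), which David Rowley's committed patch applied to Hash Agg as well) can be sketched as a standalone example. This is a simplified, hypothetical stand-in for illustration only: the struct and function names are invented and do not match the real explain.c structures.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical, simplified stand-in for the per-worker instrumentation
 * array that explain.c walks; the real PostgreSQL structures differ. */
typedef struct WorkerMem
{
    int  batches;       /* 0 means the slot was planned but never filled */
    long mem_used_kb;
} WorkerMem;

/*
 * Write one "Worker N: ..." line per worker that actually did work,
 * skipping unfilled slots -- mirroring the "ignore any unfilled slots"
 * pattern from show_sort_info().  Returns the number of lines written.
 */
int
report_workers(const WorkerMem *workers, int num_workers,
               char *out, size_t outsz)
{
    int shown = 0;

    out[0] = '\0';
    for (int n = 0; n < num_workers; n++)
    {
        char line[64];

        if (workers[n].batches == 0)
            continue;           /* ignore any unfilled slots */
        snprintf(line, sizeof(line), "Worker %d: Memory Usage: %ldkB\n",
                 n, workers[n].mem_used_kb);
        strncat(out, line, outsz - strlen(out) - 1);
        shown++;
    }
    return shown;
}
```

With Justin's 11-planned/7-launched example, this style would suppress the `Batches: 0 Memory Usage: 0kB` lines for workers 7 through 10 instead of printing empty slots.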
[
{
"msg_contents": "Hi,\n\nI see that release 13 is currently in beta.\nWhen will be the official production release of 13 be out?\n\nWe need to see if we can include this as part of our product release cycle.\n\nRegards,\nJoel",
"msg_date": "Wed, 5 Aug 2020 06:08:15 +0000",
"msg_from": "\"Joel Mariadasan (jomariad)\" <jomariad@cisco.com>",
"msg_from_op": true,
"msg_subject": "Reg. Postgres 13"
},
{
"msg_contents": "On Wed, Aug 5, 2020 at 8:08 AM Joel Mariadasan (jomariad) <\njomariad@cisco.com> wrote:\n\n> Hi,\n>\n> I see that release 13 is currently in beta.\n>\n> When will be the official production release of 13 be out?\n>\n> We need to see if we can include this as part of our product release cycle.\n\nHello!\n\nYou can find this info on https://www.postgresql.org/developer/roadmap/.\nThe current roadmap is Q3, with a target hopefully in the Sept/Oct\ntimeframe (but no guarantees, of course).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Wed, 5 Aug 2020 09:29:50 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Reg. Postgres 13"
},
{
"msg_contents": "On Wed, Aug 5, 2020 at 06:08:15AM +0000, Joel Mariadasan (jomariad) wrote:\n> Hi,\n> \n> I see that release 13 is currently in beta.\n> \n> When will be the official production release of 13 be out?\n> \n> We need to see if we can include this as part of our product release cycle.\n\nLook here:\n\n\thttps://www.postgresql.org/developer/roadmap/\n\n\t The next major release of PostgreSQL is planned to be the 13\n\trelease. A tentative schedule for this version has a release in the\n\tthird quarter of 2020. \n\nMost likely Sept/Oct.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 5 Aug 2020 20:35:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Reg. Postgres 13"
}
] |
[
{
"msg_contents": "A colleague of mine brought to my attention that pg_rewind is not crash \nsafe. If it is interrupted for any reason, it leaves behind a data \ndirectory with a mix of data from the source and target images. If \nyou're \"lucky\", the server will start up, but it can be in an \ninconsistent state. That's obviously not good. It would be nice to:\n\n1. Detect the situation, and refuse to start up.\n\nOr even better:\n\n2. Make pg_rewind crash safe, so that you could safely restart it if \nit's interrupted.\n\nHas anyone else run into this? How did you work around it?\n\nIt doesn't seem hard to detect this. pg_rewind can somehow \"poison\" the \ndata directory just before it starts making irreversible changes. I'm \nthinking of updating the 'state' in the control file to a new \nPG_IN_REWIND value.\n\nIt also doesn't seem too hard to make it restartable. As long as you \npoint it to the same source server, it is already almost safe to run \npg_rewind again. If we re-order the way it writes the control or backup \nfiles and makes other changes, pg_rewind can verify that you pointed it \nat the same or compatible primary as before.\n\nI think there's one corner case with truncated files, if pg_rewind has \nextended a file by copying missing \"tail\" from the source system, but \nthe system crashes before it's fsynced to disk. But I think we can fix \nthat too, by paying attention to SMGR_TRUNCATE records when scanning the \nsource WAL.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 5 Aug 2020 21:13:08 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "pg_rewind is not crash safe"
},
{
"msg_contents": "\n\n> On 5 Aug 2020, at 23:13, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> \n> A colleague of mine brought to my attention that pg_rewind is not crash safe. If it is interrupted for any reason, it leaves behind a data directory with a mix of data from the source and target images. If you're \"lucky\", the server will start up, but it can be in an inconsistent state. \n\nFWIW we routinely encounter cases when, after an unsuccessful pg_rewind, databases refuse to start with a \"contrecord requested\" message.\nI did not investigate this in detail yet, but I think it is a result of a wrong redo recptr written to the control file (due to interruption or insufficient present WAL segments).\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 6 Aug 2020 09:55:51 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind is not crash safe"
}
] |
[
{
"msg_contents": "Hi all,\n\nAs $subject says, pg_test_fsync and pg_test_timing don't really check\nthe range of option values specified. It is possible for example to\nmake pg_test_fsync run an infinite amount of time, and pg_test_timing\ndoes not handle overflows with --duration at all.\n\nThese are far from being critical issues, but let's fix them at least\non HEAD. So, please see the attached, where I have also added some\nbasic TAP tests for both tools.\n\nThanks,\n--\nMichael",
"msg_date": "Thu, 6 Aug 2020 15:27:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
{
"msg_contents": "On 2020-08-06 08:27, Michael Paquier wrote:\n> As $subject says, pg_test_fsync and pg_test_timing don't really check\n> the range of option values specified. It is possible for example to\n> make pg_test_fsync run an infinite amount of time, and pg_test_timing\n> does not handle overflows with --duration at all.\n> \n> These are far from being critical issues, but let's fix them at least\n> on HEAD. So, please see the attached, where I have also added some\n> basic TAP tests for both tools.\n\nAccording to the POSIX standard, atoi() is not required to do any error \nchecking, and if you want error checking, you should use strtol().\n\nAnd if you do that, you might as well change the variables to unsigned \nand use strtoul(), and then drop the checks for <=0. I would allow 0. \nIt's not very useful, but it's not harmful and could be applicable in \ntesting.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 4 Sep 2020 23:24:39 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
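Peter's point about atoi() versus strtol() can be illustrated with a minimal sketch of the checking that strtol() makes possible: rejecting empty input, trailing garbage, and out-of-range values, none of which atoi() can report. The helper name and the [min, max] interface here are assumptions for illustration, not the actual pg_test_fsync/pg_test_timing code.

```c
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Parse a command-line option value with full error checking, the way
 * atoi() cannot.  Resetting errno first matters because strtol() only
 * sets it on failure.  (Hypothetical helper for illustration.)
 */
bool
parse_option_value(const char *arg, long min, long max, long *result)
{
    char *endptr;
    long  val;

    errno = 0;
    val = strtol(arg, &endptr, 10);

    if (endptr == arg || *endptr != '\0')
        return false;           /* no digits at all, or trailing garbage */
    if (errno == ERANGE)
        return false;           /* value does not fit in a long */
    if (val < min || val > max)
        return false;           /* outside the allowed range */

    *result = val;
    return true;
}
```

With atoi(), inputs like `"5x"`, `""`, or a 20-digit number would all silently yield some integer; with this shape they are reported as errors to the caller.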
{
"msg_contents": "On Fri, Sep 04, 2020 at 11:24:39PM +0200, Peter Eisentraut wrote:\n> According to the POSIX standard, atoi() is not required to do any error\n> checking, and if you want error checking, you should use strtol().\n> \n> And if you do that, you might as well change the variables to unsigned and\n> use strtoul(), and then drop the checks for <=0.\n\nSwitching to unsigned makes sense, indeed.\n\n> I would allow 0. It's not\n> very useful, but it's not harmful and could be applicable in testing.\n\nHmm, OK. For pg_test_fsync, 0 means infinity, and for pg_test_timing\nthat means stopping immediately (we currently don't allow that). How\ndoes this apply to testing? For pg_test_fsync, using 0 would mean to\njust remain stuck in the first fsync() pattern, while for\npg_test_timing this means doing no test loops at all, generating a\nuseless log once done. Or do you mean to change the logic of\npg_test_fsync so that --secs-per-test=0 means doing one single write?\nThat's something I thought about for this thread, but I am not sure\nthat the extra regression test gain is worth more complexity in this\ncode.\n--\nMichael",
"msg_date": "Sun, 6 Sep 2020 12:04:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
{
"msg_contents": "On 2020-09-06 05:04, Michael Paquier wrote:\n>> I would allow 0. It's not\n>> very useful, but it's not harmful and could be applicable in testing.\n> \n> Hmm, OK. For pg_test_fsync, 0 means infinity, and for pg_test_timing\n> that means stopping immediately (we currently don't allow that). How\n> does this apply to testing? For pg_test_fsync, using 0 would mean to\n> just remain stuck in the first fsync() pattern, while for\n> pg_test_fsync this means doing no test loops at all, generating a\n> useless log once done. Or do you mean to change the logic of\n> pg_test_fsync so as --secs-per-test=0 means doing one single write?\n> That's something I thought about for this thread, but I am not sure\n> that the extra regression test gain is worth more complexity in this\n> code.\n\nI think in general doing something 0 times should be allowed if possible.\n\nHowever, I see that in the case of pg_test_fsync you end up in alarm(0), \nwhich does something different, so it's okay in that case to disallow it.\n\nI notice that the error checking you introduce is different from the \nchecks for pgbench -t and -T (the latter having no errno checks). I'm \nnot sure which is correct, but it's perhaps worth making them the same.\n\n(pgbench -t 0, which is also currently not allowed, is a good example of \nwhy this could be useful, because that would allow checking whether the \nscript etc. can be loaded without running an actual test.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 7 Sep 2020 10:06:57 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
{
"msg_contents": "On Mon, Sep 07, 2020 at 10:06:57AM +0200, Peter Eisentraut wrote:\n> However, I see that in the case of pg_test_fsync you end up in alarm(0),\n> which does something different, so it's okay in that case to disallow it.\n\nYep.\n\n> I notice that the error checking you introduce is different from the checks\n> for pgbench -t and -T (the latter having no errno checks). I'm not sure\n> which is correct, but it's perhaps worth making them the same.\n\npgbench currently uses atoi() to parse the options of -t and -T. Are\nyou suggesting to switch that to strtoXX() as well or perhaps you are\nreferring to the parsing of the weight in parseScriptWeight()? FWIW,\nthe error handling introduced in this patch is similar to what we do\nfor example in pg_resetwal. This has its own problems as strtoul()\nwould not report ERANGE except for values higher than ULONG_MAX, but\nthe returned results are stored in 32 bits. We could switch to just\nuse uint64 where we could of course, but is that really worth it for\nsuch tools? For example, pg_test_timing could overflow the\ntotal_timing calculated if using a too high value, but nobody would\nuse such values anyway. So I'd rather just use uint32 and call it a\nday, for simplicity's sake mainly..\n\n> (pgbench -t 0, which is also currently not allowed, is a good example of why\n> this could be useful, because that would allow checking whether the script\n> etc. can be loaded without running an actual test.)\n\nPerhaps. That looks like a separate item to me though.\n--\nMichael",
"msg_date": "Thu, 10 Sep 2020 16:59:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
{
"msg_contents": "On 2020-09-10 09:59, Michael Paquier wrote:\n>> I notice that the error checking you introduce is different from the checks\n>> for pgbench -t and -T (the latter having no errno checks). I'm not sure\n>> which is correct, but it's perhaps worth making them the same.\n> pgbench currently uses atoi() to parse the options of -t and -T. Are\n> you suggesting to switch that to strtoXX() as well or perhaps you are\n> referring to the parsing of the weight in parseScriptWeight()? FWIW,\n> the error handling introduced in this patch is similar to what we do\n> for example in pg_resetwal. This has its own problems as strtoul()\n> would not report ERANGE except for values higher than ULONG_MAX, but\n> the returned results are stored in 32 bits. We could switch to just\n> use uint64 where we could of course, but is that really worth it for\n> such tools? For example, pg_test_timing could overflow the\n> total_timing calculated if using a too high value, but nobody would\n> use such values anyway. So I'd rather just use uint32 and call it a\n> day, for simplicity's sake mainly..\n\nThe first patch you proposed checks for errno == ERANGE, but pgbench \ncode doesn't do that. So one of them is not correct.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 10 Sep 2020 15:59:20 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
{
"msg_contents": "On Thu, Sep 10, 2020 at 03:59:20PM +0200, Peter Eisentraut wrote:\n> The first patch you proposed checks for errno == ERANGE, but pgbench code\n> doesn't do that. So one of them is not correct.\n\nSorry for the confusion, I misunderstood what you were referring to.\nYes, the first patch is wrong to add the check on errno. FWIW, I\nthought about your point to use strtol() but that does not seem worth\nthe complication for those tools. It is not like anybody is going to\nuse high values for these, and using uint64 to make sure that the\nboundaries are checked just adds more checks for bounds. There is\none example in pg_test_timing when compiling the total time.\n--\nMichael",
"msg_date": "Fri, 11 Sep 2020 16:08:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
{
"msg_contents": "On 2020-09-11 09:08, Michael Paquier wrote:\n> On Thu, Sep 10, 2020 at 03:59:20PM +0200, Peter Eisentraut wrote:\n>> The first patch you proposed checks for errno == ERANGE, but pgbench code\n>> doesn't do that. So one of them is not correct.\n> \n> Sorry for the confusion, I misunderstood what you were referring to.\n> Yes, the first patch is wrong to add the check on errno. FWIW, I\n> thought about your point to use strtol() but that does not seem worth\n> the complication for those tools. It is not like anybody is going to\n> use high values for these, and using uint64 to make sure that the\n> boundaries are checked just add more checks for bounds. There is\n> one example in pg_test_timing when compiling the total time.\n\nI didn't mean use strtol() to be able to process larger values, but for \nthe error checking. atoi() cannot detect any errors other than ERANGE. \nSo if you are spending effort on making the option value parsing more \nrobust, relying on atoi() will result in an incomplete solution.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 15 Sep 2020 14:39:08 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
{
"msg_contents": "On Tue, Sep 15, 2020 at 02:39:08PM +0200, Peter Eisentraut wrote:\n> I didn't mean use strtol() to be able to process larger values, but for the\n> error checking. atoi() cannot detect any errors other than ERANGE. So if\n> you are spending effort on making the option value parsing more robust,\n> relying on atoi() will result in an incomplete solution.\n\nOkay, after looking at that, here is v3. This includes range checks\nas well as errno checks based on strtol(). What do you think?\n--\nMichael",
"msg_date": "Fri, 18 Sep 2020 17:22:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
{
"msg_contents": "On Fri, Sep 18, 2020 at 05:22:15PM +0900, Michael Paquier wrote:\n> Okay, after looking at that, here is v3. This includes range checks\n> as well as errno checks based on strtol(). What do you think?\n\nThis fails in the CF bot on Linux because of pg_logging_init()\nreturning with errno=ENOTTY in the TAP tests, for which I began a new\nthread:\nhttps://www.postgresql.org/message-id/20200918095713.GA20887@paquier.xyz\n\nNot sure if this will lead anywhere, but we can also address the\nfailure by enforcing errno=0 for the new calls of strtol() introduced\nin this patch. So here is an updated patch doing so.\n--\nMichael",
"msg_date": "Sun, 20 Sep 2020 12:41:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
{
"msg_contents": "On 2020-09-20 05:41, Michael Paquier wrote:\n> On Fri, Sep 18, 2020 at 05:22:15PM +0900, Michael Paquier wrote:\n>> Okay, after looking at that, here is v3. This includes range checks\n>> as well as errno checks based on strtol(). What do you think?\n> \n> This fails in the CF bot on Linux because of pg_logging_init()\n> returning with errno=ENOTTY in the TAP tests, for which I began a new\n> thread:\n> https://www.postgresql.org/message-id/20200918095713.GA20887@paquier.xyz\n> \n> Not sure if this will lead anywhere, but we can also address the\n> failure by enforcing errno=0 for the new calls of strtol() introduced\n> in this patch. So here is an updated patch doing so.\n\nI think the error checking is now structurally correct in this patch.\n\nHowever, I still think the integer type use is a bit inconsistent. In \nboth cases, using strtoul() and dealing with unsigned integer types \nbetween parsing and final use would be more consistent.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 22 Sep 2020 23:45:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
{
"msg_contents": "On Tue, Sep 22, 2020 at 11:45:14PM +0200, Peter Eisentraut wrote:\n> However, I still think the integer type use is a bit inconsistent. In both\n> cases, using strtoul() and dealing with unsigned integer types between\n> parsing and final use would be more consistent.\n\nNo objections to that either, so changed this way. I kept those\nvariables signed because applying values of 2B~4B is not really going\nto matter much here ;p\n--\nMichael",
"msg_date": "Wed, 23 Sep 2020 10:50:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
{
"msg_contents": "On 2020-09-23 03:50, Michael Paquier wrote:\n> On Tue, Sep 22, 2020 at 11:45:14PM +0200, Peter Eisentraut wrote:\n>> However, I still think the integer type use is a bit inconsistent. In both\n>> cases, using strtoul() and dealing with unsigned integer types between\n>> parsing and final use would be more consistent.\n> \n> No objections to that either, so changed this way. I kept those\n> variables signed because applying values of 2B~4B is not really going\n> to matter much here ;p\n\nThis patch mixes up unsigned int and uint32 in random ways. The \nvariable is uint32, but the format is %u and the max constant is UINT_MAX.\n\nI think just use unsigned int as the variable type. There is no need to \nuse the bit-exact types. Note that the argument of alarm() is of type \nunsigned int.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 23 Sep 2020 08:11:59 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
{
"msg_contents": "On Wed, Sep 23, 2020 at 08:11:59AM +0200, Peter Eisentraut wrote:\n> This patch mixes up unsigned int and uint32 in random ways. The variable is\n> uint32, but the format is %u and the max constant is UINT_MAX.\n> \n> I think just use unsigned int as the variable type. There is no need to use\n> the bit-exact types. Note that the argument of alarm() is of type unsigned\n> int.\n\nMakes sense, thanks.\n--\nMichael",
"msg_date": "Thu, 24 Sep 2020 16:12:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
{
"msg_contents": "On 2020-09-24 09:12, Michael Paquier wrote:\n> On Wed, Sep 23, 2020 at 08:11:59AM +0200, Peter Eisentraut wrote:\n>> This patch mixes up unsigned int and uint32 in random ways. The variable is\n>> uint32, but the format is %u and the max constant is UINT_MAX.\n>>\n>> I think just use unsigned int as the variable type. There is no need to use\n>> the bit-exact types. Note that the argument of alarm() is of type unsigned\n>> int.\n> \n> Makes sense, thanks.\n\nlooks good to me\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 25 Sep 2020 07:52:10 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
},
{
"msg_contents": "On Fri, Sep 25, 2020 at 07:52:10AM +0200, Peter Eisentraut wrote:\n> looks good to me\n\nThanks, applied.\n--\nMichael",
"msg_date": "Mon, 28 Sep 2020 10:19:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Range checks of pg_test_fsync --secs-per-test and pg_test_timing\n --duration"
}
] |
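The parsing rules the thread above settles on (reset errno before strtoul(), check it for ERANGE afterwards, cap the range to match alarm()'s unsigned int argument, and reject trailing garbage) can be sketched as a small helper. The function name and the zero-means-error convention are illustrative assumptions, not the committed pg_test_fsync/pg_test_timing code:

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/*
 * Parse a test duration given on the command line into an unsigned int,
 * the type expected by alarm().  Returns 0 on bad input; 0 is not a
 * valid duration here, so callers can treat it as "report an error and
 * exit".  Hypothetical helper, not the code that was committed.
 */
static unsigned int
parse_secs(const char *arg)
{
	char	   *endptr;
	unsigned long val;

	/* strtoul() quietly accepts a leading '-', so reject it up front */
	if (arg[0] == '-')
		return 0;

	errno = 0;					/* strtoul() only sets errno on failure */
	val = strtoul(arg, &endptr, 10);

	if (errno == ERANGE || val > UINT_MAX)
		return 0;				/* does not fit in an unsigned int */
	if (endptr == arg || *endptr != '\0')
		return 0;				/* empty input or trailing garbage */
	if (val == 0)
		return 0;				/* a zero-second test is meaningless */

	return (unsigned int) val;
}
```

A real option handler would print an error naming the offending switch (for example --secs-per-test) when 0 comes back, instead of silently proceeding.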
[
{
"msg_contents": "The function pg_sequence_last_value() was added to underlie the\npg_sequences view, and it's the only way I'm aware of from userspace\nto directly get the last value of a sequence globally (i.e., not\nwithin the current session like currval()/lastval()). Obviously you\ncan join to the pg_sequences view, but that's sometimes unnecessarily\ncumbersome since it doesn't expose the relid of the sequence.\n\nWhen that function got added it apparently wasn't added to the docs,\nthough I'm not sure if that was intentional or not.\n\nDoes anyone have any objections to documenting\npg_sequence_last_value() in the sequence manipulation functions doc\npage?\n\nJames\n\n\n",
"msg_date": "Thu, 6 Aug 2020 09:14:12 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Any objection to documenting pg_sequence_last_value()?"
},
{
    "msg_contents": "Hi All,\r\n\r\nI recently used pg_sequence_last_value() when working on a feature in an extension, and it would have been easier for me if there were some documentation for this function.\r\n\r\nI'd like to help document this function if there are no objections.\r\n\r\nBest,\r\nHanefi\r\n\r\n-----Original Message-----\r\nFrom: James Coleman <jtc331@gmail.com> \r\nSent: Thursday, August 6, 2020 16:14\r\nTo: pgsql-hackers <pgsql-hackers@postgresql.org>\r\nSubject: [EXTERNAL] Any objection to documenting pg_sequence_last_value()?\r\n\r\nThe function pg_sequence_last_value() was added to underlie the pg_sequences view, and it's the only way I'm aware of from userspace to directly get the last value of a sequence globally (i.e., not within the current session like currval()/lastval()). Obviously you can join to the pg_sequences view, but that's sometimes unnecessarily cumbersome since it doesn't expose the relid of the sequence.\r\n\r\nWhen that function got added it apparently wasn't added to the docs, though I'm not sure if that was intentional or not.\r\n\r\nDoes anyone have any objections to documenting\r\npg_sequence_last_value() in the sequence manipulation functions doc page?\r\n\r\nJames\r\n\r\n\r\n",
"msg_date": "Tue, 30 Mar 2021 08:37:31 +0000",
"msg_from": "Hanefi Onaldi <Hanefi.Onaldi@microsoft.com>",
"msg_from_op": false,
"msg_subject": "RE: [EXTERNAL] Any objection to documenting pg_sequence_last_value()?"
},
{
"msg_contents": "On Tue, Mar 30, 2021 at 4:37 AM Hanefi Onaldi\n<Hanefi.Onaldi@microsoft.com> wrote:\n>\n> Hi All,\n>\n> I recently used pg_sequence_last_value() when working on a feature in an extension, and it would have been easier for me if there were some documentation for this function.\n>\n> I'd like to help document this function if there are no objections.\n>\n> Best,\n> Hanefi\n>\n> -----Original Message-----\n> From: James Coleman <jtc331@gmail.com>\n> Sent: 6 Ağustos 2020 Perşembe 16:14\n> To: pgsql-hackers <pgsql-hackers@postgresql.org>\n> Subject: [EXTERNAL] Any objection to documenting pg_sequence_last_value()?\n>\n> The function pg_sequence_last_value() was added to underlie the pg_sequences view, and it's the only way I'm aware of from userspace to directly get the last value of a sequence globally (i.e., not within the current session like currval()/lastval()). Obviously you can join to the pg_sequences view, but that's sometimes unnecessarily cumbersome since it doesn't expose the relid of the sequence.\n>\n> When that function got added it apparently wasn't added to the docs, though I'm not sure if that was intentional or not.\n>\n> Does anyone have any objections to documenting\n> pg_sequence_last_value() in the sequence manipulation functions doc page?\n>\n> James\n>\n>\n\nGiven there's been no objection, I think it'd be worth submitting a\npatch (and I'd be happy to review if you're willing to author one).\n\nJames\n\n\n",
"msg_date": "Tue, 6 Apr 2021 12:13:38 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Any objection to documenting pg_sequence_last_value()?"
}
] |
[
{
"msg_contents": "I got the first draft of $SUBJECT done a little earlier than usual.\nSee\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=a2e0cf45c21afbcbc544d1aca8d51d90004aa5d9\n\nThere seemed to be more than the usual quota of commits that I decided\nnot to document because they seemed uninteresting to end users, such\nas test-only changes. If you think I omitted anything that should be\ndocumented, don't hesitate to say so.\n\nPlease send any corrections before Sunday.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Aug 2020 15:53:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Release notes for next week's back-branch releases"
}
] |
[
{
    "msg_contents": "Ashutosh Bapat noticed that WalSndWaitForWal() is setting\nwaiting_for_ping_response after sending a keepalive that does *not*\nrequest a reply.  The bad consequence is that other callers that do\nrequire a reply end up in not sending a keepalive, because they think it\nwas already sent previously.  So the whole thing gets stuck.\n\nHe found that commit 41d5f8ad734 failed to remove the setting of\nwaiting_for_ping_response after changing the \"request\" parameter\nWalSndKeepalive from true to false; that seems to have been an omission\nand it breaks the algorithm.  Thread at [1].\n\nThe simplest fix is just to remove the line that sets\nwaiting_for_ping_response, but I think it is less error-prone to have\nWalSndKeepalive set the flag itself, instead of expecting its callers to\ndo it (and know when to).  Patch attached.  Also rewords some related\ncommentary.\n\n[1] https://postgr.es/m/flat/BLU436-SMTP25712B7EF9FC2ADEB87C522DC040@phx.gbl\n\n-- \nÁlvaro Herrera                Valdivia, Chile",
"msg_date": "Thu, 6 Aug 2020 18:55:58 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "walsender waiting_for_ping spuriously set"
},
{
"msg_contents": "The patch looks good to me. Thanks for improving comments around that code.\nI like the change to set waiting_for_ping_response in WalSndKeepalive.\nThanks.\n\nOn Fri, 7 Aug 2020 at 04:26, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> Ashutosh Bapat noticed that WalSndWaitForWal() is setting\n> waiting_for_ping_response after sending a keepalive that does *not*\n> request a reply. The bad consequence is that other callers that do\n> require a reply end up in not sending a keepalive, because they think it\n> was already sent previously. So the whole thing gets stuck.\n>\n> He found that commit 41d5f8ad734 failed to remove the setting of\n> waiting_for_ping_response after changing the \"request\" parameter\n> WalSndKeepalive from true to false; that seems to have been an omission\n> and it breaks the algorithm. Thread at [1].\n>\n> The simplest fix is just to remove the line that sets\n> waiting_for_ping_response, but I think it is less error-prone to have\n> WalSndKeepalive set the flag itself, instead of expecting its callers to\n> do it (and know when to). Patch attached. Also rewords some related\n> commentary.\n>\n> [1]\n> https://postgr.es/m/flat/BLU436-SMTP25712B7EF9FC2ADEB87C522DC040@phx.gbl\n>\n> --\n> Álvaro Herrera Valdivia, Chile\n>\n\n\n-- \nBest Wishes,\nAshutosh\n\nThe patch looks good to me. Thanks for improving comments around that code. I like the change to set waiting_for_ping_response in WalSndKeepalive. Thanks.On Fri, 7 Aug 2020 at 04:26, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:Ashutosh Bapat noticed that WalSndWaitForWal() is setting\nwaiting_for_ping_response after sending a keepalive that does *not*\nrequest a reply. The bad consequence is that other callers that do\nrequire a reply end up in not sending a keepalive, because they think it\nwas already sent previously. 
So the whole thing gets stuck.\n\nHe found that commit 41d5f8ad734 failed to remove the setting of\nwaiting_for_ping_response after changing the \"request\" parameter\nWalSndKeepalive from true to false; that seems to have been an omission\nand it breaks the algorithm. Thread at [1].\n\nThe simplest fix is just to remove the line that sets\nwaiting_for_ping_response, but I think it is less error-prone to have\nWalSndKeepalive set the flag itself, instead of expecting its callers to\ndo it (and know when to). Patch attached. Also rewords some related\ncommentary.\n\n[1] https://postgr.es/m/flat/BLU436-SMTP25712B7EF9FC2ADEB87C522DC040@phx.gbl\n\n-- \nÁlvaro Herrera Valdivia, Chile\n-- Best Wishes,Ashutosh",
"msg_date": "Fri, 7 Aug 2020 11:08:50 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: walsender waiting_for_ping spuriously set"
},
{
"msg_contents": "I just noticed that part of this comment I'm modifying:\n\n> @@ -1444,17 +1444,13 @@ WalSndWaitForWal(XLogRecPtr loc)\n> \t\t * We only send regular messages to the client for full decoded\n> \t\t * transactions, but a synchronous replication and walsender shutdown\n> \t\t * possibly are waiting for a later location. So, before sleeping, we\n> -\t\t * send a ping containing the flush location. If the receiver is\n> -\t\t * otherwise idle, this keepalive will trigger a reply. Processing the\n> -\t\t * reply will update these MyWalSnd locations.\n> +\t\t * send a ping containing the flush location. A reply from standby is\n> +\t\t * not needed and would be wasteful.\n\nwas added very recently, in f246ea3b2a5e (\"In caught-up logical\nwalsender, sleep only in WalSndWaitForWal().\"). Added Noah to CC.\n\nI think the walreceiver will only send a reply if\nwal_receiver_status_interval is set to a nonzero value. I don't\nunderstand what reason could there possibly be for setting this\nparameter to zero, but it seems better to be explicit about it, as this\ncode is confusing enough.\n\nI'm thinking in keeping the sentences that were added in that commit,\nmaybe like so:\n\n> \t\t * We only send regular messages to the client for full decoded\n> \t\t * transactions, but a synchronous replication and walsender shutdown\n> \t\t * possibly are waiting for a later location. So, before sleeping, we\n> +\t\t * send a ping containing the flush location. A reply from standby is\n> +\t\t * not needed and would be wasteful most of the time,\n> +\t\t * but if the receiver is otherwise idle and walreceiver status messages\n> +\t\t * are enabled, this keepalive will trigger a reply. 
Processing the\n> +\t\t * reply will update these MyWalSnd locations.\n\n(Also, the comment would be updated all the way back to 9.5, even if\nf246ea3b2a5e itself was not.)\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 7 Aug 2020 18:55:12 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: walsender waiting_for_ping spuriously set"
},
{
    "msg_contents": "On 2020-Aug-07, Alvaro Herrera wrote:\n\n> I'm thinking in keeping the sentences that were added in that commit,\n> maybe like so:\n> \n> > \t\t * We only send regular messages to the client for full decoded\n> > \t\t * transactions, but a synchronous replication and walsender shutdown\n> > \t\t * possibly are waiting for a later location. So, before sleeping, we\n> > +\t\t * send a ping containing the flush location. A reply from standby is\n> > +\t\t * not needed and would be wasteful most of the time,\n> > +\t\t * but if the receiver is otherwise idle and walreceiver status messages\n> > +\t\t * are enabled, this keepalive will trigger a reply.  Processing the\n> > +\t\t * reply will update these MyWalSnd locations.\n\nAfter rereading this a few more times, I think it's OK as Noah had it,\nso I'll just use his wording.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 7 Aug 2020 19:18:29 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: walsender waiting_for_ping spuriously set"
},
{
    "msg_contents": "Pushed.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 8 Aug 2020 12:42:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: walsender waiting_for_ping spuriously set"
},
{
"msg_contents": "On 2020-08-06 18:55:58 -0400, Alvaro Herrera wrote:\n> Ashutosh Bapat noticed that WalSndWaitForWal() is setting\n> waiting_for_ping_response after sending a keepalive that does *not*\n> request a reply. The bad consequence is that other callers that do\n> require a reply end up in not sending a keepalive, because they think it\n> was already sent previously. So the whole thing gets stuck.\n> \n> He found that commit 41d5f8ad734 failed to remove the setting of\n> waiting_for_ping_response after changing the \"request\" parameter\n> WalSndKeepalive from true to false; that seems to have been an omission\n> and it breaks the algorithm. Thread at [1].\n> \n> The simplest fix is just to remove the line that sets\n> waiting_for_ping_response, but I think it is less error-prone to have\n> WalSndKeepalive set the flag itself, instead of expecting its callers to\n> do it (and know when to). Patch attached. Also rewords some related\n> commentary.\n\nThanks for diagnosis and fix!\n\n- Andres\n\n\n",
"msg_date": "Mon, 10 Aug 2020 17:33:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: walsender waiting_for_ping spuriously set"
}
] |
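The bug and fix described at the top of the thread above can be modeled with a pair of stubs. The function names mirror the walsender code, but the bodies below are a toy sketch that only tracks the flag; no protocol messages are built or sent:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Minimal model of the fix: instead of expecting every caller to
 * remember to set waiting_for_ping_response after sending a keepalive
 * that requests a reply, the send routine sets the flag itself.
 */
static bool waiting_for_ping_response = false;
static int	keepalives_sent = 0;

static void
WalSndKeepalive(bool requestReply)
{
	keepalives_sent++;			/* stand-in for building the 'k' message */

	/*
	 * Only remember that we are waiting when we actually asked for a
	 * reply.  Setting the flag after a no-reply keepalive (the leftover
	 * line from commit 41d5f8ad734) made later callers skip their
	 * keepalives, believing one was already outstanding.
	 */
	if (requestReply)
		waiting_for_ping_response = true;
}

static void
WalSndKeepaliveIfNecessary(void)
{
	/* callers that need a reply send one only if none is outstanding */
	if (!waiting_for_ping_response)
		WalSndKeepalive(true);
}
```

If the no-reply ping in WalSndWaitForWal() had set the flag (the buggy behavior), WalSndKeepaliveIfNecessary() would skip its reply-requesting keepalive and the connection could get stuck, which is exactly what Ashutosh observed.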
[
{
    "msg_contents": "While reviewing an amcheck patch of Andrey Borodin's, I noticed that\nit had a problem that I tied back to btree_xlog_split()'s loose\napproach to locking buffers compared to the primary [1] (i.e. compared\nto _bt_split()). This created a problem for the proposed new check that is\nnot unlike the problem that backwards scans running on standbys had\nwith \"concurrent\" page deletions -- that was a legitimate bug that was\nfixed in commit 9a9db08a.\n\nI'm starting to think that we should bite the bullet and not release\nall same-level locks within btree_xlog_split() until the very end,\nalong with the existing right sibling page whose left link we need to\nupdate. In other words, \"couple\" the locks in the manner of\n_bt_split(), though only for same-level pages (just like\nbtree_xlog_unlink_page() after commit 9a9db08a). That would make it\nokay to commit Andrey's patch, but it also seems like a good idea on\ngeneral principle. (Note that I'm not proposing cross-level lock\ncoupling on replicas, which seems unnecessary. Actually it's not\nreally possible to do that because cross-level locks span multiple\natomic actions/WAL records on the primary.)\n\nPresumably the lock coupling on standbys will have some overhead, but\nthat seems essentially the same as the overhead on the primary. The\nobvious case to test (to evaluate the overhead of being more\nconservative in btree_xlog_split()) is a workload where we continually\nsplit the rightmost page. That's not actually relevant, though, since\nthere is no right sibling to update when we split the rightmost page.\n\nMy sense is that the current approach to locking taken within\nbtree_xlog_split() is kind of an accident, not something that was\npursued as a special optimization for the REDO routine at some point.\nCommit 3bbf668d certainly creates that impression. But I might have\nmissed something.\n\n[1] postgr.es/m/CAH2-Wzm3=SLwu5=z8qG6UBpCemZW3dUNXWbX-cpXCgb=y3OhZw@mail.gmail.com\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 6 Aug 2020 17:02:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Should the nbtree page split REDO routine's locking work more like\n the locking on the primary?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I'm starting to think that we should bite the bullet and not release\n> all same-level locks within btree_xlog_split() until the very end,\n> along with the existing right sibling page whose left link we need to\n> update.\n\n+1 for making this more like what happens in original execution (\"on the\nprimary\", to use your wording). Perhaps what you suggest here is still\nnot enough like the original execution, but it sounds closer.\n\n> My sense is that the current approach to locking taken within\n> btree_xlog_split() is kind of an accident, not something that was\n> pursued as a special optimization for the REDO routine at some point.\n> Commit 3bbf668d certainly creates that impression. But I might have\n> missed something.\n\nAs the commit message for 3bbf668d explains, the initial situation for\nall the replay code was that it executed by itself in crash recovery and\ndidn't need to bother with locks at all. I think that it did take some\nlocks even then, but that was because of code sharing with the primary\nexecution path rather than being something we wanted. Once we invented\nHot Standby that situation had to be improved. It seems to me that the\ngoal now needs to be to replicate the primary-execution buffer locking\nbehavior in any case where we can't prove that something simpler is safe.\n3bbf668d did not claim to achieve that everywhere, and it didn't; it\ndoesn't surprise me that there's work left to be done.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Aug 2020 21:08:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Should the nbtree page split REDO routine's locking work more\n like the locking on the primary?"
},
{
"msg_contents": "On Thu, Aug 6, 2020 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> +1 for making this more like what happens in original execution (\"on the\n> primary\", to use your wording). Perhaps what you suggest here is still\n> not enough like the original execution, but it sounds closer.\n\nIt won't be the same as the original execution, exactly -- I am only\nthinking of holding on to same-level page locks (the original page,\nits new right sibling, and the original right sibling). I suppose that\nit's possible to go further than this in one rarer case (when clearing\nincomplete split flag one level down), but for the most part it isn't\neven possible to follow original execution's approach to locking in\nevery detail. Clearly it's not okay for the startup process to hold\nbuffer locks across replay of the first and second phase of a split,\nbut that's what it would take to follow original execution 100%\nfaithfully -- there are two WAL records involved.\n\nI am quite confident that there won't be any remaining problems\nprovided we follow the original execution's approach to locking within\neach level of the tree -- that's enough. Anything that runs during\nrecovery won't care about cross-level differences, aside from the\nobvious (scans may have to move right to recover from concurrent\nsplits).\n\n> As the commit message for 3bbf668d explains, the initial situation for\n> all the replay code was that it executed by itself in crash recovery and\n> didn't need to bother with locks at all. I think that it did take some\n> locks even then, but that was because of code sharing with the primary\n> execution path rather than being something we wanted.\n\nMakes sense.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 6 Aug 2020 19:00:46 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Should the nbtree page split REDO routine's locking work more\n like the locking on the primary?"
},
{
"msg_contents": "On Thu, Aug 6, 2020 at 7:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Aug 6, 2020 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > +1 for making this more like what happens in original execution (\"on the\n> > primary\", to use your wording). Perhaps what you suggest here is still\n> > not enough like the original execution, but it sounds closer.\n>\n> It won't be the same as the original execution, exactly -- I am only\n> thinking of holding on to same-level page locks (the original page,\n> its new right sibling, and the original right sibling).\n\nI pushed a commit that reorders the lock acquisitions within\nbtree_xlog_unlink_page() -- they're now consistent with _bt_split()\n(at least among sibling pages involved in the page split).\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 7 Aug 2020 15:28:46 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Should the nbtree page split REDO routine's locking work more\n like the locking on the primary?"
},
{
    "msg_contents": "\n\n> On 8 Aug 2020, at 03:28, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Thu, Aug 6, 2020 at 7:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> On Thu, Aug 6, 2020 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> +1 for making this more like what happens in original execution (\"on the\n>>> primary\", to use your wording). Perhaps what you suggest here is still\n>>> not enough like the original execution, but it sounds closer.\n>> \n>> It won't be the same as the original execution, exactly -- I am only\n>> thinking of holding on to same-level page locks (the original page,\n>> its new right sibling, and the original right sibling).\n> \n> I pushed a commit that reorders the lock acquisitions within\n> btree_xlog_unlink_page() -- they're now consistent with _bt_split()\n> (at least among sibling pages involved in the page split).\n\nSounds great, thanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 8 Aug 2020 13:35:09 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Should the nbtree page split REDO routine's locking work more\n like the locking on the primary?"
}
] |
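The same-level lock coupling proposed in the thread above can be illustrated with a toy model. The buffer identities and the held[] bookkeeping are invented for the sketch; the real code locks shared buffers via LWLocks and replays actual page images:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of replaying a btree page split with coupled sibling locks:
 * the original page, its new right sibling, and the old right sibling
 * all stay locked until the old right sibling's left link is updated,
 * instead of releasing each buffer as soon as its own changes are
 * replayed.
 */
enum { LEFT = 0, NEW_RIGHT = 1, OLD_RIGHT = 2, NBUFS = 3 };

static bool held[NBUFS];
static int	held_at_link_update = 0;

static void lock_buf(int b)   { held[b] = true; }
static void unlock_buf(int b) { held[b] = false; }

static void
replay_split(void)
{
	int			i, n = 0;

	lock_buf(LEFT);				/* restore the original (left) page */
	lock_buf(NEW_RIGHT);		/* restore the new right sibling */
	lock_buf(OLD_RIGHT);		/* old right sibling, left link to fix */

	/* ... replay the page contents here ... */

	/* update the old right sibling's left link while everything is held */
	for (i = 0; i < NBUFS; i++)
		if (held[i])
			n++;
	held_at_link_update = n;

	/* release everything together, as _bt_split() does on the primary */
	unlock_buf(OLD_RIGHT);
	unlock_buf(NEW_RIGHT);
	unlock_buf(LEFT);
}
```

The point the assertion below captures is that the left link is updated while all three sibling buffers are still held, mirroring _bt_split() on the primary rather than the looser ordering the old REDO routine used.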
[
{
    "msg_contents": "Hi all,\n\nThis is a continuation of the work that has been previously discussed\nhere, resulting mainly in e3931d0 for pg_attribute and pg_shdepend:\nhttps://www.postgresql.org/message-id/20190213182737.mxn6hkdxwrzgxk35@alap3.anarazel.de\n\nI have been looking at the amount of work that could be done\nindependently for pg_depend, and attached are two patches:\n- 0001 switches recordMultipleDependencies() to use multi-inserts.\nContrary to pg_attribute and pg_shdepend, the number of items to\ninsert is known in advance, but some of them can be skipped if known\nas a pinned dependency.  The data insertion is capped at 64kB, and the\nnumber of slots is basically calculated from the maximum cap and the\nnumber of items to insert.\n- 0002 switches a bunch of code paths to make use of multi-inserts\ninstead of individual calls to recordDependencyOn(), grouping the\ninsertions of dependencies of the same type.  This relies on the\nexisting set of APIs to manipulate a set of object addresses, without\nany new addition there (no reset-like routine either as I noticed that\nit would have been useful in only one place).  The set of changes is\nhonestly a bit bulky here.\n\nI am adding this thread to the next commit fest.  Thoughts are\nwelcome.\n\nThanks,\n--\nMichael",
"msg_date": "Fri, 7 Aug 2020 15:16:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On Fri, Aug 07, 2020 at 03:16:19PM +0900, Michael Paquier wrote:\n> I am adding this thread to the next commit fest. Thoughts are\n> welcome.\n\nForgot to mention that this is based on some initial work from Daniel,\nand that we have discussed the issue before I worked on it.\n--\nMichael",
"msg_date": "Fri, 7 Aug 2020 15:21:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-07 15:16:19 +0900, Michael Paquier wrote:\n> From cd117fa88938c89ac953a5e3c036828337150b07 Mon Sep 17 00:00:00 2001\n> From: Michael Paquier <michael@paquier.xyz>\n> Date: Fri, 7 Aug 2020 10:57:40 +0900\n> Subject: [PATCH 1/2] Use multi-inserts for pg_depend\n> \n> This is a follow-up of the work done in e3931d01. This case is a bit\n> different than pg_attribute and pg_shdepend: the maximum number of items\n> to insert is known in advance, but there is no need to handle pinned\n> dependencies. Hence, the base allocation for slots is done based on the\n> number of items and the maximum allowed with a cap at 64kB, and items\n> are initialized once used to minimize the overhead of the operation.\n> \n> Author: Daniel Gustafsson, Michael Paquier\n> Discussion: https://postgr.es/m/XXX\n> ---\n> src/backend/catalog/pg_depend.c | 95 ++++++++++++++++++++++++---------\n> 1 file changed, 69 insertions(+), 26 deletions(-)\n> \n> diff --git a/src/backend/catalog/pg_depend.c b/src/backend/catalog/pg_depend.c\n> index 70baf03178..596f0c5e29 100644\n> --- a/src/backend/catalog/pg_depend.c\n> +++ b/src/backend/catalog/pg_depend.c\n> @@ -47,6 +47,12 @@ recordDependencyOn(const ObjectAddress *depender,\n> \trecordMultipleDependencies(depender, referenced, 1, behavior);\n> }\n> \n> +/*\n> + * Cap the maximum amount of bytes allocated for recordMultipleDependencies()\n> + * slots.\n> + */\n> +#define MAX_PGDEPEND_INSERT_BYTES\t65535\n> +\n\nDo we really want to end up with several separate defines for different\ntype of catalog batch inserts? That doesn't seem like a good\nthing. Think there should be a single define for all catalog bulk\ninserts.\n\n\n> /*\n> * Record multiple dependencies (of the same kind) for a single dependent\n> * object. 
This has a little less overhead than recording each separately.\n> @@ -59,10 +65,10 @@ recordMultipleDependencies(const ObjectAddress *depender,\n> {\n> \tRelation\tdependDesc;\n> \tCatalogIndexState indstate;\n> -\tHeapTuple\ttup;\n> -\tint\t\t\ti;\n> -\tbool\t\tnulls[Natts_pg_depend];\n> -\tDatum\t\tvalues[Natts_pg_depend];\n> +\tint\t\t\tslotCount, i;\n> +\tTupleTableSlot **slot;\n> +\tint\t\t\tnslots, max_slots;\n> +\tbool\t\tslot_init = true;\n> \n> \tif (nreferenced <= 0)\n> \t\treturn;\t\t\t\t\t/* nothing to do */\n> @@ -76,11 +82,18 @@ recordMultipleDependencies(const ObjectAddress *depender,\n> \n> \tdependDesc = table_open(DependRelationId, RowExclusiveLock);\n> \n> +\t/*\n> +\t * Allocate the slots to use, but delay initialization until we know that\n> +\t * they will be used.\n> +\t */\n> +\tmax_slots = Min(nreferenced,\n> +\t\t\t\t\tMAX_PGDEPEND_INSERT_BYTES / sizeof(FormData_pg_depend));\n> +\tslot = palloc(sizeof(TupleTableSlot *) * max_slots);\n> +\n> \t/* Don't open indexes unless we need to make an update */\n> \tindstate = NULL;\n> \n> -\tmemset(nulls, false, sizeof(nulls));\n> -\n> +\tslotCount = 0;\n> \tfor (i = 0; i < nreferenced; i++, referenced++)\n> \t{\n> \t\t/*\n> @@ -88,38 +101,68 @@ recordMultipleDependencies(const ObjectAddress *depender,\n> \t\t * need to record dependencies on it. 
This saves lots of space in\n> \t\t * pg_depend, so it's worth the time taken to check.\n> \t\t */\n> -\t\tif (!isObjectPinned(referenced, dependDesc))\n> +\t\tif (isObjectPinned(referenced, dependDesc))\n> +\t\t\tcontinue;\n> +\n\nHm, would it be better to first iterate over the dependencies, compute\nthe number of dependencies to be inserted, and then go ahead and create\nthe right number of slots?\n\n\n> From fcc0a11e9fc94d2fedc71dd10ba2a23713225963 Mon Sep 17 00:00:00 2001\n> From: Michael Paquier <michael@paquier.xyz>\n> Date: Fri, 7 Aug 2020 15:14:51 +0900\n> Subject: [PATCH 2/2] Switch to multi-insert dependencies for many code paths\n> \n> This makes use of the new APIs to insert dependencies in groups, instead\n> of doing the operation one-by-one.\n\nSeems several places have been modified to new APIs despite only\ncovering a single dependency. Perhaps worth mentioning?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Aug 2020 17:32:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On Mon, Aug 10, 2020 at 05:32:21PM -0700, Andres Freund wrote:\n> Do we really want to end up with several separate defines for different\n> type of catalog batch inserts? That doesn't seem like a good\n> thing. Think there should be a single define for all catalog bulk\n> inserts.\n\nUnlikely so, but I kept them separate to potentially lower the\nthreshold of 64kB for catalog rows that have a lower average size than\npg_attribute. catalog.h would be the natural location I would choose\nfor a single definition.\n\n> Hm, would it be better to first iterate over the dependencies, compute\n> the number of dependencies to be inserted, and then go ahead and create\n> the right number of slots?\n\nNot sure about that, but I am not wedded to the approach of the patch\neither as the most consuming portion is the slot initialization/reset.\nComputing the number of items in advance forces to go through the\ndependency list twice, while doing a single pass makes the code\nallocate 64 extra bytes for each slot not used. It is of course\nbetter to avoid calling isObjectPinned() twice for each dependency, so\nwe could use a bitmap, or just simply build a secondary list of\ndependencies that we are sure will be inserted after doing a first\npass to discard the unwanted entries.\n\n> Seems several places have been modified to new APIs despite only\n> covering a single dependency. Perhaps worth mentioning?\n\nYeah, I need to think more about this commit message.\n--\nMichael",
"msg_date": "Tue, 11 Aug 2020 14:59:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On Tue, Aug 11, 2020 at 1:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Aug 10, 2020 at 05:32:21PM -0700, Andres Freund wrote:\n> > Do we really want to end up with several separate defines for different\n> > type of catalog batch inserts? That doesn't seem like a good\n> > thing. Think there should be a single define for all catalog bulk\n> > inserts.\n>\n> Unlikely so, but I kept them separate to potentially lower the\n> threshold of 64kB for catalog rows that have a lower average size than\n> pg_attribute.\n\nUh, why would we want to do that?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 11 Aug 2020 11:02:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On 2020-Aug-11, Robert Haas wrote:\n\n> On Tue, Aug 11, 2020 at 1:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Mon, Aug 10, 2020 at 05:32:21PM -0700, Andres Freund wrote:\n> > > Do we really want to end up with several separate defines for different\n> > > type of catalog batch inserts? That doesn't seem like a good\n> > > thing. Think there should be a single define for all catalog bulk\n> > > inserts.\n> >\n> > Unlikely so, but I kept them separate to potentially lower the\n> > threshold of 64kB for catalog rows that have a lower average size than\n> > pg_attribute.\n> \n> Uh, why would we want to do that?\n\nYeah.  As I understand, the only reason to have this number is to avoid\nan arbitrarily large number of entries created as a single multi-insert\nWAL record ... but does that really ever happen?  I guess if you create\na table with some really complicated schema you might get, say, a\nhundred pg_depend rows at once.  But to fill eight complete pages of\npg_depend entries sounds astoundingly ridiculous already -- I'd say it's\njust an easy way to spell \"infinity\" for this.  Tweaking one infinity\nvalue to become some other infinity value sounds useless.\n\nSo I agree with what Andres said.  Let's have just one such define and\nbe done with it.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 12 Aug 2020 19:52:42 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On Wed, Aug 12, 2020 at 07:52:42PM -0400, Alvaro Herrera wrote:\n> Yeah. As I understand, the only reason to have this number is to avoid\n> an arbitrarily large number of entries created as a single multi-insert\n> WAL record ... but does that really ever happen? I guess if you create\n> a table with some really complicated schema you might get, say, a\n> hundred pg_depend rows at once. But to fill eight complete pages of\n> pg_depend entries sounds astoundingly ridiculous already -- I'd say it's\n> just an easy way to spell \"infinity\" for this. Tweaking one infinity\n> value to become some other infinity value sounds useless.\n> \n> So I agree with what Andres said. Let's have just one such define and\n> be done with it.\n\nOkay. Would src/include/catalog/catalog.h be a suited location for\nthis flag or somebody has a better idea?\n--\nMichael",
"msg_date": "Thu, 13 Aug 2020 13:40:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On 2020-Aug-13, Michael Paquier wrote:\n\n> Okay.  Would src/include/catalog/catalog.h be a suited location for\n> this flag or somebody has a better idea?\n\nNext to the API definition I guess, is that dependency.h?\n\n\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 13 Aug 2020 05:35:14 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 05:35:14AM -0400, Alvaro Herrera wrote:\n> Next to the API definition I guess, is that dependency.h?\n\nWe need something more central, see also MAX_PGATTRIBUTE_INSERT_BYTES\nand MAX_PGSHDEPEND_INSERT_BYTES. And the definition should be named\nsomething like MAX_CATALOG_INSERT_BYTES or such I guess.\n--\nMichael",
"msg_date": "Thu, 13 Aug 2020 19:02:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On 2020-Aug-13, Michael Paquier wrote:\n\n> On Thu, Aug 13, 2020 at 05:35:14AM -0400, Alvaro Herrera wrote:\n> > Next to the API definition I guess, is that dependency.h?\n> \n> We need something more central, see also MAX_PGATTRIBUTE_INSERT_BYTES\n> and MAX_PGSHDEPEND_INSERT_BYTES.  And the definition should be named\n> something like MAX_CATALOG_INSERT_BYTES or such I guess.\n\nMAX_CATALOG_INSERT_BYTES sounds decent to me.  I mentioned dependency.h\nbecause I was uncaffeinatedly thinking that this was used with API\ndefined there -- but in reality it's used with indexing.h functions, and\nit seems to me that that file would be the place for it.\n\nLooking at the existing contents of catalog.h, I would say it does not\nfit in there.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 13 Aug 2020 11:45:52 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 11:45:52AM -0400, Alvaro Herrera wrote:\n> MAX_CATALOG_INSERT_BYTES sounds decent to me. I mentioned dependency.h\n> because I was uncaffeinatedly thinking that this was used with API\n> defined there -- but in reality it's used with indexing.h functions, and\n> it seems to me that that file would be the place for it.\n\nOK, let's live with indexing.h then.\n\nRegarding the maximum number of slots allocated. Do people like the\ncurrent approach taken by the patch to do a single loop of the\ndependency entries at the cost of more allocating perhaps too much for\nthe array holding the set of TupleTableSlots (the actual slot\ninitialization happens only if necessary)? Or would it be preferred\nto scan twice the set of dependencies, discarding pinned dependencies\nin a first scan to build the list of dependencies that would be\ninserted? This way, you can know the exact amount memory to allocated\nfor TupleTableSlots, though that's just 64B for each one of them.\n\nI have read today through the patch set of Julien and Thomas to add a\nversion string to pg_depend, and I get the impression that the\ncurrent approach taken by the patch fits better in the whole picture.\n--\nMichael",
"msg_date": "Fri, 14 Aug 2020 17:06:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On 2020-Aug-14, Michael Paquier wrote:\n\n> Regarding the maximum number of slots allocated.  Do people like the\n> current approach taken by the patch to do a single loop of the\n> dependency entries at the cost of more allocating perhaps too much for\n> the array holding the set of TupleTableSlots (the actual slot\n> initialization happens only if necessary)?  Or would it be preferred\n> to scan twice the set of dependencies, discarding pinned dependencies\n> in a first scan to build the list of dependencies that would be\n> inserted?  This way, you can know the exact amount memory to allocated\n> for TupleTableSlots, though that's just 64B for each one of them.\n\nIt seems a bit silly to worry about allocating just the exact amount\nneeded; the current approach looked fine to me.  The logic to keep track\nnumber of used slots used is baroque, though -- that could use a lot of\nsimplification.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 14 Aug 2020 14:23:16 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On Fri, Aug 14, 2020 at 02:23:16PM -0400, Alvaro Herrera wrote:\n> It seems a bit silly to worry about allocating just the exact amount\n> needed; the current approach looked fine to me.\n\nOkay, thanks.\n\n> The logic to keep track\n> number of used slots used is baroque, though -- that could use a lot of\n> simplification.\n\nWhat are you suggesting here? A new API layer to manage a set of\nslots?\n--\nMichael",
"msg_date": "Sat, 15 Aug 2020 10:50:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On Sat, Aug 15, 2020 at 10:50:37AM +0900, Michael Paquier wrote:\n> What are you suggesting here? A new API layer to manage a set of\n> slots?\n\nIt has been a couple of weeks, and I am not really sure what is the\nsuggestion here. So I would like to move on with this patch set as\nthe changes are straight-forward using the existing routines for\nobject addresses by grouping all insert dependencies of the same type.\nAre there any objections?\n\nAttached is a rebased set, where I have added in indexing.h a unique\ndefinition for the hard limit of 64kB for the amount of data that can\nbe inserted at once, based on the suggestion from Alvaro and Andres.\n--\nMichael",
"msg_date": "Mon, 31 Aug 2020 16:56:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "> On 14 Aug 2020, at 20:23, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> The logic to keep track number of used slots used is baroque, though -- that\n> could use a lot of simplification.\n\nWhat if slot_init was an integer which increments together with the loop\nvariable until max_slots is reached?  If so, maybe it should be renamed\nslot_init_count and slotCount renamed slot_stored_count to make the their use\nclearer?\n\n> On 31 Aug 2020, at 09:56, Michael Paquier <michael@paquier.xyz> wrote:\n\n> It has been a couple of weeks, and I am not really sure what is the\n> suggestion here.  So I would like to move on with this patch set as\n> the changes are straight-forward using the existing routines for\n> object addresses by grouping all insert dependencies of the same type.\n> Are there any objections?\n\nI'm obviously biased but I think this patchset is a net win.  There are more\nthings we can do in this space, but it's a good start.\n\n> Attached is a rebased set, where I have added in indexing.h a unique\n> definition for the hard limit of 64kB for the amount of data that can\n> be inserted at once, based on the suggestion from Alvaro and Andres.\n\n+#define MAX_CATALOG_INSERT_BYTES 65535\nThis name, and inclusion in a headerfile, implies that the definition is\nsomewhat generic as opposed to its actual use.  Using MULTIINSERT rather than\nINSERT in the name would clarify I reckon.\n\nA few other comments:\n\n+\t/*\n+\t * Allocate the slots to use, but delay initialization until we know that\n+\t * they will be used.\n+\t */\nI think this comment warrants a longer explanation on why they wont all be\nused, or perhaps none of them (which is the real optimization win here).\n\n+\tObjectAddresses *addrs_auto;\n+\tObjectAddresses *addrs_normal;\nIt's not for this patch, but it seems a logical next step would be to be able\nto record the DependencyType as well when collecting deps rather than having to\ncreate multiple buckets.\n\n+\t/* Normal dependency from a domain to its collation. */\n+\t/* We know the default collation is pinned, so don't bother recording it */\nIt's just moved and not introduced in this patch, but shouldn't these two lines\nbe joined into a normal block comment?\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 1 Sep 2020 11:53:36 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On Tue, Sep 01, 2020 at 11:53:36AM +0200, Daniel Gustafsson wrote:\n> On 14 Aug 2020, at 20:23, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n>> The logic to keep track number of used slots used is baroque, though -- that\n>> could use a lot of simplification.\n> \n> What if slot_init was an integer which increments together with the loop\n> variable until max_slots is reached?  If so, maybe it should be renamed\n> slot_init_count and slotCount renamed slot_stored_count to make the their use\n> clearer?\n\nGood idea, removing slot_init looks like a good thing for readability.\nAnd the same can be done for pg_shdepend.\n\n> On 31 Aug 2020, at 09:56, Michael Paquier <michael@paquier.xyz> wrote:\n> +#define MAX_CATALOG_INSERT_BYTES 65535\n> This name, and inclusion in a headerfile, implies that the definition is\n> somewhat generic as opposed to its actual use.  Using MULTIINSERT rather than\nINSERT in the name would clarify I reckon.\n\nMakes sense, I have switched to MAX_CATALOG_MULTI_INSERT_BYTES.  \n\n> A few other comments:\n> \n> +\t/*\n> +\t * Allocate the slots to use, but delay initialization until we know that\n> +\t * they will be used.\n> +\t */\n> I think this comment warrants a longer explanation on why they wont all be\n> used, or perhaps none of them (which is the real optimization win here).\n\nOkay, I have updated the comments where this formulation is used.\nDoes that look adapted to you?\n\n> +\tObjectAddresses *addrs_auto;\n> +\tObjectAddresses *addrs_normal;\n> It's not for this patch, but it seems a logical next step would be to be able\n> to record the DependencyType as well when collecting deps rather than having to\n> create multiple buckets.\n\nYeah, agreed.  I am not sure yet how to design those APIs.  One option\nis to use a set of an array with DependencyType elements, each one\nstoring a list of dependencies of the same type.\n\n> +\t/* Normal dependency from a domain to its collation. */\n> +\t/* We know the default collation is pinned, so don't bother recording it */\n> It's just moved and not introduced in this patch, but shouldn't these two lines\n> be joined into a normal block comment?\n\nOkay, done.\n--\nMichael",
"msg_date": "Thu, 3 Sep 2020 14:35:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "> On 3 Sep 2020, at 07:35, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Sep 01, 2020 at 11:53:36AM +0200, Daniel Gustafsson wrote:\n>> On 14 Aug 2020, at 20:23, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> \n>>> The logic to keep track number of used slots used is baroque, though -- that\n>>> could use a lot of simplification.\n>> \n>> What if slot_init was an integer which increments together with the loop\n>> variable until max_slots is reached? If so, maybe it should be renamed\n>> slot_init_count and slotCount renamed slot_stored_count to make the their use\n>> clearer?\n> \n> Good idea, removing slot_init looks like a good thing for readability.\n> And the same can be done for pg_shdepend.\n\nI think this version is a clear improvement. Nothing more sticks out from a\nread-through.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 3 Sep 2020 09:47:07 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On Thu, Sep 03, 2020 at 09:47:07AM +0200, Daniel Gustafsson wrote:\n> I think this version is a clear improvement. Nothing more sticks out from a\n> read-through.\n\nThanks for taking the time to look at it, Daniel. We of course could\nstill try to figure out how we could group all dependencies without\nworrying about their type, but I'd like to leave that as future work\nfor now. This is much more complex than what's proposed on this\nthread, and I am not sure if we really need to make this stuff more\ncomplex for this purpose.\n--\nMichael",
"msg_date": "Thu, 3 Sep 2020 19:19:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "> On 3 Sep 2020, at 12:19, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Sep 03, 2020 at 09:47:07AM +0200, Daniel Gustafsson wrote:\n>> I think this version is a clear improvement. Nothing more sticks out from a\n>> read-through.\n> \n> Thanks for taking the time to look at it, Daniel. We of course could\n> still try to figure out how we could group all dependencies without\n> worrying about their type, but I'd like to leave that as future work\n> for now. This is much more complex than what's proposed on this\n> thread, and I am not sure if we really need to make this stuff more\n> complex for this purpose.\n\nAgreed, I think that's a separate piece of work and discussion.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 3 Sep 2020 12:47:22 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "I agree, this version looks much better, thanks.  Two very minor things:\n\nOn 2020-Sep-03, Michael Paquier wrote:\n\n> @@ -76,11 +77,23 @@ recordMultipleDependencies(const ObjectAddress *depender,\n> \n> \tdependDesc = table_open(DependRelationId, RowExclusiveLock);\n> \n> +\t/*\n> +\t * Allocate the slots to use, but delay initialization until we know that\n> +\t * they will be used.  The slot initialization is the costly part, and the\n> +\t * exact number of dependencies inserted cannot be known in advance as it\n> +\t * depends on what is pinned by the system.\n> +\t */\n\nI'm not sure you need the second sentence in this comment; keeping the\n\"delay initialization until ...\" part seems sufficient.  If you really\nwant to highlight that initialization is costly, maybe just say \"delay\ncostly initialization\".\n\n> +\t\t/*\n> +\t\t * Record the Dependency.  Note we don't bother to check for duplicate\n> +\t\t * dependencies; there's no harm in them.\n> +\t\t */\n\nNo need to uppercase \"dependency\".  (I know this is carried forward from\nprior comment, but it was equally unnecessary there.)\n\n> \t/*\n> \t * Allocate the slots to use, but delay initialization until we know that\n> -\t * they will be used.\n> +\t * they will be used.  A full scan of pg_shdepend is done to find all the\n> +\t * dependencies from the template database to copy.  Their number is not\n> +\t * known in advance and the slot initialization is the costly part.\n> \t */\n\nAs above, this change is not needed.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 3 Sep 2020 10:50:49 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On Thu, Sep 03, 2020 at 10:50:49AM -0400, Alvaro Herrera wrote:\n> I'm not sure you need the second sentence in this comment; keeping the\n> \"delay initialization until ...\" part seems sufficient. If you really\n> want to highlight that initialization is costly, maybe just say \"delay\n> costly initialization\".\n\nThanks for the review.\n\nThis extra comment was to answer to Daniel's suggestion upthread, and\nthe simple wording you are suggesting is much better than what I did,\nso I have just added \"costly initialization\" in those two places.\n\n>> +\t\t/*\n>> +\t\t * Record the Dependency. Note we don't bother to check for duplicate\n>> +\t\t * dependencies; there's no harm in them.\n>> +\t\t */\n> \n> No need to uppercase \"dependency\". (I know this is carried forward from\n> prior comment, but it was equally unnecessary there.)\n\nThanks, fixed.\n--\nMichael",
"msg_date": "Fri, 4 Sep 2020 10:15:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
},
{
"msg_contents": "On Fri, Sep 04, 2020 at 10:15:57AM +0900, Michael Paquier wrote:\n> Thanks, fixed.\n\nWith the two comment fixes included, I have looked at both patches\nagain today, and applied them.\n--\nMichael",
"msg_date": "Sat, 5 Sep 2020 22:21:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch to multi-inserts for pg_depend"
}
] |
[
{
"msg_contents": "Hi, hackers!\n\nI'd like to propose a patch which introduces a functionality to include\nadditional columns to SPGiST index to increase speed of queries containing\nthem due to making the scans index only in this case. To date this\nfunctionality was available in GiSt and btree, I suppose the same is useful\nin SPGiST also.\n\nA few words on realisation:\n\n1. The patch is intended to be fully compatible with previous SPGiSt\nindexes so SpGist leaf tuple structure remains unchanged until the ending\nof key attribute. All changes are introduced only after it. Internal tuples\nremain unchanged at all.\n\n2. Included data is added in the form very similar to heap tuple but unlike\nthe later it should not start from MAXALIGN boundary. I.e. nulls mask (if\nexist) starts just after the key value (it doesn't need alignment). Each of\nincluded attributes start from their own typealign boundary. The goal is to\nmake leaf tuples and therefore index more compact.\n\n3. Leaf tuple header is modified to store additional per tuple flags:\na) is nullmask present - if there is at least one null value among included\nattributes of a tuple\n(Note that this nullmask apply only to include attributes as nulls\nmanagement for key attributes is already realised in SPGiSt by placing\nleafs with null keys in separate list not in the main index tree.)\nb) is there variable length values among included. If there is no and key\nattribute is also fixed-length e.g. (kd-tree, quad-tree etc.) then leaf\ntuple processing can be speed up using attcacheoff.\n\nThese bits are incorporated into unused higher bits of nextOffset in the\nheader SPGiST leaf tuple. Even if we have 64Kb pages and tuples of minimum\n12 bytes (the length of the header on 32-bit architectures) + 4 bytes\nItemIdData 14 bit for nextOffset is more than enough.\n\nAll this changes only affect private index structures so all outside\nbehavior like WAL, vacuum etc will remain unchanged.\n\nAs usual I very much appreciate your feedback\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Fri, 7 Aug 2020 15:59:41 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Covering SPGiST index"
},
{
"msg_contents": "\n\n> 7 авг. 2020 г., в 16:59, Pavel Borisov <pashkin.elfe@gmail.com> написал(а):\n> \n> As usual I very much appreciate your feedback\n\nThanks for the patch! Looks interesting.\n\nOn a first glance the whole concept of non-multicolumn index with included attributes seems...well, just difficult to understand.\nBut I expect for SP-GiST this must be single key with multiple included attributes, right?\nI couldn't find a test that checks impossibility of on 2-column SP-GiST, only few asserts about it. Is this checked somewhere else?\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 8 Aug 2020 13:44:56 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": ">\n> On a first glance the whole concept of non-multicolumn index with included\n> attributes seems...well, just difficult to understand.\n> But I expect for SP-GiST this must be single key with multiple included\n> attributes, right?\n> I couldn't find a test that checks impossibility of on 2-column SP-GiST,\n> only few asserts about it. Is this checked somewhere else?\n>\n\nYes, SpGist is by its construction a single-column index, there is no such\nthing like 2-column SP-GiST yet. In the same way like original SpGist will\nrefuse to add a second key column, this remains after modification as well,\nwith exception of columns attached by INCLUDE directive. They can be\n(INDEX_MAX_KEYS -1) pieces and they will not be used to create additional\nindex trees (as there is only one), they will be just attached to the key\ntree leafs tuple.\n\nI also little bit corrected error reporting for the case when user wants to\ninvoke index build with not one column. Thanks!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Mon, 10 Aug 2020 11:34:24 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "Also little bit corrected code formatting.\n\n> Best regards,\n> Pavel Borisov\n>\n> Postgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n>",
"msg_date": "Mon, 10 Aug 2020 17:45:58 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "Same code formatted as a patch.\n\nпн, 10 авг. 2020 г. в 17:45, Pavel Borisov <pashkin.elfe@gmail.com>:\n\n> Also little bit corrected code formatting.\n>\n>> Best regards,\n>> Pavel Borisov\n>>\n>> Postgres Professional: http://postgrespro.com\n>> <http://www.postgrespro.com>\n>>\n>",
"msg_date": "Mon, 10 Aug 2020 20:14:40 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "I added changes in documentation into the patch.\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Tue, 11 Aug 2020 12:11:59 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "вт, 11 авг. 2020 г. в 12:11, Pavel Borisov <pashkin.elfe@gmail.com>:\n\n> I added changes in documentation into the patch.\n>\n>\n> --\n> Best regards,\n> Pavel Borisov\n>\n> Postgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n>",
"msg_date": "Tue, 11 Aug 2020 22:50:46 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "With a little bugfix\n\nвт, 11 авг. 2020 г. в 22:50, Pavel Borisov <pashkin.elfe@gmail.com>:\n\n>\n>\n> вт, 11 авг. 2020 г. в 12:11, Pavel Borisov <pashkin.elfe@gmail.com>:\n>\n>> I added changes in documentation into the patch.\n>>\n>>\n>> --\n>> Best regards,\n>> Pavel Borisov\n>>\n>> Postgres Professional: http://postgrespro.com\n>> <http://www.postgrespro.com>\n>>\n>",
"msg_date": "Mon, 17 Aug 2020 20:04:32 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "Hi!\n\n> 17 авг. 2020 г., в 21:04, Pavel Borisov <pashkin.elfe@gmail.com> написал(а):\n> \n> Postgres Professional: http://postgrespro.com\n> <v6-0001-Covering-SP-GiST-index-support-for-INCLUDE-column.patch>\n\nI'm looking into the patch. I have few notes:\n\n1. I see that in src/backend/access/spgist/README you describe SP-GiST tuple as sequence of {Value, ItemPtr to heap, Included attributes}. Is it different from regular IndexTuple where tid is within TupleHeader?\n\n2. Instead of cluttering tuple->nextOffset with bit flags we could just change Tuple Header for leaf tuples with covering indexes. Interpret tuples for indexes with included attributes differently, iff it makes code cleaner. There are so many changes with SGLT_SET_OFFSET\\SGLT_GET_OFFSET that it seems viable to put some effort into research of other ways to represent two bits for null mask and varatts.\n\n3. Comment \"* SPGiST dead tuple: declaration for examining non-live tuples\" does not precede relevant code. because struct SpGistDeadTupleData was not moved.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sun, 23 Aug 2020 13:55:59 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": ">\n> I'm looking into the patch. I have few notes:\n>\n> 1. I see that in src/backend/access/spgist/README you describe SP-GiST\n> tuple as sequence of {Value, ItemPtr to heap, Included attributes}. Is it\n> different from regular IndexTuple where tid is within TupleHeader?\n>\n\nYes, the header of SpGist tuple is put down in a little bit different way\nthan index tuple. It is also intended to connect spgist leaf tuples in\nchains on a leaf page so it already have more complex layout and bigger\nsize that index tuple header.\n\nSpGist tuple header size is 12 bytes which is a maxaligned value for 32 bit\narchitectures, and key value can start just after it without any gap. This\nis of value, as unnecessary index size increase slows down performance and\nis evil anyway. The only part of this which is left now is a gap\nbetween SpGist tuple header and first value on 64 bit architecture (as\nmaxalign value in this case is 16 bytes and 4 bytes per tuple can be\nsaved). But I was discouraged to change this on the reason of binary\ncompatibility with indexes built before and complexity of the change also,\nas quite many things in the code do depend on this maxaligned header (for\ninner and dead tuples also).\n\nAnother difference is that SpGist nulls mask is inserted after the key\nvalue before the first included one and apply only to included values. It\nis not needed for key values, as null key values in SpGist are stored in\nseparate tree, and it is not needed to mark it null second time. Also nulls\nmask size in Spgist does depend on the number of included values in a\ntuple, unlike in IndexTuple which contains redundant nulls mask for all\npossible INDEX_MAX_KEYS. In certain cases we can store nulls mask in free\nbytes after key value before typealign of first included value. (E.g. if\nkey value is varchar (radix tree) statistically we have only 1/8 of keys\nfinishing exactly an maxalign, the others will have a natural gap for nulls\nmask.)\n\n2. Instead of cluttering tuple->nextOffset with bit flags we could just\n> change Tuple Header for leaf tuples with covering indexes. Interpret tuples\n> for indexes with included attributes differently, iff it makes code\n> cleaner. There are so many changes with SGLT_SET_OFFSET\\SGLT_GET_OFFSET\n> that it seems viable to put some effort into research of other ways to\n> represent two bits for null mask and varatts.\n>\n\nOf course SpGist header can be done different for index with and without\nincluded columns. I see two reasons against this:\n1. It will be needed to integrate many ifs and in many places keep in mind\nwhether the index contains included values. It is expected to be much more\ncode than now and not only in the parts which integrates included values to\nleaf tuples. I think this vast changes can puzzle reader much more than\njust two small macros evenly copy-pasted in the code.\n2. I also see no need to increase SpGist tuple size just for inserting two\nbits which are now stored free of charge. I consulted with bit flags\nstorage in IndexTupleData.t_tid and did it in a similar way. Macros for\nGET/SET are basically needed to make bit flags and offset modification\nindependent and safe in any place of a code.\n\nI added some extra comments and mentions in manual to make all the things\nclear (see v7 patch)\n\n\n> 3. Comment \"* SPGiST dead tuple: declaration for examining non-live\n> tuples\" does not precede relevant code. because struct SpGistDeadTupleData\n> was not moved.\n\n\nYou are right, thank you! Corrected this and also removed some unnecessary\ndeclarations.\n\nThank you for your attention to the patch!",
"msg_date": "Mon, 24 Aug 2020 17:19:37 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "On 24.08.2020 16:19, Pavel Borisov wrote:\n>\n> I added some extra comments and mentions in manual to make all the \n> things clear (see v7 patch)\n\nThe patch implements the proposed functionality, passes tests, and in \ngeneral looks good to me.\nI don't mind using a macro to differentiate tuples with and without \nincluded attributes. Any approach will require code changes. Though, I \ndon't have a strong opinion about that.\n\nA bit of nitpicking:\n\n1) You mention backward compatibility in some comments. But, after this \npatch will be committed, it will be uneasy to distinct new and old \nphrases. So I suggest to rephrase them. You can either refer a \nspecific version or just call it \"compatibility with indexes without \nincluded attributes\".\n\n2) SpgLeafSize() function name seems misleading, as it actually refers \nto a tuple's size, not a leaf page. I suggest to rename it to \nSpgLeafTupleSize().\n\n3) I didn't quite get the meaning of the assertion, that is added in a \nfew places:\n Assert(so->state.includeTupdesc->natts);\nShould it be Assert(so->state.includeTupdesc->natts > 1) ?\n\n4) There are a few typos in comments and docs:\ns/colums/columns\ns/include attribute/included attribute\n\nand so on.\n\n5) This comment in index_including.sql is outdated:\n * 7. Check various AMs. All but btree and gist must fail.\n\n6) New test lacks SET enable_seqscan TO off;\nin addition to SET enable_bitmapscan TO off;\n\nI also wonder, why both index_including_spgist.sql and \nindex_including.sql tests are stable without running 'vacuum analyze' \nbefore the EXPLAIN that shows Index Only Scan plan. 
Is autovacuum just \nalways fast enough to fill a visibility map, or I miss something?\n\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 27 Aug 2020 01:03:49 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": ">\n> 3) I didn't quite get the meaning of the assertion, that is added in a few\n> places:\n> Assert(so->state.includeTupdesc->natts);\n> Should it be Assert(so->state.includeTupdesc->natts > 1) ?\n>\nIt is rather Assert(so->state.includeTupdesc->natts > 0) as INCLUDE tuple\ndescriptor should not be initialized and filled in case of index without\nINCLUDE attributes and doesn't contain any info about key attribute which\nis processed by SpGist existing way separately for different SpGist tuple\ntypes i.e. leaf, prefix=inner and label tuples. So only INCLUDE attributes\nare counted there. This and similar Asserts are for the case includeTupdesc\nbecomes mistakenly initialized by some future code change.\n\nI completely agree with all the other suggestions and made corrections (see\nv8). Thank you very much for your review!\nAlso there is a separate patch 0002 to add VACUUM ANALYZE to\nindex_including test which is not necessary for covering spgist.\n\nOne more point to note: in spgist_private.h I needed to shift down whole\nblock between\n*\"typedef struct SpGistSearchItem\"*\n*and *\n*\"} SpGistCache;\"*\nto position it below tuples types declarations to insert pointer\n\"SpGistLeafTuple leafTuple\"; into struct SpGistSearchItem. This is the only\nchange in this block and I apologize for possible inconvenience to review\nthis change.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Thu, 27 Aug 2020 20:03:57 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "\n\n> On 27 Aug 2020, at 21:03, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> see v8\n\nFor me the only concerning point is putting nullmask and varatt bits into tuple->nextOffset.\nBut, probably, we can go with this.\n\nBut let's change macro a bit. When I see\nSGLT_SET_OFFSET(leafTuple->nextOffset, InvalidOffsetNumber);\nI expect that leafTuple->nextOffset is a function argument passed by value and will not be changed.\nFor example see ItemPointerSetOffsetNumber() - it's not exposing ip_posid.\n\nAlso, I'd propose instead of\n>*(leafChainDatums + i * natts) and leafChainIsnulls + i * natts\nusing something like\n>int some_index = i * natts;\n>leafChainDatumsp[some_index] and &leafChainIsnulls[some_index]\nBut, probably, it's a matter of taste...\n\nAlso I'm not sure whether it would be helpful to use instead of\n>isnull[0] and leafDatum[0]\nmore complex \n>#define SpgKeyIndex 0\n>isnull[SpgKeyIndex] and leafDatum[SpgKeyIndex]\nThere are so many [0] in the patch...\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sun, 30 Aug 2020 18:01:19 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": ">\n> But let's change macro a bit. When I see\n> SGLT_SET_OFFSET(leafTuple->nextOffset, InvalidOffsetNumber);\n> I expect that leafTuple->nextOffset is function argument by value and will\n> not be changed.\n> For example see ItemPointerSetOffsetNumber() - it's not exposing ip_posid.\n>\n> Also, I'd propose instead of\n> >*(leafChainDatums + i * natts) and leafChainIsnulls + i * natts\n> using something like\n> >int some_index = i * natts;\n> >leafChainDatumsp[some_index] and &leafChainIsnulls[some_index]\n> But, probably, it's a matter of taste...\n>\n> Also I'm not sure would it be helpful to use instead of\n> >isnull[0] and leafDatum[0]\n> more complex\n> >#define SpgKeyIndex 0\n> >isnull[SpgKeyIndex] and leafDatum[SpgKeyIndex]\n> There is so many [0] in the patch...\n>\nI agree with all of your proposals and integrated them into v9.\nThank you very much!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Mon, 31 Aug 2020 15:57:56 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "\n\n> On 31 Aug 2020, at 16:57, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> I agree with all of your proposals and integrated them into v9.\n\nI have a wild idea of renaming nextOffset in SpGistLeafTupleData.\nOr at least documenting in comments that this field is more than just an offset.\n\nThis looks like an assert rather than a real runtime check in spgLeafTupleSize():\n\n+\t\tif (state->includeTupdesc->natts + 1 >= INDEX_MAX_KEYS)\n+\t\t\tereport(ERROR,\n+\t\t\t\t\t(errcode(ERRCODE_TOO_MANY_COLUMNS),\n+\t\t\t\t\t errmsg(\"number of index columns (%d) exceeds limit (%d)\",\n+\t\t\t\t\t\t\tstate->includeTupdesc->natts, INDEX_MAX_KEYS)));\nEven if you go with the check, the number of columns is state->includeTupdesc->natts + 1.\n\nAlso I'd refactor this\n+\t/* Form descriptor for INCLUDE columns if any */\n+\tif (IndexRelationGetNumberOfAttributes(index) > 1)\n+\t{\n+\t\tint\t\t\ti;\n+\n+\t\tcache->includeTupdesc = CreateTemplateTupleDesc(\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\tIndexRelationGetNumberOfAttributes(index) - 1);\n \n+\t\tfor (i = 0; i < IndexRelationGetNumberOfAttributes(index) - 1; i++)\n+\t\t{\n+\t\t\tTupleDescInitEntry(cache->includeTupdesc, i + 1, NULL,\n+\t\t\t\t\t\t\t TupleDescAttr(index->rd_att, i + 1)->atttypid,\n+\t\t\t\t\t\t\t -1, 0);\n+\t\t}\n+\t}\n+\telse\n+\t\tcache->includeTupdesc = NULL;\ninto something like\ncache->includeTupdesc = NULL;\nfor (i = 0; i < IndexRelationGetNumberOfAttributes(index) - 1; i++)\n{\n if (cache->includeTupdesc == NULL)\n\t// init tuple description\n // init entry\n}\nBut, probably it's only a matter of taste.\n\nBeside this, I think the patch is ready for committer. If Anastasia has no objections, let's flip the CF entry state.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 2 Sep 2020 17:18:09 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": ">\n> I have a wild idea of renaming nextOffset in SpGistLeafTupleData.\n> Or at least documenting in comments that this field is more than just an\n> offset.\n>\nSeems reasonable as previous changes localized mentions of nextOffset only\nto leaf tuple definition and access macros. So I've done this renaming.\n\n\n> This looks like assert rather than real runtime check in spgLeafTupleSize()\n>\n> + if (state->includeTupdesc->natts + 1 >= INDEX_MAX_KEYS)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_TOO_MANY_COLUMNS),\n> + errmsg(\"number of index columns\n> (%d) exceeds limit (%d)\",\n> +\n> state->includeTupdesc->natts, INDEX_MAX_KEYS)));\n> Even if you go with check, number of columns is\n> state->includeTupdesc->natts + 1.\n>\nI placed this check only once on SpGist state creation and replaced the\nother checks to asserts.\n\n\n> Also I'd refactor this\n> + /* Form descriptor for INCLUDE columns if any */\n>\nAlso done. Thanks a lot!\nSee v10.\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Wed, 2 Sep 2020 19:42:57 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> [ v10-0001-Covering-SP-GiST-index-support-for-INCLUDE-colum.patch ]\n\nI've started to review this, and I've got to say that I really disagree\nwith this choice:\n\n+ * If there are INCLUDE columns, they are stored after a key value, each\n+ * starting from its own typalign boundary. Unlike IndexTuple, first INCLUDE\n+ * value does not need to start from MAXALIGN boundary, so SPGiST uses private\n+ * routines to access them.\n\nThis seems to require far more new code than it could possibly be worth,\nbecause most of the time anything you could save here is just going\nto disappear into end-of-tuple alignment space anyway -- recall that\nthe overall index tuple length is going to be MAXALIGN'd no matter what.\nI think you should yank this out and try to rely on standard tuple\nconstruction/deconstruction code instead.\n\nI also find it unacceptable that you stuck a tuple descriptor pointer into\nthe rd_amcache structure. The relcache only supports that being a flat\nblob of memory. I see that you tried to hack around that by having\nspgGetCache reconstruct a new tupdesc every time through, but (a) that's\nactually worse than having no cache at all, and (b) spgGetCache doesn't\nreally know much about the longevity of the memory context it's called in.\nThis could easily lead to dangling tuple pointers, serious memory bloat\nfrom repeated tupdesc construction, or quite possibly both. Safer would\nbe to build the tupdesc during initSpGistState(), or maybe just make it\non-demand. In view of the previous point, I'm also wondering if there's\nany way to get the relcache's regular rd_att tupdesc to be useful here,\nso we don't have to build one during scans at all.\n\n(I wondered for a bit about whether you could keep a long-lived private\ntupdesc in the relcache's rd_indexcxt context. 
But it looks like\nrelcache.c sometimes resets rd_amcache without also clearing the\nrd_indexcxt, so that would probably lead to leakage.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Nov 2020 18:34:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": ">\n> I've started to review this, and I've got to say that I really disagree\n> with this choice:\n>\n> + * If there are INCLUDE columns, they are stored after a key value, each\n> + * starting from its own typalign boundary. Unlike IndexTuple, first\n> INCLUDE\n> + * value does not need to start from MAXALIGN boundary, so SPGiST uses\n> private\n> + * routines to access them.\n>\n> This seems to require far more new code than it could possibly be worth,\n> because most of the time anything you could save here is just going\n> to disappear into end-of-tuple alignment space anyway -- recall that\n> the overall index tuple length is going to be MAXALIGN'd no matter what.\n> I think you should yank this out and try to rely on standard tuple\n> construction/deconstruction code instead.\n>\nI'd say that much of the SELECT performance gain of SP-GiST over GiST is\ndue to its lightweight pages, each containing more tuples so we can have\nless page fetches. And this is the main goal of having lightweight tuples.\nPFA my performance measurements for box+cidr selects, with gist and spgist\nindexes built on box key-column and cidr (optionally) include column.\n\nThe way that seems acceptable to me is to add (optional) nulls mask into\nthe end of existing style SpGistLeafTuple header and use indextuple\nroutines to attach attributes after it. In this case, we can reduce the\namount of code at the cost of adding one extra MAXALIGN size to the overall\ntuple size on 32-bit arch as now tuple header size of 12 bit already fits 3\nMAXALIGNS (on 64 bit the header now is shorter than 2 maxaligns (12 bytes\nof 16) and nulls mask will be free of cost). If you mean this I try to make\nchanges soon. What do you think of it?\n\nI also find it unacceptable that you stuck a tuple descriptor pointer into\n> the rd_amcache structure. The relcache only supports that being a flat\n> blob of memory. 
I see that you tried to hack around that by having\n> spgGetCache reconstruct a new tupdesc every time through, but (a) that's\n> actually worse than having no cache at all, and (b) spgGetCache doesn't\n> really know much about the longevity of the memory context it's called in.\n> This could easily lead to dangling tuple pointers, serious memory bloat\n> from repeated tupdesc construction, or quite possibly both. Safer would\n> be to build the tupdesc during initSpGistState(), or maybe just make it\n> on-demand. In view of the previous point, I'm also wondering if there's\n> any way to get the relcache's regular rd_att tupdesc to be useful here,\n> so we don't have to build one during scans at all.\n>\n> (I wondered for a bit about whether you could keep a long-lived private\n> tupdesc in the relcache's rd_indexcxt context. But it looks like\n> relcache.c sometimes resets rd_amcache without also clearing the\n> rd_indexcxt, so that would probably lead to leakage.)\n>\nI will consider this for sure, thanks.",
"msg_date": "Tue, 17 Nov 2020 11:36:56 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "On Tue, Nov 17, 2020 at 11:36 Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n\n> I've started to review this, and I've got to say that I really disagree\n>> with this choice:\n>>\n>> + * If there are INCLUDE columns, they are stored after a key value, each\n>> + * starting from its own typalign boundary. Unlike IndexTuple, first\n>> INCLUDE\n>> + * value does not need to start from MAXALIGN boundary, so SPGiST uses\n>> private\n>> + * routines to access them.\n>>\n> Tom, I took a stab at making the code for tuple creation/decomposition\nmore optimal. Now I see several options for this:\n1. Included values can be added after key value as a whole index tuple. Pro\nof this: it reuses existing code perfectly. Con is that it will introduce\nextra (empty) index tuple header.\n2. Existing state: pro is that in my opinion, it has the least possible\ngaps. The con is the need to duplicate much of the existing code with some\nmodification. Frankly I don't like this duplication very much even if it is\nonly a private spgist code.\n2A. Existing state can be shifted into fewer changes in index_form_tuple\nand index_deform_tuple if I shift the null mask after the tuple header and\nbefore the key value (SpGistTupleHeader+nullmask chunk will be maxaligned).\nThis is what I proposed in the previous answer. I tried to work on this\nvariant but it will need to duplicate index_form_tuple and\nindex_deform_tuple code into private version. The reason is that spgist\ntuple has its own header of different size and in my understanding, it can\nnot be incorporated using index_form_tuple.\n3. I can split index_form_tuple into two parts: a) header adding and size\ncalculation, b) filling attributes. External (a), which could be\nconstructed differently for SpGist, and internal (b) being universal.\n3A. I can make index_form_tuple accept pointer as an argument to create\ntuple in already palloced memory area (with the shift to its start). 
So\nexternal caller will be able to incorporate headers after calling\nindex_form_tuple routine.\n\nMaybe there is some other way I don't imagine yet. Which way do you think\nfor me better to follow? What is your advice?\n\n\n> I also find it unacceptable that you stuck a tuple descriptor pointer into\n>> the rd_amcache structure. The relcache only supports that being a flat\n>> blob of memory. I see that you tried to hack around that by having\n>> spgGetCache reconstruct a new tupdesc every time through, but (a) that's\n>> actually worse than having no cache at all, and (b) spgGetCache doesn't\n>> really know much about the longevity of the memory context it's called in.\n>> This could easily lead to dangling tuple pointers, serious memory bloat\n>> from repeated tupdesc construction, or quite possibly both. Safer would\n>> be to build the tupdesc during initSpGistState(), or maybe just make it\n>> on-demand. In view of the previous point, I'm also wondering if there's\n>> any way to get the relcache's regular rd_att tupdesc to be useful here,\n>> so we don't have to build one during scans at all.\n>>\n>> (I wondered for a bit about whether you could keep a long-lived private\n>> tupdesc in the relcache's rd_indexcxt context. But it looks like\n>> relcache.c sometimes resets rd_amcache without also clearing the\n>> rd_indexcxt, so that would probably lead to leakage.)\n>>\n> I see that FormData_pg_attribute's inside TupleDescData are situated in a\nsingle memory chunk. If I add it at the ending of allocated SpGistCache as\na copy of this chunk (using memcpy), not a pointer to it as it is now, will\nit be safe for use?\nOr maybe it would still be better to initialize the tuple descriptor any\ntime initSpGistState is called without trying to cache it?\n\nWhat will you advise?\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Tue, 17 Nov 2020 21:19:52 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> The way that seems acceptable to me is to add (optional) nulls mask into\n> the end of existing style SpGistLeafTuple header and use indextuple\n> routines to attach attributes after it. In this case, we can reduce the\n> amount of code at the cost of adding one extra MAXALIGN size to the overall\n> tuple size on 32-bit arch as now tuple header size of 12 bit already fits 3\n> MAXALIGNS (on 64 bit the header now is shorter than 2 maxaligns (12 bytes\n> of 16) and nulls mask will be free of cost). If you mean this I try to make\n> changes soon. What do you think of it?\n\nYeah, that was pretty much the same conclusion I came to. For\nINDEX_MAX_KEYS values up to 32, the nulls bitmap will fit into what's\nnow padding space on 64-bit machines. For backwards compatibility,\nwe'd have to be careful that the code knows there's no nulls bitmap in\nan index without included columns, so I'm not sure how messy that will\nbe. But it's worth trying that way to see how it comes out.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 21 Nov 2020 18:27:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": ">\n> > The way that seems acceptable to me is to add (optional) nulls mask into\n> > the end of existing style SpGistLeafTuple header and use indextuple\n> > routines to attach attributes after it. In this case, we can reduce the\n> > amount of code at the cost of adding one extra MAXALIGN size to the\n> overall\n> > tuple size on 32-bit arch as now tuple header size of 12 bit already\n> fits 3\n> > MAXALIGNS (on 64 bit the header now is shorter than 2 maxaligns (12 bytes\n> > of 16) and nulls mask will be free of cost). If you mean this I try to\n> make\n> > changes soon. What do you think of it?\n>\n> Yeah, that was pretty much the same conclusion I came to. For\n> INDEX_MAX_KEYS values up to 32, the nulls bitmap will fit into what's\n> now padding space on 64-bit machines. For backwards compatibility,\n> we'd have to be careful that the code knows there's no nulls bitmap in\n> an index without included columns, so I'm not sure how messy that will\n> be. But it's worth trying that way to see how it comes out.\n>\n\nI made a refactoring of the patch code according to the discussion:\n1. Changed a leaf tuple format to: header - (optional) bitmask - key value\n- (optional) INCLUDE values\n2. Re-use existing code of heap_fill_tuple() to fill data part of a leaf\ntuple\n3. Splitted index_deform_tuple() into two portions: (a) bigger 'inner' one\n- index_deform_anyheader_tuple() - to make processing of index-like tuples\n(now IndexTuple and SpGistLeafTuple) working independent from type of tuple\nheader. (b) a small 'outer' index_deform_tuple() and spgDeformLeafTuple()\nto make all header-specific processing and then call the inner (a)\n4. Inserted a tuple descriptor into the SpGistCache chunk of memory. So\ncleaning the cached chunk will also invalidate the tuple descriptor and not\nmake it dangling or leaked. This also allows not to build it every time\nunless the cache is invalidated.\n5. 
Corrected amroutine->amcaninclude according to new upstream fix.\n6. Returned big chunks that were shifted in spgist_private.h to their\ninitial places where possible and made other cosmetic changes to improve\nthe patch.\n\nPFA v.11 of the patch.\nDo you think the proposed changes are in the right direction?\n\nThank you!\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Thu, 26 Nov 2020 21:48:31 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "I've noticed CI error due to the fact that MSVC doesn't allow arrays of\nflexible size arrays and made a fix for the issue.\nAlso did some minor refinement in tuple creation.\nPFA v12 of a patch.\n\nOn Thu, Nov 26, 2020 at 21:48 Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n\n> > The way that seems acceptable to me is to add (optional) nulls mask into\n>> > the end of existing style SpGistLeafTuple header and use indextuple\n>> > routines to attach attributes after it. In this case, we can reduce the\n>> > amount of code at the cost of adding one extra MAXALIGN size to the\n>> overall\n>> > tuple size on 32-bit arch as now tuple header size of 12 bit already\n>> fits 3\n>> > MAXALIGNS (on 64 bit the header now is shorter than 2 maxaligns (12\n>> bytes\n>> > of 16) and nulls mask will be free of cost). If you mean this I try to\n>> make\n>> > changes soon. What do you think of it?\n>>\n>> Yeah, that was pretty much the same conclusion I came to. For\n>> INDEX_MAX_KEYS values up to 32, the nulls bitmap will fit into what's\n>> now padding space on 64-bit machines. For backwards compatibility,\n>> we'd have to be careful that the code knows there's no nulls bitmap in\n>> an index without included columns, so I'm not sure how messy that will\n>> be. But it's worth trying that way to see how it comes out.\n>>\n>\n> I made a refactoring of the patch code according to the discussion:\n> 1. Changed a leaf tuple format to: header - (optional) bitmask - key value\n> - (optional) INCLUDE values\n> 2. Re-use existing code of heap_fill_tuple() to fill data part of a leaf\n> tuple\n> 3. Splitted index_deform_tuple() into two portions: (a) bigger 'inner' one\n> - index_deform_anyheader_tuple() - to make processing of index-like tuples\n> (now IndexTuple and SpGistLeafTuple) working independent from type of tuple\n> header. (b) a small 'outer' index_deform_tuple() and spgDeformLeafTuple()\n> to make all header-specific processing and then call the inner (a)\n> 4. 
Inserted a tuple descriptor into the SpGistCache chunk of memory. So\n> cleaning the cached chunk will also invalidate the tuple descriptor and not\n> make it dangling or leaked. This also allows not to build it every time\n> unless the cache is invalidated.\n> 5. Corrected amroutine->amcaninclude according to new upstream fix.\n> 6. Returned big chunks that were shifted in spgist_private.h to their\n> initial places where possible and made other cosmetic changes to improve\n> the patch.\n>\n> PFA v.11 of the patch.\n> Do you think the proposed changes are in the right direction?\n>\n> Thank you!\n> --\n> Best regards,\n> Pavel Borisov\n>\n> Postgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n>\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Thu, 3 Dec 2020 16:33:45 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> I've noticed CI error due to the fact that MSVC doesn't allow arrays of\n> flexible size arrays and made a fix for the issue.\n> Also did some minor refinement in tuple creation.\n> PFA v12 of a patch.\n\nThe cfbot's still unhappy --- looks like you omitted a file from the\npatch?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Dec 2020 12:05:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": ">\n> The cfbot's still unhappy --- looks like you omitted a file from the\n> patch?\n>\nYou are right, thank you. Corrected this. PFA v13\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Fri, 4 Dec 2020 21:31:14 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "On 12/4/20 12:31 PM, Pavel Borisov wrote:\n> The cfbot's still unhappy --- looks like you omitted a file from the\n> patch?\n> \n> You are right, thank you. Corrected this. PFA v13\n\nTom, do the changes as enumerated in [1] look like they are going in the \nright direction?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n[1] \nhttps://www.postgresql.org/message-id/CALT9ZEEszJUwsXMWknXQ3k_YbGtQaQwiYxxEGZ-pFGRUDSXdtQ%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 10 Mar 2021 11:48:05 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "David Steele <david@pgmasters.net> writes:\n> Tom, do the changes as enumerated in [1] look like they are going in the \n> right direction?\n\nI spent a little time looking at this, and realized something that may\nor may not be a serious problem. This form of the patch supposes that\nit can use the usual tuple form/deform logic for all columns of a leaf\ntuple including the key column. However, that does not square with\nSPGIST's existing storage convention for pass-by-value key types: we\npresently assume that those are stored in their Datum representation,\nie always 4 or 8 bytes depending on machine word width, even when\ntyplen is less than that.\n\nNow there is an argument why this might not be an unacceptable disk\nformat breakage: there probably aren't any SPGIST indexes with a\npass-by-value leaf key type. We certainly haven't got any such\nopclasses in core, and it's a bit hard to see what the semantics or\nuse-case would be for indexing bools or smallints with SPGIST.\nHowever, doing nothing isn't okay, because if anyone did make such\nan opclass in future, it'd misbehave with this patch (since SGLTDATUM\nwould disagree with the actual storage layout).\n\nThere are a number of options we could consider:\n\n1. Disallow pass-by-value leafType, checking this in spgGetCache().\nThe main advantage of this IMV is that if anyone has written such an\nopclass already, it'd break noisily rather than silently misbehave.\nIt'd also allow simplification of SGLTDATUM by removing its\npass-by-value case, which is kind of nice.\n\n2. Accept the potential format breakage, and keep the patch mostly\nas-is but adjust SGLTDATUM to do the right thing depending on typlen.\n\n3. Decide that we need to preserve the existing rule. We could hackily\nstill use the standard tuple form/deform logic if we told it that the\ndatatype of a pass-by-value key column is INT4 or INT8, depending on\nsizeof(Datum). 
But that could be rather messy.\n\nAnother thing I notice in this immediate area is that the code\npresently assumes it can apply SGLTDATUM even to leaf tuples that\nstore a null key. That's perfectly okay for pass-by-ref key types,\nsince it just means we compute an address we're not going to\ndereference. But it's really rather broken for pass-by-value cases:\nit'll fetch a word from past the end of the tuple. Given recent\nmusings about making the holes in the middle of pages undefined per\nvalgrind, I wonder whether we aren't going to be forced to clean that\nup. Choice #1 looks a little more attractive with that in mind: it'd\nmean there's nothing to fix.\n\nA couple of other observations:\n\n* Making doPickSplit deform all the tuples at once, and thereby need\nfairly large work arrays (which it leaks), seems kind of ugly.\nCouldn't we leave the deforming to the end, and do it one tuple at\na time just as we form the derived tuples? (Then you could use\nfixed-size local arrays of length INDEX_MAX_KEYS.) Could probably\nremove the heapPtrs[] array that way, too.\n\n* Personally I would not have changed the API of spgFormLeafTuple\nto take only the TupleDesc and not the whole SpGistState. That\ndoesn't seem to buy anything, and we'd have to undo it in future\nif spgFormLeafTuple ever needs access to any of the rest of the\nSpGistState.\n\n* The amount of random whitespace changes in the patch is really\nrather annoying. Please run the code through pgindent to undo\nunnecessary changes to existing code lines, and also look through\nit to remove unnecessary additions and removals of blank lines.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Mar 2021 17:18:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "I wrote:\n> I spent a little time looking at this, and realized something that may\n> or may not be a serious problem. This form of the patch supposes that\n> it can use the usual tuple form/deform logic for all columns of a leaf\n> tuple including the key column. However, that does not square with\n> SPGIST's existing storage convention for pass-by-value key types: we\n> presently assume that those are stored in their Datum representation,\n> ie always 4 or 8 bytes depending on machine word width, even when\n> typlen is less than that.\n\n> Now there is an argument why this might not be an unacceptable disk\n> format breakage: there probably aren't any SPGIST indexes with a\n> pass-by-value leaf key type.\n\nOn further contemplation, it occurs to me that if we make the switch\nto \"key values are stored per normal rules\", then even if there is an\nindex with pass-by-value keys out there someplace, it would only break\non big-endian architectures. On little-endian, the extra space\noccupied by the Datum format would just seem to be padding space.\nSo this probably means that the theoretical compatibility hazard is\nsmall enough to be negligible, and we should go with my option #2\n(i.e., we need to replace SGLTDATUM with normal attribute-fetch logic).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Mar 2021 19:42:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": ">\n> On further contemplation, it occurs to me that if we make the switch\n> to \"key values are stored per normal rules\", then even if there is an\n> index with pass-by-value keys out there someplace, it would only break\n> on big-endian architectures. On little-endian, the extra space\n> occupied by the Datum format would just seem to be padding space.\n> So this probably means that the theoretical compatibility hazard is\n> small enough to be negligible, and we should go with my option #2\n> (i.e., we need to replace SGLTDATUM with normal attribute-fetch logic).\n>\n> regards, tom lane\n>\n\nI am sorry for the delay in reply. Now I've returned to the work on the\npatch.\nFirst of all big thanks for good pieces of advice. I especially liked the\nidea of not allocating a big array in a picksplit procedure and doing\ndeform and form tuples on the fly.\nI found all notes mentioned are quite worth integrating into the patch, and\nhave made the next version of a patch (with a pgindent done also). PFA v 14.\n\nI hope I understand the way to modify SGLTDATUM in the right way. If not\nplease let me know. (The macro SGLTDATUM itself is not removed, it calls\nfetch_att. And I find this suitable as the address for the first tuple\nattribute is MAXALIGNed).\n\nThanks again for your consideration. From now I hope to be able to work on\nthe feature with not so big delay.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Thu, 25 Mar 2021 23:47:29 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "In a v14 I forgot to add the test. PFA v15\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Fri, 26 Mar 2021 00:02:03 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> In a v14 I forgot to add the test. PFA v15\n\nI've committed this with a lot of mostly-cosmetic changes.\nThe not-so-cosmetic bits had to do with confusion between\nthe input data type and the leaf type, which isn't really\nyour fault because it was there before :-(.\n\nOne note is that I dropped the added regression test script\n(index_including_spgist.sql) entirely, because I couldn't\nsee that it did anything that justified a permanent expenditure\nof test cycles. It looks like you made that by doing s/gist/spgist/g\non index_including_gist.sql, which might be all right except that\nthat script was designed to test GiST-specific implementation concerns\nthat aren't too relevant to SP-GiST. AFAICT, removing that script had\nexactly zero effect on the test coverage shown by gcov. There are\ncertainly bits of spgist that are depressingly under-covered, but I'm\nafraid we need custom-designed test cases to get at them.\n\n(wanders away wondering if the isolationtester could be used to test\nthe concurrency-sensitive parts of spgvacuum.c ...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Apr 2021 18:52:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
},
{
"msg_contents": ">\n> I've committed this with a lot of mostly-cosmetic changes.\n> The not-so-cosmetic bits had to do with confusion between\n> the input data type and the leaf type, which isn't really\n> your fault because it was there before :-(.\n>\n> One note is that I dropped the added regression test script\n> (index_including_spgist.sql) entirely, because I couldn't\n> see that it did anything that justified a permanent expenditure\n> of test cycles. It looks like you made that by doing s/gist/spgist/g\n> on index_including_gist.sql, which might be all right except that\n> that script was designed to test GiST-specific implementation concerns\n> that aren't too relevant to SP-GiST. AFAICT, removing that script had\n> exactly zero effect on the test coverage shown by gcov. There are\n> certainly bits of spgist that are depressingly under-covered, but I'm\n> afraid we need custom-designed test cases to get at them.\n>\n> (wanders away wondering if the isolationtester could be used to test\n> the concurrency-sensitive parts of spgvacuum.c ...)\n>\n> regards, tom lane\n>\n\nThanks a lot!\nAs for tests I mostly checked the storage and reconstruction of included\nattributes in the spgist index with radix and quadtree, with many included\ncolumns of different types and nulls among the values. But I consider it\ntoo big for regression. I attach radix test just in case. Do you consider\nsomething like this could be useful for testing and should I try to adapt\nsomething like this to make regression? Or do something like this on some\ndatabase already in the regression suite?\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Tue, 6 Apr 2021 15:09:59 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Covering SPGiST index"
}
]
[
{
"msg_contents": "Hi all,\n\nAs far as I understand, in the upcoming version 13, information about\nbuffers used during planning is now available in the explain plan.\n\n[…]\n Planning Time: 0.203 ms\n Buffers: shared hit=14\n[…]\n\nIn the JSON format, the data structure is a bit different:\n\n[…]\n \"Planning\": {\n \"Planning Time\": 0.533,\n \"Shared Hit Blocks\": 14,\n \"Shared Read Blocks\": 0,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0\n },\n[…]\n\nFor a matter of consistency, I wonder if it would be possible to format\nit like the following:\n\n[…]\n Planning:\n Planning Time: 0.203 ms\n Buffers: shared hit=14\n[…]\n\n\nNote: a similar way to format information is already used for JIT.\n\n[…]\n JIT:\n Functions: 3\n Options: Inlining false, Optimization false, Expressions true,\nDeforming true\n Timing: Generation 0.340 ms, Inlining 0.000 ms, Optimization 0.168\nms, Emission 1.907 ms, Total 2.414 ms\n[…]\n\nRegards,\nPierre\n\n\n",
"msg_date": "Fri, 7 Aug 2020 14:30:01 +0200",
"msg_from": "Pierre Giraud <pierre.giraud@dalibo.com>",
"msg_from_op": true,
"msg_subject": "[PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "Hi,\n\nOn Fri, Aug 7, 2020 at 2:30 PM Pierre Giraud <pierre.giraud@dalibo.com> wrote:\n>\n> Hi all,\n>\n> As far as I understand, in the upcoming version 13, information about\n> buffers used during planning is now available in the explain plan.\n\nIndeed.\n\n> […]\n> Planning Time: 0.203 ms\n> Buffers: shared hit=14\n> […]\n>\n> In the JSON format, the data structure is a bit different:\n>\n> […]\n> \"Planning\": {\n> \"Planning Time\": 0.533,\n> \"Shared Hit Blocks\": 14,\n> \"Shared Read Blocks\": 0,\n> \"Shared Dirtied Blocks\": 0,\n> \"Shared Written Blocks\": 0,\n> \"Local Hit Blocks\": 0,\n> \"Local Read Blocks\": 0,\n> \"Local Dirtied Blocks\": 0,\n> \"Local Written Blocks\": 0,\n> \"Temp Read Blocks\": 0,\n> \"Temp Written Blocks\": 0\n> },\n> […]\n>\n> For a matter of consistency, I wonder if it would be possible to format\n> it like the following:\n>\n> […]\n> Planning:\n> Planning Time: 0.203 ms\n> Buffers: shared hit=14\n> […]\n\nI agree that this output looks more consistent with other output,\nincluding JIT as you mentioned. I'll send a patch for that if there's\nno objection.\n\nOut of curiosity, is the current text output actually harder to parse\nthan the one you're suggesting?\n\n\n",
"msg_date": "Fri, 7 Aug 2020 14:52:10 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "\n\nLe 07/08/2020 à 14:52, Julien Rouhaud a écrit :\n> Hi,\n> \n> On Fri, Aug 7, 2020 at 2:30 PM Pierre Giraud <pierre.giraud@dalibo.com> wrote:\n>>\n>> Hi all,\n>>\n>> As far as I understand, in the upcoming version 13, information about\n>> buffers used during planning is now available in the explain plan.\n> \n> Indeed.\n> \n>> […]\n>> Planning Time: 0.203 ms\n>> Buffers: shared hit=14\n>> […]\n>>\n>> In the JSON format, the data structure is a bit different:\n>>\n>> […]\n>> \"Planning\": {\n>> \"Planning Time\": 0.533,\n>> \"Shared Hit Blocks\": 14,\n>> \"Shared Read Blocks\": 0,\n>> \"Shared Dirtied Blocks\": 0,\n>> \"Shared Written Blocks\": 0,\n>> \"Local Hit Blocks\": 0,\n>> \"Local Read Blocks\": 0,\n>> \"Local Dirtied Blocks\": 0,\n>> \"Local Written Blocks\": 0,\n>> \"Temp Read Blocks\": 0,\n>> \"Temp Written Blocks\": 0\n>> },\n>> […]\n>>\n>> For a matter of consistency, I wonder if it would be possible to format\n>> it like the following:\n>>\n>> […]\n>> Planning:\n>> Planning Time: 0.203 ms\n>> Buffers: shared hit=14\n>> […]\n> \n> I agree that this output looks more consistent with other output,\n> including JIT as you mentioned. 
I'll send a patch for that if there's\n> no objection.\n\nThanks a lot!\n\n> \n> Out of curiosity, is the current text output actually harder to parse\n> than the one you're suggesting?\n> \n\nI don't want to speak in the name of developers of others parsing tools\nbut this should not require a lot of work to parse the output I'm proposing.\nIt would be nice to have their opinion though, especially Hubert depesz\nLubaczewski's since he already integrated the change:\nhttps://gitlab.com/depesz/Pg--Explain/-/commit/4a760136ee69ee4929625d4e4022f79ac60b763f\n\nHowever, as far as I know, he's not doing anything with the buffers\ninformation with the \"Planning\" section yet.\n\nTo answer your question, I think that the new output would make the\nparser a little bit easier to write because it would make things a bit\nclearer (ie. more separated) so less prone to errors.\n\n\n",
"msg_date": "Fri, 7 Aug 2020 15:51:46 +0200",
"msg_from": "Pierre Giraud <pierre.giraud@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On Fri, Aug 7, 2020 at 3:51 PM Pierre Giraud <pierre.giraud@dalibo.com> wrote:\n>\n> Le 07/08/2020 à 14:52, Julien Rouhaud a écrit :\n> > Hi,\n> >\n> > On Fri, Aug 7, 2020 at 2:30 PM Pierre Giraud <pierre.giraud@dalibo.com> wrote:\n> >>\n> >> Hi all,\n> >>\n> >> As far as I understand, in the upcoming version 13, information about\n> >> buffers used during planning is now available in the explain plan.\n> >\n> > Indeed.\n> >\n> >> […]\n> >> Planning Time: 0.203 ms\n> >> Buffers: shared hit=14\n> >> […]\n> >>\n> >> In the JSON format, the data structure is a bit different:\n> >>\n> >> […]\n> >> \"Planning\": {\n> >> \"Planning Time\": 0.533,\n> >> \"Shared Hit Blocks\": 14,\n> >> \"Shared Read Blocks\": 0,\n> >> \"Shared Dirtied Blocks\": 0,\n> >> \"Shared Written Blocks\": 0,\n> >> \"Local Hit Blocks\": 0,\n> >> \"Local Read Blocks\": 0,\n> >> \"Local Dirtied Blocks\": 0,\n> >> \"Local Written Blocks\": 0,\n> >> \"Temp Read Blocks\": 0,\n> >> \"Temp Written Blocks\": 0\n> >> },\n> >> […]\n> >>\n> >> For a matter of consistency, I wonder if it would be possible to format\n> >> it like the following:\n> >>\n> >> […]\n> >> Planning:\n> >> Planning Time: 0.203 ms\n> >> Buffers: shared hit=14\n> >> […]\n> >\n> > I agree that this output looks more consistent with other output,\n> > including JIT as you mentioned. 
I'll send a patch for that if there's\n> > no objection.\n>\n> Thanks a lot!\n>\n> >\n> > Out of curiosity, is the current text output actually harder to parse\n> > than the one you're suggesting?\n> >\n>\n> I don't want to speak in the name of developers of others parsing tools\n> but this should not require a lot of work to parse the output I'm proposing.\n> It would be nice to have their opinion though, especially Hubert depesz\n> Lubaczewski's since he already integrated the change:\n> https://gitlab.com/depesz/Pg--Explain/-/commit/4a760136ee69ee4929625d4e4022f79ac60b763f\n>\n> However, as far as I know, he's not doing anything with the buffers\n> information with the \"Planning\" section yet.\n>\n> To answer your question, I think that the new output would make the\n> parser a little bit easier to write because it would make things a bit\n> clearer (ie. more separated) so less prone to errors.\n\nAdding depesz.\n\n\n",
"msg_date": "Thu, 13 Aug 2020 09:31:51 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On Fri, Aug 07, 2020 at 02:30:01PM +0200, Pierre Giraud wrote:\n> Hi all,\n> \n> As far as I understand, in the upcoming version 13, information about\n> buffers used during planning is now available in the explain plan.\n> \n> […]\n> Planning Time: 0.203 ms\n> Buffers: shared hit=14\n> […]\n> \n> For a matter of consistency, I wonder if it would be possible to format\n> it like the following:\n> \n> […]\n> Planning:\n> Planning Time: 0.203 ms\n> Buffers: shared hit=14\n\nThanks for reporting. I added this here.\nhttps://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 18 Aug 2020 22:27:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On Tue, Aug 18, 2020 at 10:27:06PM -0500, Justin Pryzby wrote:\n> On Fri, Aug 07, 2020 at 02:30:01PM +0200, Pierre Giraud wrote:\n> > Hi all,\n> > \n> > As far as I understand, in the upcoming version 13, information about\n> > buffers used during planning is now available in the explain plan.\n> > \n> > […]\n> > Planning Time: 0.203 ms\n> > Buffers: shared hit=14\n> > […]\n> > \n> > For a matter of consistency, I wonder if it would be possible to format\n> > it like the following:\n> > \n> > […]\n> > Planning:\n> > Planning Time: 0.203 ms\n> > Buffers: shared hit=14\n> \n> Thanks for reporting. I added this here.\n> https://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items\n\nThanks Justin!\n\nHearing no objection, here's a patch to change the output as suggested by\nPierre:\n\n=# explain (analyze, buffers) select * from pg_class;\n QUERY PLAN >\n------------------------------------------------------------------------------------------------------->\n Seq Scan on pg_class (cost=0.00..16.86 rows=386 width=265) (actual time=0.020..0.561 rows=386 loops=1)\n Buffers: shared hit=9 read=4\n Planning:\n Planning Time: 4.345 ms\n Buffers: shared hit=103 read=12\n Execution Time: 1.447 ms\n(6 rows)\n\n=# explain (analyze, buffers, format json) select * from pg_class;\n QUERY PLAN\n-------------------------------------\n [ +\n { +\n \"Plan\": { +\n \"Node Type\": \"Seq Scan\", +\n \"Parallel Aware\": false, +\n[...]\n \"Temp Written Blocks\": 0 +\n }, +\n \"Planning\": { +\n \"Planning Time\": 4.494, +\n \"Shared Hit Blocks\": 103, +\n \"Shared Read Blocks\": 12, +\n \"Shared Dirtied Blocks\": 0, +\n \"Shared Written Blocks\": 0, +\n \"Local Hit Blocks\": 0, +\n \"Local Read Blocks\": 0, +\n \"Local Dirtied Blocks\": 0, +\n \"Local Written Blocks\": 0, +\n \"Temp Read Blocks\": 0, +\n \"Temp Written Blocks\": 0 +\n }, +\n \"Triggers\": [ +\n ], +\n \"Execution Time\": 1.824 +\n } +\n ]\n(1 row)",
"msg_date": "Wed, 19 Aug 2020 09:21:33 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On Wed, 19 Aug 2020 at 19:22, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> Hearing no objection, here's a patch to change the output as suggested by\n> Pierre:\n>\n> =# explain (analyze, buffers) select * from pg_class;\n> QUERY PLAN >\n> ------------------------------------------------------------------------------------------------------->\n> Seq Scan on pg_class (cost=0.00..16.86 rows=386 width=265) (actual time=0.020..0.561 rows=386 loops=1)\n> Buffers: shared hit=9 read=4\n> Planning:\n> Planning Time: 4.345 ms\n> Buffers: shared hit=103 read=12\n> Execution Time: 1.447 ms\n> (6 rows)\n\nI don't really have anything to say about the change in format, but on\nlooking at the feature, I do find it strange that I need to specify\nANALYZE to get EXPLAIN to output the buffer information for the\nplanner.\n\nI'd expect that EXPLAIN (BUFFERS) would work just fine, but I get:\n\nERROR: EXPLAIN option BUFFERS requires ANALYZE\n\nThs docs [1] also mention this is disallowed per:\n\n\"This parameter may only be used when ANALYZE is also enabled.\"\n\nI just don't agree that it should be. What if I want to get an\nindication of why the planner is slow but I don't want to wait for the\nquery to execute? or don't want to execute it at all, say it's a\nDELETE!\n\nIt looks like we'd need to make BUFFERS imply SUMMARY, perhaps\nsomething along the lines of what we do now with ANALYZE with:\n\n/* if the summary was not set explicitly, set default value */\nes->summary = (summary_set) ? es->summary : es->analyze;\n\nHowever, I'm not quite sure how we should handle if someone does:\nEXPLAIN (BUFFERS on, SUMMARY off). Without the summary, there's no\nplace to print the buffers, which seems bad as they asked for buffers.\n\nDavid\n\n[1] https://www.postgresql.org/docs/devel/sql-explain.html\n\n\n",
"msg_date": "Wed, 19 Aug 2020 20:49:48 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On Wed, Aug 19, 2020 at 08:49:48PM +1200, David Rowley wrote:\n> On Wed, 19 Aug 2020 at 19:22, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > Hearing no objection, here's a patch to change the output as suggested by\n> > Pierre:\n> >\n> > =# explain (analyze, buffers) select * from pg_class;\n> > QUERY PLAN >\n> > ------------------------------------------------------------------------------------------------------->\n> > Seq Scan on pg_class (cost=0.00..16.86 rows=386 width=265) (actual time=0.020..0.561 rows=386 loops=1)\n> > Buffers: shared hit=9 read=4\n> > Planning:\n> > Planning Time: 4.345 ms\n> > Buffers: shared hit=103 read=12\n> > Execution Time: 1.447 ms\n> > (6 rows)\n> \n> I don't really have anything to say about the change in format, but on\n> looking at the feature, I do find it strange that I need to specify\n> ANALYZE to get EXPLAIN to output the buffer information for the\n> planner.\n> \n> I'd expect that EXPLAIN (BUFFERS) would work just fine, but I get:\n> \n> ERROR: EXPLAIN option BUFFERS requires ANALYZE\n> \n> Ths docs [1] also mention this is disallowed per:\n> \n> \"This parameter may only be used when ANALYZE is also enabled.\"\n> \n> I just don't agree that it should be. What if I want to get an\n> indication of why the planner is slow but I don't want to wait for the\n> query to execute? or don't want to execute it at all, say it's a\n> DELETE!\n\n\nI quite agree, this restriction is unhelpful since we have planning buffer\ninformation.\n\n\n> \n> It looks like we'd need to make BUFFERS imply SUMMARY\n\n\n+1\n\n\n> \n> However, I'm not quite sure how we should handle if someone does:\n> EXPLAIN (BUFFERS on, SUMMARY off). Without the summary, there's no\n> place to print the buffers, which seems bad as they asked for buffers.\n\n\nBut this won't be as much a problem if ANALYZE is asked, and having different\nbehaviors isn't appealing. 
So maybe it's better to let people get what they\nasked for even if that's contradictory?\n\n\n",
"msg_date": "Wed, 19 Aug 2020 11:04:29 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On Wed, 19 Aug 2020 at 21:05, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Aug 19, 2020 at 08:49:48PM +1200, David Rowley wrote:\n> > However, I'm not quite sure how we should handle if someone does:\n> > EXPLAIN (BUFFERS on, SUMMARY off). Without the summary, there's no\n> > place to print the buffers, which seems bad as they asked for buffers.\n>\n>\n> But this won't be as much a problem if ANALYZE is asked, and having different\n> behaviors isn't appealing. So maybe it's better to let people get what they\n> asked for even if that's contradictory?\n\nI'd say BUFFERS on, BUFFERS off is contradictory. I don't think\nBUFFERS, SUMMARY OFF is. It's just that we show the buffer details for\nthe planner in the summary. Since \"summary\" is not exactly a word\nthat describes what you're asking EXPLAIN to do, I wouldn't blame\nusers if they got confused as to why their BUFFERS on request was not\ndisplayed.\n\nWe do use errors for weird combinations already, e.g:\n\npostgres=# explain (timing on) select * from t1 where a > 4000000;\nERROR: EXPLAIN option TIMING requires ANALYZE\n\nso, maybe we can just error if analyze == off AND buffers == on AND\nsummary == off. We likely should pay attention to analyze there as it\nseems perfectly fine to EXPLAIN (ANALYZE, BUFFERS, SUMMARY off). We\nquite often do SUMMARY off for the regression tests... I think that\nmight have been why it was added in the first place.\n\nDavid\n\n\n",
"msg_date": "Wed, 19 Aug 2020 22:39:49 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On Wed, 19 Aug 2020 at 22:39, David Rowley <dgrowleyml@gmail.com> wrote:\n> so, maybe we can just error if analyze == off AND buffers == on AND\n> summary == off. We likely should pay attention to analyze there as it\n> seems perfectly fine to EXPLAIN (ANALYZE, BUFFERS, SUMMARY off). We\n> quite often do SUMMARY off for the regression tests... I think that\n> might have been why it was added in the first place.\n\nI had something like the attached in mind.\n\npostgres=# explain (buffers) select * from t1 where a > 4000000;\n QUERY PLAN\n--------------------------------------------------------------------------\n Index Only Scan using t1_pkey on t1 (cost=0.42..10.18 rows=100 width=4)\n Index Cond: (a > 4000000)\n Planning Time: 13.341 ms\n Buffers: shared hit=2735\n(4 rows)\n\nIt does look a bit weirder if the planner didn't do any buffer work:\n\npostgres=# explain (buffers) select * from pg_class;\n QUERY PLAN\n--------------------------------------------------------------\n Seq Scan on pg_class (cost=0.00..443.08 rows=408 width=768)\n Planning Time: 0.136 ms\n(2 rows)\n\nbut that's not a combination that people were able to use before, so I\nthink it's ok to show the planning time there.\n\nDavid",
"msg_date": "Wed, 19 Aug 2020 23:24:37 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On 2020/08/19 19:39, David Rowley wrote:\n> On Wed, 19 Aug 2020 at 21:05, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> On Wed, Aug 19, 2020 at 08:49:48PM +1200, David Rowley wrote:\n>>> However, I'm not quite sure how we should handle if someone does:\n>>> EXPLAIN (BUFFERS on, SUMMARY off). Without the summary, there's no\n>>> place to print the buffers, which seems bad as they asked for buffers.\n>>\n>>\n>> But this won't be as much a problem if ANALYZE is asked, and having different\n>> behaviors isn't appealing. So maybe it's better to let people get what they\n>> asked for even if that's contradictory?\n> \n> I'd say BUFFERS on, BUFFERS off is contradictory. I don't think\n> BUFFERS, SUMMARY OFF is. It's just that we show the buffer details for\n> the planner in the summary. Since \"summary\" is not exactly a word\n> that describes what you're asking EXPLAIN to do, I wouldn't blame\n> users if they got confused as to why their BUFFERS on request was not\n> displayed.\n\nDisplaying the planner's buffer usage under summary is the root cause of\nthe confusion? 
If so, what about displaying that outside summary?\nAttached is the POC patch that I'm just thinking.\n\nWith the patch, for example, whatever \"summary\" settng is, \"buffers on\"\ndisplays the planner's buffer usage if it happens.\n\n=# explain (buffers on, summary off) select * from t;\n QUERY PLAN\n-----------------------------------------------------\n Seq Scan on t (cost=0.00..32.60 rows=2260 width=8)\n Planning:\n Buffers: shared hit=16 read=6\n(3 rows)\n\n\nIf \"summary\" is enabled, the planning time is also displayed.\n\n=# explain (buffers on, summary on) select * from t;\n QUERY PLAN\n-----------------------------------------------------\n Seq Scan on t (cost=0.00..32.60 rows=2260 width=8)\n Planning:\n Buffers: shared hit=16 read=6\n Planning Time: 0.904 ms\n(4 rows)\n\n\nIf the planner's buffer usage doesn't happen, it's not displayed\n(in text format).\n\n=# explain (buffers on, summary on) select * from t;\n QUERY PLAN\n-----------------------------------------------------\n Seq Scan on t (cost=0.00..32.60 rows=2260 width=8)\n Planning Time: 0.064 ms\n(2 rows)\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 20 Aug 2020 00:31:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On Thu, 20 Aug 2020 at 03:31, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/08/19 19:39, David Rowley wrote:\n> > On Wed, 19 Aug 2020 at 21:05, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >>\n> >> On Wed, Aug 19, 2020 at 08:49:48PM +1200, David Rowley wrote:\n> >>> However, I'm not quite sure how we should handle if someone does:\n> >>> EXPLAIN (BUFFERS on, SUMMARY off). Without the summary, there's no\n> >>> place to print the buffers, which seems bad as they asked for buffers.\n> >>\n> >>\n> >> But this won't be as much a problem if ANALYZE is asked, and having different\n> >> behaviors isn't appealing. So maybe it's better to let people get what they\n> >> asked for even if that's contradictory?\n> >\n> > I'd say BUFFERS on, BUFFERS off is contradictory. I don't think\n> > BUFFERS, SUMMARY OFF is. It's just that we show the buffer details for\n> > the planner in the summary. Since \"summary\" is not exactly a word\n> > that describes what you're asking EXPLAIN to do, I wouldn't blame\n> > users if they got confused as to why their BUFFERS on request was not\n> > displayed.\n>\n> Displaying the planner's buffer usage under summary is the root cause of\n> the confusion? If so, what about displaying that outside summary?\n> Attached is the POC patch that I'm just thinking.\n\nI had a look at this and I like it better than what I proposed earlier.\n\nThe change to show_buffer_usage() is a bit ugly, but I'm not really\nseeing a better way to do it. Perhaps that can be improved later if we\never find that there's some other special case to add.\n\nDavid\n\n\n",
"msg_date": "Thu, 20 Aug 2020 11:33:34 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On Thu, 20 Aug 2020 at 03:31, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> With the patch, for example, whatever \"summary\" settng is, \"buffers on\"\n> displays the planner's buffer usage if it happens.\n\nI forgot to mention earlier, you'll also need to remove the part in\nthe docs that mentions BUFFERS requires ANALYZE.\n\nDavid\n\n\n",
"msg_date": "Thu, 20 Aug 2020 16:00:40 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On 2020/08/20 13:00, David Rowley wrote:\n> On Thu, 20 Aug 2020 at 03:31, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> With the patch, for example, whatever \"summary\" settng is, \"buffers on\"\n>> displays the planner's buffer usage if it happens.\n> \n> I forgot to mention earlier, you'll also need to remove the part in\n> the docs that mentions BUFFERS requires ANALYZE.\n\nThanks for the review! I removed that.\nAttached is the updated version of the patch.\nI also added the regression test performing \"explain (buffers on)\"\nwithout \"analyze\" option.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 20 Aug 2020 16:58:38 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "Can you please show what the plan would look like for?\n\n=# explain (buffers on, summary on, format JSON) select * from t;\n\n\n\nLe 20/08/2020 à 09:58, Fujii Masao a écrit :\n> \n> \n> On 2020/08/20 13:00, David Rowley wrote:\n>> On Thu, 20 Aug 2020 at 03:31, Fujii Masao\n>> <masao.fujii@oss.nttdata.com> wrote:\n>>> With the patch, for example, whatever \"summary\" settng is, \"buffers on\"\n>>> displays the planner's buffer usage if it happens.\n>>\n>> I forgot to mention earlier, you'll also need to remove the part in\n>> the docs that mentions BUFFERS requires ANALYZE.\n> \n> Thanks for the review! I removed that.\n> Attached is the updated version of the patch.\n> I also added the regression test performing \"explain (buffers on)\"\n> without \"analyze\" option.\n> \n> Regards,\n> \n\n\n",
"msg_date": "Thu, 20 Aug 2020 10:03:12 +0200",
"msg_from": "Pierre Giraud <pierre.giraud@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "\n\nOn 2020/08/20 17:03, Pierre Giraud wrote:\n> Can you please show what the plan would look like for?\n> \n> =# explain (buffers on, summary on, format JSON) select * from t;\n\nWith my patch, the following is reported in that case.\n\n=# explain (buffers on, summary on, format JSON) select * from pg_class;\n QUERY PLAN\n------------------------------------\n [ +\n { +\n \"Plan\": { +\n \"Node Type\": \"Seq Scan\", +\n \"Parallel Aware\": false, +\n \"Relation Name\": \"pg_class\",+\n \"Alias\": \"pg_class\", +\n \"Startup Cost\": 0.00, +\n \"Total Cost\": 16.87, +\n \"Plan Rows\": 387, +\n \"Plan Width\": 265, +\n \"Shared Hit Blocks\": 0, +\n \"Shared Read Blocks\": 0, +\n \"Shared Dirtied Blocks\": 0, +\n \"Shared Written Blocks\": 0, +\n \"Local Hit Blocks\": 0, +\n \"Local Read Blocks\": 0, +\n \"Local Dirtied Blocks\": 0, +\n \"Local Written Blocks\": 0, +\n \"Temp Read Blocks\": 0, +\n \"Temp Written Blocks\": 0 +\n }, +\n \"Planning\": { +\n \"Shared Hit Blocks\": 103, +\n \"Shared Read Blocks\": 12, +\n \"Shared Dirtied Blocks\": 0, +\n \"Shared Written Blocks\": 0, +\n \"Local Hit Blocks\": 0, +\n \"Local Read Blocks\": 0, +\n \"Local Dirtied Blocks\": 0, +\n \"Local Written Blocks\": 0, +\n \"Temp Read Blocks\": 0, +\n \"Temp Written Blocks\": 0 +\n }, +\n \"Planning Time\": 8.132 +\n } +\n ]\n(1 row)\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 20 Aug 2020 17:07:57 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On Thu, 20 Aug 2020 at 19:58, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/08/20 13:00, David Rowley wrote:\n> > I forgot to mention earlier, you'll also need to remove the part in\n> > the docs that mentions BUFFERS requires ANALYZE.\n>\n> Thanks for the review! I removed that.\n> Attached is the updated version of the patch.\n> I also added the regression test performing \"explain (buffers on)\"\n> without \"analyze\" option.\n\nLooks good to me.\n\nDavid\n\n\n",
"msg_date": "Thu, 20 Aug 2020 22:28:47 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On Thu, Aug 20, 2020 at 12:29 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 20 Aug 2020 at 19:58, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> > On 2020/08/20 13:00, David Rowley wrote:\n> > > I forgot to mention earlier, you'll also need to remove the part in\n> > > the docs that mentions BUFFERS requires ANALYZE.\n> >\n> > Thanks for the review! I removed that.\n> > Attached is the updated version of the patch.\n> > I also added the regression test performing \"explain (buffers on)\"\n> > without \"analyze\" option.\n>\n> Looks good to me.\n\nLooks good to me too.\n\n\n",
"msg_date": "Thu, 20 Aug 2020 15:34:33 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "\n\nOn 2020/08/20 22:34, Julien Rouhaud wrote:\n> On Thu, Aug 20, 2020 at 12:29 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> On Thu, 20 Aug 2020 at 19:58, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>> On 2020/08/20 13:00, David Rowley wrote:\n>>>> I forgot to mention earlier, you'll also need to remove the part in\n>>>> the docs that mentions BUFFERS requires ANALYZE.\n>>>\n>>> Thanks for the review! I removed that.\n>>> Attached is the updated version of the patch.\n>>> I also added the regression test performing \"explain (buffers on)\"\n>>> without \"analyze\" option.\n>>\n>> Looks good to me.\n> \n> Looks good to me too.\n\nDavid and Julien, thanks for the review! I'd like to wait for\nPierre's opinion about this change before committing the patch.\n\nPierre,\ncould you share your opinion about this change?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 21 Aug 2020 00:41:34 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "Le 20/08/2020 à 17:41, Fujii Masao a écrit :\n> \n> \n> On 2020/08/20 22:34, Julien Rouhaud wrote:\n>> On Thu, Aug 20, 2020 at 12:29 PM David Rowley <dgrowleyml@gmail.com>\n>> wrote:\n>>>\n>>> On Thu, 20 Aug 2020 at 19:58, Fujii Masao\n>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>> On 2020/08/20 13:00, David Rowley wrote:\n>>>>> I forgot to mention earlier, you'll also need to remove the part in\n>>>>> the docs that mentions BUFFERS requires ANALYZE.\n>>>>\n>>>> Thanks for the review! I removed that.\n>>>> Attached is the updated version of the patch.\n>>>> I also added the regression test performing \"explain (buffers on)\"\n>>>> without \"analyze\" option.\n>>>\n>>> Looks good to me.\n>>\n>> Looks good to me too.\n> \n> David and Julien, thanks for the review! I'd like to wait for\n> Pierre's opinion about this change before committing the patch.\n> \n> Pierre,\n> could you share your opinion about this change?\n\nIt looks good to me too. Thanks a lot!\nLet's not forget to notify Hubert (depesz) once integrated.\n\n> \n> Regards,\n> \n\n\n",
"msg_date": "Fri, 21 Aug 2020 07:54:13 +0200",
"msg_from": "Pierre Giraud <pierre.giraud@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "\n\nOn 2020/08/21 14:54, Pierre Giraud wrote:\n> Le 20/08/2020 à 17:41, Fujii Masao a écrit :\n>>\n>>\n>> On 2020/08/20 22:34, Julien Rouhaud wrote:\n>>> On Thu, Aug 20, 2020 at 12:29 PM David Rowley <dgrowleyml@gmail.com>\n>>> wrote:\n>>>>\n>>>> On Thu, 20 Aug 2020 at 19:58, Fujii Masao\n>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>>\n>>>>> On 2020/08/20 13:00, David Rowley wrote:\n>>>>>> I forgot to mention earlier, you'll also need to remove the part in\n>>>>>> the docs that mentions BUFFERS requires ANALYZE.\n>>>>>\n>>>>> Thanks for the review! I removed that.\n>>>>> Attached is the updated version of the patch.\n>>>>> I also added the regression test performing \"explain (buffers on)\"\n>>>>> without \"analyze\" option.\n>>>>\n>>>> Looks good to me.\n>>>\n>>> Looks good to me too.\n>>\n>> David and Julien, thanks for the review! I'd like to wait for\n>> Pierre's opinion about this change before committing the patch.\n>>\n>> Pierre,\n>> could you share your opinion about this change?\n> \n> It looks good to me too. Thanks a lot!\n\nPushed. Thanks!\n\n> Let's not forget to notify Hubert (depesz) once integrated.\n\nYou're going to do that?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 21 Aug 2020 20:52:34 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> Pushed. Thanks!\n\nBuildfarm shows this patch has got problems under\n-DRELCACHE_FORCE_RELEASE and/or -DCATCACHE_FORCE_RELEASE:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2020-08-21%2011%3A54%3A12\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Aug 2020 10:53:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On 2020/08/21 23:53, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> Pushed. Thanks!\n> \n> Buildfarm shows this patch has got problems under\n> -DRELCACHE_FORCE_RELEASE and/or -DCATCACHE_FORCE_RELEASE:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2020-08-21%2011%3A54%3A12\n\nThanks for the report!\n\nThis happens because, in text format, whether \"Planning:\" line is output\nvaries depending on the system state. So explain_filter() should ignore\n\"Planning:\" line. Patch attached. I'm now checking whether the patched\nversion works fine.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Sat, 22 Aug 2020 01:04:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "\n\nOn 2020/08/22 1:04, Fujii Masao wrote:\n> \n> \n> On 2020/08/21 23:53, Tom Lane wrote:\n>> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>>> Pushed. Thanks!\n>>\n>> Buildfarm shows this patch has got problems under\n>> -DRELCACHE_FORCE_RELEASE and/or -DCATCACHE_FORCE_RELEASE:\n>>\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2020-08-21%2011%3A54%3A12\n> \n> Thanks for the report!\n> \n> This happens because, in text format, whether \"Planning:\" line is output\n> varies depending on the system state. So explain_filter() should ignore\n> \"Planning:\" line. Patch attached. I'm now checking whether the patched\n> version works fine.\n\nI pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 22 Aug 2020 01:54:20 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
},
{
"msg_contents": "On Fri, Aug 21, 2020 at 07:54:13AM +0200, Pierre Giraud wrote:\n> It looks good to me too. Thanks a lot!\n> Let's not forget to notify Hubert (depesz) once integrated.\n\nThanks a lot, and sorry for not responding earlier - vacation.\n\nBest regards,\n\ndepesz\n\n\n\n",
"msg_date": "Mon, 24 Aug 2020 08:53:29 +0200",
"msg_from": "hubert depesz lubaczewski <depesz@depesz.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG13] Planning (time + buffers) data structure in explain plan\n (format text)"
}
] |
[
{
"msg_contents": "I'm thinking about whether we should get rid of the distprep target, the \nstep in the preparation of the official source tarball that creates a \nbunch of prebuilt files using bison, flex, perl, etc. for inclusion in \nthe tarball. I think this concept is no longer fitting for contemporary \nsoftware distribution.\n\nThere is a lot of interest these days in making the artifacts of \nsoftware distribution traceable, for security and legal reasons. You \ncan trace the code from an author into Git, from Git into a tarball, \nsomewhat from a tarball into a binary package (for example using \nreproduceable builds), from a binary package onto a user's system. \nHaving some mystery prebuilt files in the middle there does not feel \nright. Packaging guidelines nowadays tend to disfavor such practices \nand either suggest, recommend, or require removing and rebuilding such \nfiles. This whole thing was fairly cavalier when we shipped gram.c, \nscan.c, and one or two other files, but now the number of prebuilt files \nis more than 100, not including the documentation, so this is a bit more \nserious.\n\nPractically, who even uses source tarballs these days? They are a \nvehicle for packagers, but packagers are not really helped by adding a \nbunch of prebuilt files. I think this practice started before there \neven were things like rpm. Nowadays, most people who want to work with \nthe source should and probably do use git, so making the difference \nbetween a git checkout and a source tarball smaller would probably be \ngood. And it would also make the actual tarball smaller.\n\nThe practical costs of this are also not negligible. Because of the \nparticular way configure handles bison and flex, it happens a bunch of \ntimes on new and test systems that the build proceeds and then tells you \nyou should have installed bison 5 minutes ago. Also, extensions cannot \nrely on bison, flex, or perl being available, except it often works so \nit's not dealt with correctly.\n\nWho benefits from these prebuilt files? I doubt anyone actually has \nproblems obtaining useful installations of bison, flex, or perl. There \nis the documentation build, but that also seems pretty robust nowadays \nand in any case you don't need to build the documentation to get a \nuseful installation. We could make some adjustments so that not \nbuilding the documentation is more accessible. The only users of this \nwould appear to be those not using git and not using any packaging. \nThat number is surely not zero, but it's probably very small and doesn't \nseem worth catering to specifically.\n\nThoughts?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 8 Aug 2020 09:45:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "get rid of distprep?"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I'm thinking about whether we should get rid of the distprep target, ...\n\n> Who benefits from these prebuilt files? I doubt anyone actually has \n> problems obtaining useful installations of bison, flex, or perl.\n\nI'm sure it was a bigger issue twenty years ago, but yeah, nowadays\nour minimum requirements for those tools are so ancient that everybody\nwho cares to build from source should have usable versions available.\n\nI think the weak spot in your argument, though, is the documentation.\nThere is basically nothing that is standardized or reproducible in\nthat toolchain, as every platform names and subdivides the relevant\npackages differently, if they exist at all. I was reminded of that\njust recently when I updated my main workstation to RHEL8, and had to\njump through a lot of hoops to get everything installed that's needed\nto build the docs (and I still lack the tools for some of the weirder\nproducts such as epub). I'd be willing to say \"you must have bison,\nflex, and perl to build\" --- and maybe we could even avoid having a\nlong discussion about what \"perl\" means in this context --- but I\nfear the doc tools situation would be a mess.\n\n> The only users of this \n> would appear to be those not using git and not using any packaging. \n\nNo, there's the packagers themselves who would be bearing the brunt of\nrediscovering how to build the docs on their platforms. And if the\nargument is that there's a benefit to them of making the build more\nreproducible, I'm not sure I buy it, because of (1) timestamps in the\noutput files and (2) docbook's willingness to try to download missing\nbits off the net. (2) is a huge and not very obvious hazard to\nreproducibility.\n\nBut maybe you ought to be surveying -packagers about the question\ninstead of theorizing here. Would *they* see this as a net benefit?\n\nOne other point to consider is that distprep or no distprep, I'd be\nquite sad if the distclean target went away. That's extremely useful\nin normal development workflows to tear down everything that depends\non configure output, without giving up some of the more expensive\nbuild products such as gram.c and preproc.c.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Aug 2020 11:09:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: get rid of distprep?"
}
] |
[
{
"msg_contents": "Hi,\n\nAttached is a draft of the release announcement for the update release\non 2020-08-13, which also includes the release of PostgreSQL 13 Beta 3.\nReviews and feedback are welcome.\n\nThis is a fairly hefty release announcement as it includes notes both\nabout the update release and the beta. I tried to keep the notes about\nBeta 3 focused on the significant changes, with a reference to the open\nitems page. If you believe I missed something that is significant,\nplease let me know.\n\nPlease be sure all feedback is delivered by 2020-08-12 AoE.\n\nThanks,\n\nJonathan",
"msg_date": "Sat, 8 Aug 2020 14:27:27 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "2020-08-13 Update + PostgreSQL 13 Beta 3 Release Announcement Draft"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nHere are some more experimental patches to reduce system calls.\n\n0001 skips sending signals when the recipient definitely isn't\nwaiting, using a new flag-and-memory-barrier dance. This seems to\nskip around 12% of the kill() calls for \"make check\", and probably\nhelps with some replication configurations that do a lot of\nsignalling. Patch by Andres (with a small memory barrier adjustment\nby me).\n\n0002 gets rid of the latch self-pipe on Linux systems.\n\n0003 does the same on *BSD/macOS systems.\n\nThe idea for 0002 and 0003 is to use a new dedicated signal just for\nlatch wakeups, and keep it blocked (Linux) or ignored (BSD), except\nwhile waiting. There may be other ways to achieve this without\nbringing in a new signal, but it seemed important to leave SIGUSR1\nunblocked for procsignals, and hard to figure out how to multiplex\nwith existing SIGUSR2 users, so for the first attempt at prototyping\nthis I arbitrarily chose SIGURG.",
"msg_date": "Sun, 9 Aug 2020 23:48:33 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Optimising latch signals"
},
{
"msg_contents": "On Sun, Aug 9, 2020 at 11:48 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here are some more experimental patches to reduce system calls.\n>\n> 0001 skips sending signals when the recipient definitely isn't\n> waiting, using a new flag-and-memory-barrier dance. This seems to\n> skip around 12% of the kill() calls for \"make check\", and probably\n> helps with some replication configurations that do a lot of\n> signalling. Patch by Andres (with a small memory barrier adjustment\n> by me).\n>\n> 0002 gets rid of the latch self-pipe on Linux systems.\n>\n> 0003 does the same on *BSD/macOS systems.\n\nHere's a rebase over the recent signal handler/mask reorganisation.\n\nSome thoughts, on looking at this again after a while:\n\n1. It's a bit clunky that pqinitmask() takes a new argument to say\nwhether SIGURG should be blocked; that's because the knowledge of\nwhich latch implementation we're using is private to latch.c, and only\nthe epoll version needs to block it. I wonder how to make that\ntidier.\n2. It's a bit weird to have UnBlockSig (SIGURG remains blocked for\nepoll builds) and UnBlockAllSig (SIGURG is also unblocked). Maybe the\nnaming is confusing.\n3. Maybe it's strange to continue to use overloaded SIGUSR1 on\nnon-epoll, non-kqueue systems; perhaps we should use SIGURG\neverywhere.\n4. As a nano-optimisation, SetLatch() on a latch the current process\nowns might as well use raise(SIGURG) rather than kill(). This is\nnecessary to close races when SetLatch(MyLatch) runs in a signal\nhandler. In other words, although this patch uses signal blocking to\nclose the race when other processes call SetLatch() and send us\nSIGURG, there's still a race if, say, SIGINT is sent to the\ncheckpointer and it sets its own latch from its SIGINT handler\nfunction; in the user context it may be in WaitEventSetWait() having\njust seen latch->is_set == false, and now be about to enter\nepoll_pwait()/kevent() after the signal handler returns, so we need to\ngive it a reason not to go to sleep.\n\nBy way of motivation for removing the self-pipe, and where possible\nalso the signal handler, here is a trace of the WAL writer handling\nthree requests to write data, on a FreeBSD system, with the patch:\n\nkevent(9,0x0,0,{ SIGURG,... },1,{ 0.200000000 }) = 1 (0x1)\npwrite(4,\"\\b\\M-Q\\^E\\0\\^A\\0\\0\\0\\0\\0\\M-/\\^\\\"...,8192,0xaf0000) = 8192 (0x2000)\nkevent(9,0x0,0,{ SIGURG,... },1,{ 0.200000000 }) = 1 (0x1)\npwrite(4,\"\\b\\M-Q\\^E\\0\\^A\\0\\0\\0\\0 \\M-/\\^\\\\0\"...,8192,0xaf2000) = 8192 (0x2000)\nkevent(9,0x0,0,{ SIGURG,... },1,{ 0.200000000 }) = 1 (0x1)\npwrite(4,\"\\b\\M-Q\\^E\\0\\^A\\0\\0\\0\\0`\\M-/\\^\\\\0\"...,8192,0xaf6000) = 8192 (0x2000)\n\nHere is the same thing on unpatched master:\n\nkevent(11,0x0,0,0x801c195b0,1,{ 0.200000000 }) ERR#4 'Interrupted system call'\nSIGNAL 30 (SIGUSR1) code=SI_USER pid=66575 uid=1001\nsigprocmask(SIG_SETMASK,{ SIGUSR1 },0x0) = 0 (0x0)\nwrite(10,\"\\0\",1) = 1 (0x1)\nsigreturn(0x7fffffffc880) EJUSTRETURN\npwrite(4,\"\\b\\M-Q\\^E\\0\\^A\\0\\0\\0\\0`\\M-w)\\0\\0\"...,8192,0xf76000) = 8192 (0x2000)\nkevent(11,0x0,0,{ 9,EVFILT_READ,0x0,0,0x1,0x801c19580 },1,{\n0.200000000 }) = 1 (0x1)\nread(9,\"\\0\",16) = 1 (0x1)\nkevent(11,0x0,0,0x801c195b0,1,{ 0.200000000 }) ERR#4 'Interrupted system call'\nSIGNAL 30 (SIGUSR1) code=SI_USER pid=66575 uid=1001\nsigprocmask(SIG_SETMASK,{ SIGUSR1 },0x0) = 0 (0x0)\nwrite(10,\"\\0\",1) = 1 (0x1)\nsigreturn(0x7fffffffc880) EJUSTRETURN\npwrite(4,\"\\b\\M-Q\\^E\\0\\^A\\0\\0\\0\\0 \\M-y)\\0\\0\"...,8192,0xf92000) = 8192 (0x2000)\nkevent(11,0x0,0,{ 9,EVFILT_READ,0x0,0,0x1,0x801c19580 },1,{\n0.200000000 }) = 1 (0x1)\nSIGNAL 30 (SIGUSR1) code=SI_USER pid=66575 uid=1001\nsigprocmask(SIG_SETMASK,{ SIGUSR1 },0x0) = 0 (0x0)\nwrite(10,\"\\0\",1) = 1 (0x1)\nsigreturn(0x7fffffffc880) EJUSTRETURN\nread(9,\"\\0\\0\",16) = 2 (0x2)\nkevent(11,0x0,0,0x801c195b0,1,{ 0.200000000 }) ERR#4 'Interrupted system call'\nSIGNAL 30 (SIGUSR1) code=SI_USER pid=66575 uid=1001\nsigprocmask(SIG_SETMASK,{ SIGUSR1 },0x0) = 0 (0x0)\nwrite(10,\"\\0\",1) = 1 (0x1)\nsigreturn(0x7fffffffc880) EJUSTRETURN\npwrite(4,\"\\b\\M-Q\\^E\\0\\^A\\0\\0\\0\\0\\0\\M-z)\\0\"...,8192,0xfa0000) = 8192 (0x2000)\nkevent(11,0x0,0,{ 9,EVFILT_READ,0x0,0,0x1,0x801c19580 },1,{\n0.200000000 }) = 1 (0x1)\nread(9,\"\\0\",16) = 1 (0x1)\n\nThe improvement isn't quite as good on Linux, because as far as I can\ntell you still have to have an (empty) signal handler installed and it\nruns (can we find a way to avoid that?), but you still get to skip all\nthe pipe manipulation.",
"msg_date": "Fri, 13 Nov 2020 12:42:23 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising latch signals"
},
{
"msg_contents": "On Fri, Nov 13, 2020 at 12:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> 1. It's a bit clunky that pqinitmask() takes a new argument to say\n> whether SIGURG should be blocked; that's because the knowledge of\n> which latch implementation we're using is private to latch.c, and only\n> the epoll version needs to block it. I wonder how to make that\n> tidier.\n\nI found, I think, a better way: now InitializeLatchSupport() is in\ncharge of managing the signal handler and modifying the signal mask.\n\n> 3. Maybe it's strange to continue to use overloaded SIGUSR1 on\n> non-epoll, non-kqueue systems; perhaps we should use SIGURG\n> everywhere.\n\nFixed.\n\n> The improvement isn't quite as good on Linux, because as far as I can\n> tell you still have to have an (empty) signal handler installed and it\n> runs (can we find a way to avoid that?), but you still get to skip all\n> the pipe manipulation.\n\nI received an off-list clue that we could use a signalfd, which I'd\ndiscounted before because it still has to be drained; in fact the\noverheads saved outweigh that so this seems like a better solution,\nand I'm reliably informed that in a future WAIT_USE_IOURING mode it\nshould be possible to get rid of the read too, so it seems like a good\ndirection to go in.\n\nI'll add this to the next commitfest.",
"msg_date": "Thu, 19 Nov 2020 16:49:05 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising latch signals"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 4:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I'll add this to the next commitfest.\n\nLet's see if this version fixes the Windows compile error and warning\nreported by cfbot.",
"msg_date": "Thu, 26 Nov 2020 10:54:48 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising latch signals"
},
{
"msg_contents": "Here's a new version with two small changes from Andres:\n1. Reorder InitPostmasterChild() slightly to avoid hanging on\nEXEC_BACKEND builds.\n2. Revert v2's use of raise(x) instead of kill(MyProcPid, x); glibc\nmanages to generate 5 syscalls for raise().\n\nI'm planning to commit this soon if there are no objections.",
"msg_date": "Sat, 27 Feb 2021 00:04:55 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising latch signals"
},
{
"msg_contents": "On Sat, Feb 27, 2021 at 12:04 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I'm planning to commit this soon if there are no objections.\n\nPushed, with the addition of an SFD_CLOEXEC flag for the signalfd.\nTime to watch the buildfarm to find out if my speculation about\nillumos is correct...\n\n\n",
"msg_date": "Mon, 1 Mar 2021 14:29:14 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising latch signals"
},
{
"msg_contents": "On Mon, Mar 1, 2021 at 2:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Time to watch the buildfarm to find out if my speculation about\n> illumos is correct...\n\nI just heard that damselfly's host has been decommissioned with no\nimmediate plan for a replacement. That was the last of the\nSolaris-family animals testing master. It may be some time before I\nfind out if my assumptions broke something on that OS...\n\n\n",
"msg_date": "Wed, 3 Mar 2021 09:25:20 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising latch signals"
},
{
"msg_contents": "On 2021-Mar-03, Thomas Munro wrote:\n\n> On Mon, Mar 1, 2021 at 2:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Time to watch the buildfarm to find out if my speculation about\n> > illumos is correct...\n> \n> I just heard that damselfly's host has been decommissioned with no\n> immediate plan for a replacement. That was the last of the\n> Solaris-family animals testing master. It may be some time before I\n> find out if my assumptions broke something on that OS...\n\nHi, I don't know if you realized but we have two new Illumos members\nnow (haddock and hake), and they're both failing initdb on signalfd()\nproblems.\n\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Mon, 8 Mar 2021 20:20:47 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Optimising latch signals"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 12:20 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Mar-03, Thomas Munro wrote:\n> > On Mon, Mar 1, 2021 at 2:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Time to watch the buildfarm to find out if my speculation about\n> > > illumos is correct...\n> >\n> > I just heard that damselfly's host has been decommissioned with no\n> > immediate plan for a replacement. That was the last of the\n> > Solaris-family animals testing master. It may be some time before I\n> > find out if my assumptions broke something on that OS...\n>\n> Hi, I don't know if you realized but we have two new Illumos members\n> now (haddock and hake), and they're both failing initdb on signalfd()\n> problems.\n\nAh, cool. I'd been discussing this with their owner, who saw my\nmessage and wanted to provide replacements. Nice to see these start\nup even though I don't love the colour of their first results. In\noff-list emails, we got as far as determining that signalfd() fails on\nillumos when running inside a zone (= container), because\n/dev/signalfd is somehow not present. Apparently it works when\nrunning on the main host. Tracing revealed that it's trying to open\nthat device and getting ENOENT here:\n\nrunning bootstrap script ... FATAL: XX000: signalfd() failed\nLOCATION: InitializeLatchSupport, latch.c:279\n\nI'll wait a short time while he tries to see if that can be fixed (I\nhave no clue if it's a configuration problem in some kind of zone\ncreation scripts, or a bug, or what). If not, my fallback plan will\nbe to change it to default to WAIT_USE_POLL on that OS until it can be\nfixed.\n\n\n",
"msg_date": "Tue, 9 Mar 2021 13:09:49 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising latch signals"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 1:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Mar 9, 2021 at 12:20 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > Hi, I don't know if you realized but we have two new Illumos members\n> > now (haddock and hake), and they're both failing initdb on signalfd()\n> > problems.\n\n> I'll wait a short time while he tries to see if that can be fixed (I\n\nThey're green now. For the record: https://www.illumos.org/issues/13613\n\n\n",
"msg_date": "Wed, 10 Mar 2021 11:18:10 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising latch signals"
}
] |
[
{
"msg_contents": "I use IMPORT FOREIGN SCHEMA a bit to set up systems for testing. But not\nenough that I can ever remember whether INTO or FROM SERVER comes first in\nthe syntax.\n\nHere is an improvement to the tab completion, so I don't have to keep\nlooking it up in the docs.\n\nIt accidentally (even before this patch) completes \"IMPORT FOREIGN SCHEMA\"\nwith a list of local schemas. This is probably wrong, but I find this\nconvenient as I often do this in a loop-back setup where the list of\nforeign schema would be the same as the local ones. So I don't countermand\nthat behavior here.\n\nCheers,\n\nJeff",
"msg_date": "Sun, 9 Aug 2020 11:46:42 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "tab completion of IMPORT FOREIGN SCHEMA"
},
{
"msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> It accidentally (even before this patch) completes \"IMPORT FOREIGN SCHEMA\"\n> with a list of local schemas. This is probably wrong, but I find this\n> convenient as I often do this in a loop-back setup where the list of\n> foreign schema would be the same as the local ones. So I don't countermand\n> that behavior here.\n\nI don't see how psql could obtain a \"real\" list of foreign schemas\nfrom an arbitrary FDW, even if it magically knew which server the\nuser would specify later in the command. So this behavior seems fine.\nIt has some usefulness, while not completing at all would have none.\n\nIt might be a good idea to figure out where that completion is\nhappening and annotate it about this point.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 09 Aug 2020 12:33:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tab completion of IMPORT FOREIGN SCHEMA"
},
{
"msg_contents": "On Sun, Aug 09, 2020 at 12:33:43PM -0400, Tom Lane wrote:\n> I don't see how psql could obtain a \"real\" list of foreign schemas\n> from an arbitrary FDW, even if it magically knew which server the\n> user would specify later in the command. So this behavior seems fine.\n> It has some usefulness, while not completing at all would have none.\n\nSounds fine to me as well. The LIMIT TO and EXCEPT clauses are\noptional, so using TailMatches() looks fine.\n\n+ else if (TailMatches(\"FROM\", \"SERVER\", MatchAny, \"INTO\", MatchAny))\n+ COMPLETE_WITH(\"OPTIONS\")\nShouldn't you complete with \"OPTIONS (\" here?\n\nIt would be good to complete with \"FROM SERVER\" after specifying\nEXCEPT or LIMIT TO, you can just use \"(*)\" to include the list of\ntables in the list of elements checked.\n--\nMichael",
"msg_date": "Mon, 17 Aug 2020 14:15:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: tab completion of IMPORT FOREIGN SCHEMA"
},
{
"msg_contents": "On Mon, Aug 17, 2020 at 02:15:34PM +0900, Michael Paquier wrote:\n> Sounds fine to me as well. The LIMIT TO and EXCEPT clauses are\n> optional, so using TailMatches() looks fine.\n> \n> + else if (TailMatches(\"FROM\", \"SERVER\", MatchAny, \"INTO\", MatchAny))\n> + COMPLETE_WITH(\"OPTIONS\")\n> Shouldn't you complete with \"OPTIONS (\" here?\n> \n> It would be good to complete with \"FROM SERVER\" after specifying\n> EXCEPT or LIMIT TO, you can just use \"(*)\" to include the list of\n> tables in the list of elements checked.\n\nI have completed the patch with those parts as per the attached. If\nthere are any objections or extra opinions, please feel free.\n--\nMichael",
"msg_date": "Tue, 15 Sep 2020 14:56:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: tab completion of IMPORT FOREIGN SCHEMA"
},
{
"msg_contents": "On Tue, Sep 15, 2020 at 02:56:40PM +0900, Michael Paquier wrote:\n> I have completed the patch with those parts as per the attached. If\n> there are any objections or extra opinions, please feel free.\n\nAnd done with 7307df1.\n--\nMichael",
"msg_date": "Thu, 17 Sep 2020 11:58:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: tab completion of IMPORT FOREIGN SCHEMA"
}
] |
[
{
"msg_contents": "Is there a way that we can show information about nested queries in\npg_stat_activity? It's often inconvenient for users when somebody's\nexecuting a function and it doesn't seem to be finishing as quickly as\nanticipated. You can't really tell where in that function things broke\ndown. There are a couple of possible ways to attack this problem, but\nthe one that I like best is to just try to advertise the query text of\nall the nested queries that are in progress rather than only the\ntop-level query. This runs up against the problem that we have only a\nfixed-length buffer with which to work, but that doesn't seem like a\nhuge problem: if the outer query fills the buffer, nested queries\nwon't be advertised. If not, the first level of nested query can be\nadvertised using the space remaining. If there's still space left,\nthis can be repeated for multiple levels of nested queries until we\nrun out of bytes. This requires some way of separating one query\nstring from the next, but that seems like a pretty solvable problem.\nIt also requires figuring out how this would show up in the output of\npg_stat_activity, which I'm not quite sure about. And it might have\nperformance issues too, for some use cases, but it could be an\noptional feature, so that people who don't want to pay the cost of\nupdating the pg_stat_activity information more frequently do not need\nto do so.\n\nI'm curious to hear if other people agree that this is a problem, what\nthey think about the above ideas for improving things, and if they've\ngot any other suggestions.\n\nThanks,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 10 Aug 2020 10:37:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "nested queries vs. pg_stat_activity"
},
{
"msg_contents": "On Mon, Aug 10, 2020 at 4:37 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Is there a way that we can show information about nested queries in\n> pg_stat_activity? It's often inconvenient for users when somebody's\n> executing a function and it doesn't seem to be finishing as quickly as\n> anticipated. You can't really tell where in that function things broke\n> down. There are a couple of possible ways to attack this problem, but\n> the one that I like best is to just try to advertise the query text of\n> all the nested queries that are in progress rather than only the\n> top-level query. This runs up against the problem that we have only a\n> fixed-length buffer with which to work, but that doesn't seem like a\n> huge problem: if the outer query fills the buffer, nested queries\n> won't be advertised. If not, the first level of nested query can be\n> advertised using the space remaining. If there's still space left,\n> this can be repeated for multiple levels of nested queries until we\n> run out of bytes. This requires some way of separating one query\n> string from the next, but that seems like a pretty solvable problem.\n> It also requires figuring out how this would show up in the output of\n> pg_stat_activity, which I'm not quite sure about.
And it might have\n> performance issues too, for some use cases, but it could be an\n> optional feature, so that people who don't want to pay the cost of\n> updating the pg_stat_activity information more frequently do not need\n> to do so.\n>\n> I'm curious to hear if other people agree that this is a problem, what\n> they think about the above ideas for improving things, and if they've\n> got any other suggestions.\n>\n\nThis sounds very similar to the just-raised problem of multiple\nsemicolon-separated queries (see\nhttps://www.postgresql.org/message-id/030a4123-550a-9dc1-d326-3cd5c46bcc59%40amazon.com).\nThey should definitely be considered in the same context, so we don't end\nup creating two incompatible ideas.\n\nAnother idea around this, which I haven't really thought through, but\nfigured I'd throw out anyway. It doesn't have to be in pg_stat_activity, if\nwe can access it. E.g. we could have something like \"SELECT * FROM\npg_querystack(<backend pid>)\". In fact, I've also *often* wanted something\nlike \"SELECT * FROM pg_queryhistory(<backend pid>)\" to see the last 2-3\nthings it did *before* reaching the current point, as a way of identifying\nwhere in an application this happened. That need has led to really ugly\nhacks like https://github.com/mhagander/pg_commandhistory.\n\nThis is not information you'd need all that often, so I think it'd be\nperfectly reasonable to say that you pay a higher price when you get it, if\nwe can keep down the cost of keeping it updated. Of course, this reduces it\ndown to \"how can we get it\".
If each backend keeps this information\nlocally, we could send it a signal to say something like \"dump what you\nhave now into shared memory over here\" and read it from there -- which\nwould be cleaner than my hack which dumps it to the log.\n\nI'm sure I'm missing many things in that, but as a wild idea :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 10 Aug 2020 16:44:48 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: nested queries vs. pg_stat_activity"
}
] |
[
{
"msg_contents": "Hello,\n\nAn other solution is to expose nested queryid, and to join it with pg_stat_statements.\nActual development trying to add queryid to pg_stat_activity isn't helpfull, because it is only exposing top level one.\nExtension pg_stat_sql_plans (github) propose a function called pg_backend_queryid(pid),\nthat gives the expected queryid (that is stored in shared memory for each backend) ...\n\nRegards\nPAscal",
"msg_date": "Mon, 10 Aug 2020 16:51:40 +0000",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "nested queries vs. pg_stat_activity"
},
{
"msg_contents": "On Mon, Aug 10, 2020 at 12:51 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n> An other solution is to expose nested queryid, and to join it with pg_stat_statements.\n> Actual development trying to add queryid to pg_stat_activity isn't helpfull, because it is only exposing top level one.\n> Extension pg_stat_sql_plans (github) propose a function called pg_backend_queryid(pid),\n> that gives the expected queryid (that is stored in shared memory for each backend) ...\n\nThat'd help people using pg_stat_statements, but not others.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 10 Aug 2020 15:51:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: nested queries vs. pg_stat_activity"
},
{
"msg_contents": "On Mon, Aug 10, 2020 at 9:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Aug 10, 2020 at 12:51 PM legrand legrand\n> <legrand_legrand@hotmail.com> wrote:\n> > An other solution is to expose nested queryid, and to join it with\n> pg_stat_statements.\n> > Actual development trying to add queryid to pg_stat_activity isn't\n> helpfull, because it is only exposing top level one.\n> > Extension pg_stat_sql_plans (github) propose a function called\n> pg_backend_queryid(pid),\n> > that gives the expected queryid (that is stored in shared memory for\n> each backend) ...\n>\n> That'd help people using pg_stat_statements, but not others.\n>\n\nWould it even solve the problem for them? pg_stat_statements collects\naggregate stats and not a view of what's happening right now -- so it'd be\nmixing two different types of values. And it would get worse if the same\nthing is executed multiple times concurrently.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 10 Aug 2020 22:09:41 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: nested queries vs. pg_stat_activity"
},
{
"msg_contents": "On Mon, Aug 10, 2020 at 4:09 PM Magnus Hagander <magnus@hagander.net> wrote:\n> Would it even solve the problem for them? pg_stat_statements collects aggregate stats and not a view of what's happening right now -- so it'd be mixing two different types of values. And it would get worse if the same thing is executed multiple times concurrently.\n\nTrue. You could find that you have a queryId that had already been\nevicted from the table.\n\nI think it's better to look for a more direct solution to this problem.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 10 Aug 2020 16:21:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: nested queries vs. pg_stat_activity"
},
{
"msg_contents": "Hi\n\npo 10. 8. 2020 v 22:21 odesílatel Robert Haas <robertmhaas@gmail.com>\nnapsal:\n\n> On Mon, Aug 10, 2020 at 4:09 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> > Would it even solve the problem for them? pg_stat_statements collects\n> aggregate stats and not a view of what's happening right now -- so it'd be\n> mixing two different types of values. And it would get worse if the same\n> thing is executed multiple times concurrently.\n>\n> True. You could find that you have a queryId that had already been\n> evicted from the table.\n>\n> I think it's better to look for a more direct solution to this problem.\n>\n\nI am thinking about an extension (but it can be in core too) that does copy\nquery string and execution plan to shared memory to separate buffers per\nsession (before query start). It should eliminate a problem with\nperformance with locks\n\nThere can be two functions\n\nshow_query(pid int, \"top\" bool default true) .. it shows query without\ntruncating\nshow_plan(pid int, \"top\" bool default true, format text default \"text\")\n\nWhen the argument \"top\" is false, then you can see the current query.\n\nRegards\n\nPavel\n\n\n\n\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>",
"msg_date": "Mon, 10 Aug 2020 23:15:50 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: nested queries vs. pg_stat_activity"
}
] |
[
{
"msg_contents": "Last week, James reported to us that after promoting a replica, some\nseqscan was taking a huge amount of time; on investigation he saw that\nthere was a high rate of FPI_FOR_HINT wal messages by the seqscan.\nLooking closely at the generated traffic, HEAP_XMIN_COMMITTED was being\nset on some tuples.\n\nNow this may seem obvious to some as a drawback of the current system,\nbut I was taken by surprise. The problem was simply that when a page is\nexamined by a seqscan, we do HeapTupleSatisfiesVisibility of each tuple\nin isolation; and for each tuple we call SetHintBits(). And only the\nfirst time the FPI happens; by the time we get to the second tuple, the\npage is already dirty, so there's no need to emit an FPI. But the FPI\nwe sent only had the bit on the first tuple ... so the standby will not\nhave the bit set for any subsequent tuple. And on promotion, the\nstandby will have to have the bits set for all those tuples, unless you\nhappened to dirty the page again later for other reasons.\n\nSo if you have some table where tuples gain hint bits in bulk, and\nrarely modify the pages afterwards, and promote before those pages are\nfrozen, then you may end up with a massive amount of pages that will\nneed hinting after the promote, which can become troublesome.\n\nAttached is a TAP file that reproduces the problem. It always fails,\nbut in the log file you can see the tuples in the primary are all hinted\ncommitted, while on the standby only the first one is hinted committed.\n\n\n\nOne simple idea to try to forestall this problem would be to modify the\nalgorithm so that all tuples are scanned and hinted if the page is going\nto be dirtied -- then send a single FPI setting bits for all tuples,\ninstead of just on the first tuple.\n\n-- \nÁlvaro Herrera",
"msg_date": "Mon, 10 Aug 2020 18:56:37 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndQuadrant.com>",
"msg_from_op": true,
"msg_subject": "massive FPI_FOR_HINT load after promote"
},
{
"msg_contents": "On Tue, 11 Aug 2020 at 07:56, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> Last week, James reported to us that after promoting a replica, some\n> seqscan was taking a huge amount of time; on investigation he saw that\n> there was a high rate of FPI_FOR_HINT wal messages by the seqscan.\n> Looking closely at the generated traffic, HEAP_XMIN_COMMITTED was being\n> set on some tuples.\n>\n> Now this may seem obvious to some as a drawback of the current system,\n> but I was taken by surprise. The problem was simply that when a page is\n> examined by a seqscan, we do HeapTupleSatisfiesVisibility of each tuple\n> in isolation; and for each tuple we call SetHintBits(). And only the\n> first time the FPI happens; by the time we get to the second tuple, the\n> page is already dirty, so there's no need to emit an FPI. But the FPI\n> we sent only had the bit on the first tuple ... so the standby will not\n> have the bit set for any subsequent tuple. And on promotion, the\n> standby will have to have the bits set for all those tuples, unless you\n> happened to dirty the page again later for other reasons.\n>\n> So if you have some table where tuples gain hint bits in bulk, and\n> rarely modify the pages afterwards, and promote before those pages are\n> frozen, then you may end up with a massive amount of pages that will\n> need hinting after the promote, which can become troublesome.\n\nDid the case you observed not use hot standby?
I thought the impact of\nthis issue could be somewhat alleviated in hot standby cases since\nread queries on the hot standby can set hint bits.\n\n>\n> One simple idea to try to forestall this problem would be to modify the\n> algorithm so that all tuples are scanned and hinted if the page is going\n> to be dirtied -- then send a single FPI setting bits for all tuples,\n> instead of just on the first tuple.\n>\n\nThis idea seems good to me but I'm concerned a bit that the\nprobability of concurrent processes writing FPI for the same page\nmight get higher since concurrent processes could set hint bits at the\nsame time. If it's true, I wonder if we can advertise hint bits are\nbeing updated to prevent concurrent FPI writes for the same page.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 11 Aug 2020 15:55:11 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: massive FPI_FOR_HINT load after promote"
},
{
"msg_contents": "On Tue, Aug 11, 2020 at 2:55 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 11 Aug 2020 at 07:56, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > Last week, James reported to us that after promoting a replica, some\n> > seqscan was taking a huge amount of time; on investigation he saw that\n> > there was a high rate of FPI_FOR_HINT wal messages by the seqscan.\n> > Looking closely at the generated traffic, HEAP_XMIN_COMMITTED was being\n> > set on some tuples.\n> >\n> > Now this may seem obvious to some as a drawback of the current system,\n> > but I was taken by surprise. The problem was simply that when a page is\n> > examined by a seqscan, we do HeapTupleSatisfiesVisibility of each tuple\n> > in isolation; and for each tuple we call SetHintBits(). And only the\n> > first time the FPI happens; by the time we get to the second tuple, the\n> > page is already dirty, so there's no need to emit an FPI. But the FPI\n> > we sent only had the bit on the first tuple ... so the standby will not\n> > have the bit set for any subsequent tuple. And on promotion, the\n> > standby will have to have the bits set for all those tuples, unless you\n> > happened to dirty the page again later for other reasons.\n> >\n> > So if you have some table where tuples gain hint bits in bulk, and\n> > rarely modify the pages afterwards, and promote before those pages are\n> > frozen, then you may end up with a massive amount of pages that will\n> > need hinting after the promote, which can become troublesome.\n>\n> Did the case you observed not use hot standby?
I thought the impact of\n> this issue could be somewhat alleviated in hot standby cases since\n> read queries on the hot standby can set hint bits.\n\nWe do have hot standby enabled, and there are sometimes large queries\nthat may do seq scans that run against a replica, but there are\nmultiple replicas (and each one would have to have the bits set), and\na given replica that gets promoted in our topology isn't guaranteed to\nbe one that's seen those reads.\n\nJames\n\n\n",
"msg_date": "Tue, 11 Aug 2020 12:53:30 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: massive FPI_FOR_HINT load after promote"
},
{
"msg_contents": "On 2020-Aug-11, Masahiko Sawada wrote:\n\n> On Tue, 11 Aug 2020 at 07:56, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > So if you have some table where tuples gain hint bits in bulk, and\n> > rarely modify the pages afterwards, and promote before those pages are\n> > frozen, then you may end up with a massive amount of pages that will\n> > need hinting after the promote, which can become troublesome.\n> \n> Did the case you observed not use hot standby? I thought the impact of\n> this issue could be somewhat alleviated in hot standby cases since\n> read queries on the hot standby can set hint bits.\n\nOh, interesting, I didn't know that. However, it's not 100% true: the\nstandby can set the bit in shared buffers, but it does not mark the page\ndirty. So when the page is evicted, those bits that were set are lost.\nThat's not great. See MarkBufferDirtyHint:\n\n\t\t/*\n\t\t * If we need to protect hint bit updates from torn writes, WAL-log a\n\t\t * full page image of the page. This full page image is only necessary\n\t\t * if the hint bit update is the first change to the page since the\n\t\t * last checkpoint.\n\t\t *\n\t\t * We don't check full_page_writes here because that logic is included\n\t\t * when we call XLogInsert() since the value changes dynamically.\n\t\t */\n\t\tif (XLogHintBitIsNeeded() &&\n\t\t\t(pg_atomic_read_u32(&bufHdr->state) & BM_PERMANENT))\n\t\t{\n\t\t\t/*\n\t\t\t * If we must not write WAL, due to a relfilenode-specific\n\t\t\t * condition or being in recovery, don't dirty the page.
We can\n\t\t\t * set the hint, just not dirty the page as a result so the hint\n\t\t\t * is lost when we evict the page or shutdown.\n\t\t\t *\n\t\t\t * See src/backend/storage/page/README for longer discussion.\n\t\t\t */\n\t\t\tif (RecoveryInProgress() ||\n\t\t\t\tRelFileNodeSkippingWAL(bufHdr->tag.rnode))\n\t\t\t\treturn;\n\n\n> > One simple idea to try to forestall this problem would be to modify the\n> > algorithm so that all tuples are scanned and hinted if the page is going\n> > to be dirtied -- then send a single FPI setting bits for all tuples,\n> > instead of just on the first tuple.\n> \n> This idea seems good to me but I'm concerned a bit that the\n> probability of concurrent processes writing FPI for the same page\n> might get higher since concurrent processes could set hint bits at the\n> same time. If it's true, I wonder if we can advertise hint bits are\n> being updated to prevent concurrent FPI writes for the same page.\n\nHmm, a very good point. Sounds like we would need to obtain an\nexclusive lock on the page .. but that would be very problematic.\n\nI don't have a concrete proposal to solve this problem ATM, but it's\nmore and more looking like it's a serious problem.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 11 Aug 2020 13:41:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: massive FPI_FOR_HINT load after promote"
},
{
"msg_contents": "On Wed, 12 Aug 2020 at 02:42, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Aug-11, Masahiko Sawada wrote:\n>\n> > On Tue, 11 Aug 2020 at 07:56, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> > > So if you have some table where tuples gain hint bits in bulk, and\n> > > rarely modify the pages afterwards, and promote before those pages are\n> > > frozen, then you may end up with a massive amount of pages that will\n> > > need hinting after the promote, which can become troublesome.\n> >\n> > Did the case you observed not use hot standby? I thought the impact of\n> > this issue could be somewhat alleviated in hot standby cases since\n> > read queries on the hot standby can set hint bits.\n>\n> Oh, interesting, I didn't know that. However, it's not 100% true: the\n> standby can set the bit in shared buffers, but it does not mark the page\n> dirty. So when the page is evicted, those bits that were set are lost.\n> That's not great. See MarkBufferDirtyHint:\n\nYeah, you're right.\n\n>\n> > > One simple idea to try to forestall this problem would be to modify the\n> > > algorithm so that all tuples are scanned and hinted if the page is going\n> > > to be dirtied -- then send a single FPI setting bits for all tuples,\n> > > instead of just on the first tuple.\n> >\n> > This idea seems good to me but I'm concerned a bit that the\n> > probability of concurrent processes writing FPI for the same page\n> > might get higher since concurrent processes could set hint bits at the\n> > same time. If it's true, I wonder if we can advertise hint bits are\n> > being updated to prevent concurrent FPI writes for the same page.\n>\n> Hmm, a very good point. Sounds like we would need to obtain an\n> exclusive lock on the page .. but that would be very problematic.\n>\n\nI think that when the page is going to be dirty only updating hint\nbits on the page and writing FPI need to be performed exclusively.
So\nperhaps we can add a flag, say BM_UPDATE_HINTBITS, to buffer\ndescriptor indicating the hint bits are being updated.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\n\n\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 14 Aug 2020 11:33:48 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: massive FPI_FOR_HINT load after promote"
},
{
"msg_contents": "On Mon, 10 Aug 2020 at 23:56, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> The problem was simply that when a page is\n> examined by a seqscan, we do HeapTupleSatisfiesVisibility of each tuple\n> in isolation; and for each tuple we call SetHintBits(). And only the\n> first time the FPI happens; by the time we get to the second tuple, the\n> page is already dirty, so there's no need to emit an FPI. But the FPI\n> we sent only had the bit on the first tuple ... so the standby will not\n> have the bit set for any subsequent tuple. And on promotion, the\n> standby will have to have the bits set for all those tuples, unless you\n> happened to dirty the page again later for other reasons.\n\nWhich probably means that pg_rewind is broken because it won't be able\nto rewind correctly.\n\n> One simple idea to try to forestall this problem would be to modify the\n> algorithm so that all tuples are scanned and hinted if the page is going\n> to be dirtied -- then send a single FPI setting bits for all tuples,\n> instead of just on the first tuple.\n\nThis would make latency much worse for non seqscan cases.\n\nCertainly for seqscans it would make sense to emit a message that sets\nall tuples at once, or possibly emit an FPI and then follow that with\na second message that sets all other hints on the page.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\nMission Critical Databases\n\n\n",
"msg_date": "Fri, 14 Aug 2020 08:55:05 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: massive FPI_FOR_HINT load after promote"
}
] |
[
{
"msg_contents": "I previously[1] posted a patch to have multiple CREATE INDEX CONCURRENTLY\nnot wait for the slowest of them. This is an update of that, with minor\nconflicts fixed and a fresh thread.\n\nTo recap: currently, any CREATE INDEX CONCURRENTLY will wait for all\nother CICs running concurrently to finish, because they can't be\ndistinguished amidst other old snapshots. We can change things by\nhaving CIC set a special flag in PGPROC (like PROC_IN_VACUUM) indicating\nthat it's doing CIC; other CICs will see that flag and will know that\nthey don't need to wait for those processes. With this, CIC on small\ntables don't have to wait for CIC on large tables to complete.\n\n[1] https://postgr.es/m/20200805021109.GA9079@alvherre.pgsql\n\n\n-- \n�lvaro Herrera http://www.linkedin.com/in/alvherre\n\"Escucha y olvidar�s; ve y recordar�s; haz y entender�s\" (Confucio)",
"msg_date": "Mon, 10 Aug 2020 19:38:15 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "+ James Coleman\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 10 Aug 2020 19:41:20 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> To recap: currently, any CREATE INDEX CONCURRENTLY will wait for all\n> other CICs running concurrently to finish, because they can't be\n> distinguished amidst other old snapshots. We can change things by\n> having CIC set a special flag in PGPROC (like PROC_IN_VACUUM) indicating\n> that it's doing CIC; other CICs will see that flag and will know that\n> they don't need to wait for those processes. With this, CIC on small\n> tables don't have to wait for CIC on large tables to complete.\n\nHm. +1 for improving this, if we can, but ...\n\nIt seems clearly unsafe to ignore a CIC that is in active index-building;\na snapshot held for that purpose is just as real as any other. It *might*\nbe all right to ignore a CIC that is just waiting, but you haven't made\nany argument in the patch comments as to why that's safe either.\n(Moreover, at the points where we're just waiting, I don't think we have\na snapshot, so another CIC's WaitForOlderSnapshots shouldn't wait for us\nanyway.)\n\nActually, it doesn't look like you've touched the comments at all.\nWaitForOlderSnapshots' header comment has a long explanation of why\nit's safe to ignore certain processes. That certainly needs to be\nupdated by any patch that's going to change the rules.\n\nBTW, what about REINDEX CONCURRENTLY?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Aug 2020 20:37:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On Mon, Aug 10, 2020 at 8:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > To recap: currently, any CREATE INDEX CONCURRENTLY will wait for all\n> > other CICs running concurrently to finish, because they can't be\n> > distinguished amidst other old snapshots. We can change things by\n> > having CIC set a special flag in PGPROC (like PROC_IN_VACUUM) indicating\n> > that it's doing CIC; other CICs will see that flag and will know that\n> > they don't need to wait for those processes. With this, CIC on small\n> > tables don't have to wait for CIC on large tables to complete.\n>\n> Hm. +1 for improving this, if we can, but ...\n>\n> It seems clearly unsafe to ignore a CIC that is in active index-building;\n> a snapshot held for that purpose is just as real as any other. It *might*\n> be all right to ignore a CIC that is just waiting, but you haven't made\n> any argument in the patch comments as to why that's safe either.\n> (Moreover, at the points where we're just waiting, I don't think we have\n> a snapshot, so another CIC's WaitForOlderSnapshots shouldn't wait for us\n> anyway.)\n\nWhy is a CIC in active index-building something we need to wait for?\nWouldn't it fall under a similar kind of logic to the other snapshot\ntypes we can explicitly ignore? CIC can't be run in a manual\ntransaction, so the snapshot it holds won't be used to perform\narbitrary operations (i.e., the reason why a manual ANALYZE can't be\nignored).\n\n> Actually, it doesn't look like you've touched the comments at all.\n> WaitForOlderSnapshots' header comment has a long explanation of why\n> it's safe to ignore certain processes. That certainly needs to be\n> updated by any patch that's going to change the rules.\n\nAgreed that the comment needs to be updated to discuss the\n(im)possibility of arbitrary operations within a snapshot held by CIC.\n\nJames\n\n\n",
"msg_date": "Mon, 10 Aug 2020 21:26:26 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> Why is a CIC in active index-building something we need to wait for?\n> Wouldn't it fall under a similar kind of logic to the other snapshot\n> types we can explicitly ignore? CIC can't be run in a manual\n> transaction, so the snapshot it holds won't be used to perform\n> arbitrary operations (i.e., the reason why a manual ANALYZE can't be\n> ignored).\n\nExpression indexes that call user-defined functions seem like a\npretty serious risk factor for that argument. Those are exactly\nthe same expressions that ANALYZE will evaluate, as a result of\nwhich we judge it unsafe to ignore. Why would CIC be different?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Aug 2020 11:14:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On Tue, Aug 11, 2020 at 11:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Coleman <jtc331@gmail.com> writes:\n> > Why is a CIC in active index-building something we need to wait for?\n> > Wouldn't it fall under a similar kind of logic to the other snapshot\n> > types we can explicitly ignore? CIC can't be run in a manual\n> > transaction, so the snapshot it holds won't be used to perform\n> > arbitrary operations (i.e., the reason why a manual ANALYZE can't be\n> > ignored).\n>\n> Expression indexes that call user-defined functions seem like a\n> pretty serious risk factor for that argument. Those are exactly\n> the same expressions that ANALYZE will evaluate, as a result of\n> which we judge it unsafe to ignore. Why would CIC be different?\n\nThe comments for WaitForOlderSnapshots() don't discuss that problem at\nall; as far as ANALYZE goes they only say:\n\n* Manual ANALYZEs, however, can't be excluded because they\n* might be within transactions that are going to do arbitrary operations\n* later.\n\nBut nonetheless over in the thread Álvaro linked to we'd discussed\nthese issues already. Andres in [1] and [2] believed that:\n\n> For the snapshot waits we can add a procarray flag\n> (alongside PROCARRAY_VACUUM_FLAG) indicating that the backend is\n> doing. Which WaitForOlderSnapshots() can then use to ignore those CICs,\n> which is safe, because those transactions definitely don't insert into\n> relations targeted by CIC. The change to WaitForOlderSnapshots() would\n> just be to pass the new flag to GetCurrentVirtualXIDs, I think.\n\nand\n\n> What I was thinking of was a new flag, with a distinct value from\n> PROC_IN_VACUUM. It'd currently just be specified in the\n> GetCurrentVirtualXIDs() calls in WaitForOlderSnapshots(). That'd avoid\n> needing to wait for other CICs on different relations. 
Since CIC is not\n> permitted on system tables, and CIC doesn't do DML on normal tables, it\n> seems fairly obviously correct to exclude other CICs.\n\nIn [3] I'd brought up that a function index can do arbitrary\noperations (including, terribly, a query of another table), and Andres\n(in [4]) noted that:\n\n> Well, even if we consider this an actual problem, we could still use\n> PROC_IN_CIC for non-expression non-partial indexes (index operator\n> themselves better ensure this isn't a problem, or they're ridiculously\n> broken already - they can get called during vacuum).\n\nBut went on to describe how this is a problem with all expression\nindexes (even if those expressions don't do dangerous things) because\nof syscache lookups, but that ideally for expression indexes we'd have\na way to use a different (or more frequently taken) snapshot for the\npurpose of computing those expressions. That's a significantly more\ninvolved patch though.\n\nSo from what I understand, everything that I'd claimed in my previous\nmessage still holds true for non-expression/non-partial indexes. Is\nthere something else I'm missing?\n\nJames\n\n1: https://www.postgresql.org/message-id/20200325191935.jjhdg2zy5k7ath5v%40alap3.anarazel.de\n2: https://www.postgresql.org/message-id/20200325195841.gq4hpl25t6pxv3gl%40alap3.anarazel.de\n3: https://www.postgresql.org/message-id/CAAaqYe_fveT_tvKonVt1imujOURUUMrydGeaBGahqLLy9D-REw%40mail.gmail.com\n4: https://www.postgresql.org/message-id/20200416221207.wmnzbxvjqqpazeob%40alap3.anarazel.de\n\n\n",
"msg_date": "Tue, 11 Aug 2020 14:42:26 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On 2020-Aug-11, James Coleman wrote:\n\n> In [3] I'd brought up that a function index can do arbitrary\n> operations (including, terribly, a query of another table), and Andres\n> (in [4]) noted that:\n> \n> > Well, even if we consider this an actual problem, we could still use\n> > PROC_IN_CIC for non-expression non-partial indexes (index operator\n> > themselves better ensure this isn't a problem, or they're ridiculously\n> > broken already - they can get called during vacuum).\n> \n> But went on to describe how this is a problem with all expression\n> indexes (even if those expressions don't do dangerous things) because\n> of syscache lookups, but that ideally for expression indexes we'd have\n> a way to use a different (or more frequently taken) snapshot for the\n> purpose of computing those expressions. That's a significantly more\n> involved patch though.\n\nSo the easy first patch here is to add the flag as PROC_IN_SAFE_CIC,\nwhich is set only for CIC when it's used for indexes that are neither\non expressions nor partial. Then ignore those in WaitForOlderSnapshots.\nThe attached patch does it that way. (Also updated per recent\nconflicts).\n\nI did not set the flag in REINDEX CONCURRENTLY, but as I understand it\ncan be done too, since in essence it's the same thing as a CIC from a\nsnapshot management point of view.\n\nAlso, per [1], ISTM this flag could be used to tell lazy VACUUM to\nignore the Xmin of this process too, which the previous formulation\n(where all CICs were so marked) could not. This patch doesn't do that\nyet, but it seems the natural next step to take.\n\n[1] https://postgr.es/m/20191101203310.GA12239@alvherre.pgsql\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 19 Aug 2020 14:16:46 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On Wed, Aug 19, 2020 at 02:16:46PM -0400, Alvaro Herrera wrote:\n> I did not set the flag in REINDEX CONCURRENTLY, but as I understand it\n> can be done too, since in essence it's the same thing as a CIC from a\n> snapshot management point of view.\n\nYes, I see no problems for REINDEX CONCURRENTLY as well as long as\nthere are no predicates and expressions involved. The transactions\nthat should be patched are all started in ReindexRelationConcurrently.\nThe transaction of index_concurrently_swap() cannot set up that\nthough. Only thing to be careful is to make sure that safe_flag is\ncorrect depending on the list of indexes worked on.\n\n> Also, per [1], ISTM this flag could be used to tell lazy VACUUM to\n> ignore the Xmin of this process too, which the previous formulation\n> (where all CICs were so marked) could not. This patch doesn't do that\n> yet, but it seems the natural next step to take.\n> \n> [1] https://postgr.es/m/20191101203310.GA12239@alvherre.pgsql\n\nCould we consider renaming vacuumFlags? With more flags associated to\na PGPROC entry that are not related to vacuum, the current naming\nmakes things confusing. Something like statusFlags could fit better\nin the picture?\n--\nMichael",
"msg_date": "Thu, 20 Aug 2020 15:11:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "> On Thu, Aug 20, 2020 at 03:11:19PM +0900, Michael Paquier wrote:\n> On Wed, Aug 19, 2020 at 02:16:46PM -0400, Alvaro Herrera wrote:\n> > I did not set the flag in REINDEX CONCURRENTLY, but as I understand it\n> > can be done too, since in essence it's the same thing as a CIC from a\n> > snapshot management point of view.\n>\n> Yes, I see no problems for REINDEX CONCURRENTLY as well as long as\n> there are no predicates and expressions involved. The transactions\n> that should be patched are all started in ReindexRelationConcurrently.\n> The transaction of index_concurrently_swap() cannot set up that\n> though. Only thing to be careful is to make sure that safe_flag is\n> correct depending on the list of indexes worked on.\n\nHi,\n\nAfter looking through the thread and reading the patch it seems good,\nand there are only few minor questions:\n\n* Doing the same for REINDEX CONCURRENTLY, which does make sense. In\n fact it's already mentioned in the commentaries as done, which a bit\n confusing.\n\n* Naming, to be more precise what suggested Michael:\n\n> Could we consider renaming vacuumFlags? With more flags associated to\n> a PGPROC entry that are not related to vacuum, the current naming\n> makes things confusing. Something like statusFlags could fit better\n> in the picture?\n\n which sounds reasonable, and similar one about flag name\n PROC_IN_SAFE_CIC - if it covers both CREATE INDEX/REINDEX CONCURRENTLY\n maybe just PROC_IN_SAFE_IC?\n\nAny plans about those questions? I can imagine that are the only missing\nparts.\n\n\n",
"msg_date": "Tue, 3 Nov 2020 19:14:47 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "> On Tue, Nov 03, 2020 at 07:14:47PM +0100, Dmitry Dolgov wrote:\n> > On Thu, Aug 20, 2020 at 03:11:19PM +0900, Michael Paquier wrote:\n> > On Wed, Aug 19, 2020 at 02:16:46PM -0400, Alvaro Herrera wrote:\n> > > I did not set the flag in REINDEX CONCURRENTLY, but as I understand it\n> > > can be done too, since in essence it's the same thing as a CIC from a\n> > > snapshot management point of view.\n> >\n> > Yes, I see no problems for REINDEX CONCURRENTLY as well as long as\n> > there are no predicates and expressions involved. The transactions\n> > that should be patched are all started in ReindexRelationConcurrently.\n> > The transaction of index_concurrently_swap() cannot set up that\n> > though. Only thing to be careful is to make sure that safe_flag is\n> > correct depending on the list of indexes worked on.\n>\n> Hi,\n>\n> After looking through the thread and reading the patch it seems good,\n> and there are only few minor questions:\n>\n> * Doing the same for REINDEX CONCURRENTLY, which does make sense. In\n> fact it's already mentioned in the commentaries as done, which a bit\n> confusing.\n\nJust to give it a shot, would the attached change be enough?",
"msg_date": "Mon, 9 Nov 2020 16:47:43 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On Mon, Nov 09, 2020 at 04:47:43PM +0100, Dmitry Dolgov wrote:\n> > On Tue, Nov 03, 2020 at 07:14:47PM +0100, Dmitry Dolgov wrote:\n> > > On Thu, Aug 20, 2020 at 03:11:19PM +0900, Michael Paquier wrote:\n> > > On Wed, Aug 19, 2020 at 02:16:46PM -0400, Alvaro Herrera wrote:\n> > > > I did not set the flag in REINDEX CONCURRENTLY, but as I understand it\n> > > > can be done too, since in essence it's the same thing as a CIC from a\n> > > > snapshot management point of view.\n> > >\n> > > Yes, I see no problems for REINDEX CONCURRENTLY as well as long as\n> > > there are no predicates and expressions involved. The transactions\n> > > that should be patched are all started in ReindexRelationConcurrently.\n> > > The transaction of index_concurrently_swap() cannot set up that\n> > > though. Only thing to be careful is to make sure that safe_flag is\n> > > correct depending on the list of indexes worked on.\n> >\n> > Hi,\n> >\n> > After looking through the thread and reading the patch it seems good,\n> > and there are only few minor questions:\n> >\n> > * Doing the same for REINDEX CONCURRENTLY, which does make sense. In\n> > fact it's already mentioned in the commentaries as done, which a bit\n> > confusing.\n> \n> Just to give it a shot, would the attached change be enough?\n\nCould it be possible to rename vacuumFlags to statusFlags first? I\ndid not see any objection to do this suggestion.\n\n> +\t\tLWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> +\t\tMyProc->vacuumFlags |= PROC_IN_SAFE_IC;\n> +\t\tProcGlobal->vacuumFlags[MyProc->pgxactoff] = MyProc->vacuumFlags;\n> +\t\tLWLockRelease(ProcArrayLock);\n\nI can't help noticing that you are repeating the same code pattern\neight times. I think that this should be in its own routine, and that\nwe had better document that this should be called just after starting\na transaction, with an assertion enforcing that.\n--\nMichael",
"msg_date": "Tue, 10 Nov 2020 10:28:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n>> +\t\tLWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n>> +\t\tMyProc->vacuumFlags |= PROC_IN_SAFE_IC;\n>> +\t\tProcGlobal->vacuumFlags[MyProc->pgxactoff] = MyProc->vacuumFlags;\n>> +\t\tLWLockRelease(ProcArrayLock);\n\n> I can't help noticing that you are repeating the same code pattern\n> eight times. I think that this should be in its own routine, and that\n> we had better document that this should be called just after starting\n> a transaction, with an assertion enforcing that.\n\nDo we really need exclusive lock on the ProcArray to make this flag\nchange? That seems pretty bad from a concurrency standpoint.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Nov 2020 20:32:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On Mon, Nov 09, 2020 at 08:32:13PM -0500, Tom Lane wrote:\n> Do we really need exclusive lock on the ProcArray to make this flag\n> change? That seems pretty bad from a concurrency standpoint.\n\nAny place where we update vacuumFlags acquires an exclusive LWLock on\nProcArrayLock. That's held for a very short time, so IMO it won't\nmatter much in practice, particularly if you compare that with the\npotential gains related to the existing wait phases.\n--\nMichael",
"msg_date": "Tue, 10 Nov 2020 10:39:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Nov 09, 2020 at 08:32:13PM -0500, Tom Lane wrote:\n>> Do we really need exclusive lock on the ProcArray to make this flag\n>> change? That seems pretty bad from a concurrency standpoint.\n\n> Any place where we update vacuumFlags acquires an exclusive LWLock on\n> ProcArrayLock. That's held for a very short time, so IMO it won't\n> matter much in practice, particularly if you compare that with the\n> potential gains related to the existing wait phases.\n\nNot sure I believe that it doesn't matter much in practice. If there's\na steady stream of shared ProcArrayLock acquisitions (for snapshot\nacquisition) then somebody wanting exclusive lock will create a big\nhiccup, whether they hold it for a short time or not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Nov 2020 20:51:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On 2020-Nov-09, Tom Lane wrote:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Mon, Nov 09, 2020 at 08:32:13PM -0500, Tom Lane wrote:\n> >> Do we really need exclusive lock on the ProcArray to make this flag\n> >> change? That seems pretty bad from a concurrency standpoint.\n> \n> > Any place where we update vacuumFlags acquires an exclusive LWLock on\n> > ProcArrayLock. That's held for a very short time, so IMO it won't\n> > matter much in practice, particularly if you compare that with the\n> > potential gains related to the existing wait phases.\n> \n> Not sure I believe that it doesn't matter much in practice. If there's\n> a steady stream of shared ProcArrayLock acquisitions (for snapshot\n> acquisition) then somebody wanting exclusive lock will create a big\n> hiccup, whether they hold it for a short time or not.\n\nYeah ... it would be much better if we can make it use atomics instead.\nCurrently it's an uint8, and in PGPROC itself it's probably not a big\ndeal to enlarge that, but I fear that quadrupling the size of the\nmirroring array in PROC_HDR might be bad for performance. However,\nmaybe if we use atomics to access it, then we don't need to mirror it\nanymore? That would need some benchmarking of GetSnapshotData.\n\n\n\n",
"msg_date": "Mon, 9 Nov 2020 23:31:15 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On Mon, Nov 09, 2020 at 11:31:15PM -0300, Alvaro Herrera wrote:\n> Yeah ... it would be much better if we can make it use atomics instead.\n> Currently it's an uint8, and in PGPROC itself it's probably not a big\n> deal to enlarge that, but I fear that quadrupling the size of the\n> mirroring array in PROC_HDR might be bad for performance. However,\n> maybe if we use atomics to access it, then we don't need to mirror it\n> anymore? That would need some benchmarking of GetSnapshotData.\n\nHmm. If you worry about the performance impact here, it is possible\nto do a small performance test without this patch. vacuum_rel() sets\nthe flag for a non-full VACUUM, so with one backend running a manual\nVACUUM in loop on an empty relation could apply some pressure on any\nbenchmark, even a simple pgbench.\n--\nMichael",
"msg_date": "Tue, 10 Nov 2020 11:44:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Yeah ... it would be much better if we can make it use atomics instead.\n\nI was thinking more like \"do we need any locking at all\".\n\nAssuming that a proc's vacuumFlags can be set by only the process itself,\nthere's no write conflicts to worry about. On the read side, there's a\nhazard that onlookers will not see the PROC_IN_SAFE_IC flag set; but\nthat's not any different from what the outcome would be if they looked\njust before this stanza executes. And even if they don't see it, at worst\nwe lose the optimization being proposed.\n\nThere is a question of whether it's important that both copies of the flag\nappear to update atomically ... but that just begs the question \"why in\nheaven's name are there two copies?\"\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Nov 2020 22:02:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "> On Mon, Nov 09, 2020 at 10:02:27PM -0500, Tom Lane wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Yeah ... it would be much better if we can make it use atomics instead.\n>\n> I was thinking more like \"do we need any locking at all\".\n>\n> Assuming that a proc's vacuumFlags can be set by only the process itself,\n> there's no write conflicts to worry about. On the read side, there's a\n> hazard that onlookers will not see the PROC_IN_SAFE_IC flag set; but\n> that's not any different from what the outcome would be if they looked\n> just before this stanza executes. And even if they don't see it, at worst\n> we lose the optimization being proposed.\n>\n> There is a question of whether it's important that both copies of the flag\n> appear to update atomically ... but that just begs the question \"why in\n> heaven's name are there two copies?\"\n\nSounds right, but after reading the thread about GetSnapshotData\nscalability more thoroughly there seem to be an assumption that those\ncopies have to be updated at the same time under the same lock, and\nclaims that in some cases justification for correctness around not\ntaking ProcArrayLock is too complicated, at least for now.\n\nInteresting enough, similar discussion happened about vaccumFlags before\nwith the same conclusion that theoretically it's fine to update without\nholding the lock, but this assumption could change one day and it's\nbetter to avoid such risks. Having said that I believe it makes sense to\ncontinue with locking. Are there any other opinions? I'll try to\nbenchmark it in the meantime.\n\n\n",
"msg_date": "Thu, 12 Nov 2020 16:36:32 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On Thu, Nov 12, 2020 at 04:36:32PM +0100, Dmitry Dolgov wrote:\n> Interesting enough, similar discussion happened about vaccumFlags before\n> with the same conclusion that theoretically it's fine to update without\n> holding the lock, but this assumption could change one day and it's\n> better to avoid such risks. Having said that I believe it makes sense to\n> continue with locking. Are there any other opinions? I'll try to\n> benchmark it in the meantime.\n\nThanks for planning some benchmarking for this specific patch. I have\nto admit that the possibility of switching vacuumFlags to use atomics\nis very appealing in the long term, with or without considering this\npatch, even if we had better be sure that this patch has no actual\neffect on concurrency first if atomics are not used in worst-case\nscenarios.\n--\nMichael",
"msg_date": "Fri, 13 Nov 2020 09:25:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "> On Fri, Nov 13, 2020 at 09:25:40AM +0900, Michael Paquier wrote:\n> On Thu, Nov 12, 2020 at 04:36:32PM +0100, Dmitry Dolgov wrote:\n> > Interesting enough, similar discussion happened about vaccumFlags before\n> > with the same conclusion that theoretically it's fine to update without\n> > holding the lock, but this assumption could change one day and it's\n> > better to avoid such risks. Having said that I believe it makes sense to\n> > continue with locking. Are there any other opinions? I'll try to\n> > benchmark it in the meantime.\n>\n> Thanks for planning some benchmarking for this specific patch. I have\n> to admit that the possibility of switching vacuumFlags to use atomics\n> is very appealing in the long term, with or without considering this\n> patch, even if we had better be sure that this patch has no actual\n> effect on concurrency first if atomics are not used in worst-case\n> scenarios.\n\nI've tried first to test scenarios where GetSnapshotData produces\nsignificant lock contention and \"reindex concurrently\" implementation\nwith locks interferes with it. The idea I had is to create a test\nfunction that constantly calls GetSnapshotData (perf indeed shows\nsignificant portion of time spent on contended lock), and clash it with\na stream of \"reindex concurrently\" of an empty relation (which still\nreaches safe_index check). I guess it could be considered as an\nartificial extreme case. 
Measuring GetSnapshotData (or rather the\nsurrounding wrapper, to distinguish calls from the test function from\neverything else) latency without reindex, with reindex and locks, with\nreindex without locks should produce different \"modes\" and comparing\nthem we can make some conclusions.\n\nLatency histograms without reindex (nanoseconds):\n\n nsecs : count distribution\n 512 -> 1023 : 0 | |\n 1024 -> 2047 : 10001209 |****************************************|\n 2048 -> 4095 : 76936 | |\n 4096 -> 8191 : 1468 | |\n 8192 -> 16383 : 98 | |\n 16384 -> 32767 : 39 | |\n 32768 -> 65535 : 6 | |\n\nThe same with reindex without locks:\n\n nsecs : count distribution\n 512 -> 1023 : 0 | |\n 1024 -> 2047 : 111345 | |\n 2048 -> 4095 : 6997627 |****************************************|\n 4096 -> 8191 : 18575 | |\n 8192 -> 16383 : 586 | |\n 16384 -> 32767 : 312 | |\n 32768 -> 65535 : 18 | |\n\nThe same with reindex with locks:\n\n nsecs : count distribution\n 512 -> 1023 : 0 | |\n 1024 -> 2047 : 59438 | |\n 2048 -> 4095 : 6901187 |****************************************|\n 4096 -> 8191 : 18584 | |\n 8192 -> 16383 : 581 | |\n 16384 -> 32767 : 280 | |\n 32768 -> 65535 : 84 | |\n\nLooks like reindex without locks is indeed faster (there are more\nsamples in the lower time section), but not particularly significant to the\nwhole distribution, especially taking into account the extremity of the\ntest.\n\nI'll take a look at benchmarking of switching vacuumFlags to use\natomics, but as it's probably a bit off topic I'm going to attach\nanother version of the patch with locks and suggested changes. To which\nI have one question:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n\n> I think that this should be in its own routine, and that we had better\n> document that this should be called just after starting a transaction,\n> with an assertion enforcing that.\n\nI'm not sure exactly which assertion condition you mean?",
"msg_date": "Mon, 16 Nov 2020 19:24:46 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On Tue, 10 Nov 2020 at 03:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Yeah ... it would be much better if we can make it use atomics instead.\n>\n> I was thinking more like \"do we need any locking at all\".\n>\n> Assuming that a proc's vacuumFlags can be set by only the process itself,\n> there's no write conflicts to worry about. On the read side, there's a\n> hazard that onlookers will not see the PROC_IN_SAFE_IC flag set; but\n> that's not any different from what the outcome would be if they looked\n> just before this stanza executes. And even if they don't see it, at worst\n> we lose the optimization being proposed.\n>\n> There is a question of whether it's important that both copies of the flag\n> appear to update atomically ... but that just begs the question \"why in\n> heaven's name are there two copies?\"\n\nAgreed to all of the above, but I think the issue is miniscule because\nProcArrayLock is acquired in LW_EXCLUSIVE mode at transaction commit.\nSo it doesn't seem like there is much need to optimize this particular\naspect of the patch.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 16 Nov 2020 18:36:51 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On 2020-Nov-16, Dmitry Dolgov wrote:\n\n> The same with reindex without locks:\n> \n> nsecs : count distribution\n> 512 -> 1023 : 0 | |\n> 1024 -> 2047 : 111345 | |\n> 2048 -> 4095 : 6997627 |****************************************|\n> 4096 -> 8191 : 18575 | |\n> 8192 -> 16383 : 586 | |\n> 16384 -> 32767 : 312 | |\n> 32768 -> 65535 : 18 | |\n> \n> The same with reindex with locks:\n> \n> nsecs : count distribution\n> 512 -> 1023 : 0 | |\n> 1024 -> 2047 : 59438 | |\n> 2048 -> 4095 : 6901187 |****************************************|\n> 4096 -> 8191 : 18584 | |\n> 8192 -> 16383 : 581 | |\n> 16384 -> 32767 : 280 | |\n> 32768 -> 65535 : 84 | |\n> \n> Looks like with reindex without locks is indeed faster (there are mode\n> samples in lower time section), but not particularly significant to the\n> whole distribution, especially taking into account extremity of the\n> test.\n\nI didn't analyze these numbers super carefully, but yeah it doesn't look\nsignificant.\n\nI'm looking at these patches now, with intention to push.\n\n\n\n",
"msg_date": "Mon, 16 Nov 2020 17:35:54 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On 2020-Nov-09, Tom Lane wrote:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n> >> +\t\tLWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> >> +\t\tMyProc->vacuumFlags |= PROC_IN_SAFE_IC;\n> >> +\t\tProcGlobal->vacuumFlags[MyProc->pgxactoff] = MyProc->vacuumFlags;\n> >> +\t\tLWLockRelease(ProcArrayLock);\n> \n> > I can't help noticing that you are repeating the same code pattern\n> > eight times. I think that this should be in its own routine, and that\n> > we had better document that this should be called just after starting\n> > a transaction, with an assertion enforcing that.\n> \n> Do we really need exclusive lock on the ProcArray to make this flag\n> change? That seems pretty bad from a concurrency standpoint.\n\nBTW I now know that the reason for taking ProcArrayLock is not the\nvacuumFlags itself, but rather MyProc->pgxactoff, which can move.\n\nOn the other hand, if we stopped mirroring the flags in ProcGlobal, it\nwould mean we would have to access all procs' PGPROC entries in\nGetSnapshotData, which is undesirable for performance reason (per commit\n5788e258bb26).\n\n\n",
"msg_date": "Mon, 16 Nov 2020 21:08:32 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "I am really unsure about the REINDEX CONCURRENTLY part of this, for two\nreasons:\n\n1. It is not as good when reindexing multiple indexes, because we can\nonly apply the flag if *all* indexes are \"safe\". Any unsafe index means\nwe step down from it for the whole thing. This is probably not worth\nworrying much about, but still.\n\n2. In some of the waiting transactions, we actually do more things than\nwhat we do in CREATE INDEX CONCURRENTLY transactions --- some catalog\nupdates, but we also do the whole index validation phase. Is that OK?\nIt's not as clear to me that it is safe to set the flag in all those\nplaces.\n\nI moved the comments to the new function and made it inline. I also\nchanged the way we determine how the function is safe; there's no reason\nto build an IndexInfo if we can simply look at rel->rd_indexprs and\nrel->indpred.\n\nI've been wondering if it would be sane/safe to do the WaitForFoo stuff\noutside of any transaction.\n\nI'll have a look again tomorrow.",
"msg_date": "Mon, 16 Nov 2020 21:23:41 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On Mon, Nov 16, 2020 at 09:23:41PM -0300, Alvaro Herrera wrote:\n> I've been wondering if it would be sane/safe to do the WaitForFoo stuff\n> outside of any transaction.\n\nThis needs to remain within a transaction as CIC holds a session lock\non the parent table, which would not be cleaned up without a\ntransaction context.\n--\nMichael",
"msg_date": "Tue, 17 Nov 2020 09:38:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On 2020-Nov-16, Alvaro Herrera wrote:\n\n> On 2020-Nov-09, Tom Lane wrote:\n> \n> > Michael Paquier <michael@paquier.xyz> writes:\n> > >> +\t\tLWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> > >> +\t\tMyProc->vacuumFlags |= PROC_IN_SAFE_IC;\n> > >> +\t\tProcGlobal->vacuumFlags[MyProc->pgxactoff] = MyProc->vacuumFlags;\n> > >> +\t\tLWLockRelease(ProcArrayLock);\n> > \n> > > I can't help noticing that you are repeating the same code pattern\n> > > eight times. I think that this should be in its own routine, and that\n> > > we had better document that this should be called just after starting\n> > > a transaction, with an assertion enforcing that.\n> > \n> > Do we really need exclusive lock on the ProcArray to make this flag\n> > change? That seems pretty bad from a concurrency standpoint.\n> \n> BTW I now know that the reason for taking ProcArrayLock is not the\n> vacuumFlags itself, but rather MyProc->pgxactoff, which can move.\n\n... ah, but I realize now that this means that we can use shared lock\nhere, not exclusive, which is already an enormous improvement. That's\nbecause ->pgxactoff can only be changed with exclusive lock held; so as\nlong as we hold shared, the array item cannot move.\n\n\n",
"msg_date": "Tue, 17 Nov 2020 12:55:01 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "So I made the change to set the statusFlags with only LW_SHARED -- both\nin existing uses (see 0002) and in the new CIC code (0003). I don't\nthink 0002 is going to have a tremendous performance impact, because it\nchanges operations that are very infrequent. But even so, it would be\nweird to leave code around that uses exclusive lock when we're going to\nadd new code that uses shared lock for the same thing.\n\nI still left the REINDEX CONCURRENTLY support in split out in 0004; I\nintend to get the first three patches pushed now, and look into 0004\nagain later.",
"msg_date": "Tue, 17 Nov 2020 14:14:53 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
    "msg_contents": "On Tue, Nov 17, 2020 at 02:14:53PM -0300, Alvaro Herrera wrote:\n> diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c\n> index f1f4df7d70..4324e32656 100644\n> --- a/src/backend/replication/logical/logical.c\n> +++ b/src/backend/replication/logical/logical.c\n> @@ -181,7 +181,7 @@ StartupDecodingContext(List *output_plugin_options,\n> \t */\n> \tif (!IsTransactionOrTransactionBlock())\n> \t{\n> -\t\tLWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> +\t\tLWLockAcquire(ProcArrayLock, LW_SHARED);\n> \t\tMyProc->statusFlags |= PROC_IN_LOGICAL_DECODING;\n> \t\tProcGlobal->statusFlags[MyProc->pgxactoff] = MyProc->statusFlags;\n> \t\tLWLockRelease(ProcArrayLock);\n\nWe don't really document that it is safe to update statusFlags while\nholding a shared lock on ProcArrayLock, right? Could this be\nclarified at least in proc.h?\n\n> +\t/* Determine whether we can call set_safe_index_flag */\n> +\tsafe_index = indexInfo->ii_Expressions == NIL &&\n> +\t\tindexInfo->ii_Predicate == NIL;\n\nThis should tell why we check after expressions and predicates, in\nshort giving an explanation about the snapshot issues with build and\nvalidation when executing those expressions and/or predicates.\n\n> + * Set the PROC_IN_SAFE_IC flag in my PGPROC entry.\n> + *\n> + * When doing concurrent index builds, we can set this flag\n> + * to tell other processes concurrently running CREATE\n> + * INDEX CONCURRENTLY to ignore us when\n> + * doing their waits for concurrent snapshots. On one hand it\n> + * avoids pointlessly waiting for a process that's not interesting\n> + * anyway, but more importantly it avoids deadlocks in some cases.\n> + *\n> + * This can only be done for indexes that don't execute any expressions.\n> + * Caller is responsible for only calling this routine when that\n> + * assumption holds true.\n> + *\n> + * (The flag is reset automatically at transaction end, so it must be\n> + * set for each transaction.)\n\nWould it be better to document that this function had better be called\nafter beginning a transaction? I am wondering if we should not\nenforce some conditions here, say this one or similar:\nAssert(GetTopTransactionIdIfAny() == InvalidTransactionId);\n\n> + */\n> +static inline void\n> +set_safe_index_flag(void)\n\nThis routine name is rather close to index_set_state_flags(), could it\nbe possible to finish with a better name?\n--\nMichael",
"msg_date": "Wed, 18 Nov 2020 10:43:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
    "msg_contents": "On 2020-Nov-18, Michael Paquier wrote:\n\n> On Tue, Nov 17, 2020 at 02:14:53PM -0300, Alvaro Herrera wrote:\n> > diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c\n> > index f1f4df7d70..4324e32656 100644\n> > --- a/src/backend/replication/logical/logical.c\n> > +++ b/src/backend/replication/logical/logical.c\n> > @@ -181,7 +181,7 @@ StartupDecodingContext(List *output_plugin_options,\n> > \t */\n> > \tif (!IsTransactionOrTransactionBlock())\n> > \t{\n> > -\t\tLWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> > +\t\tLWLockAcquire(ProcArrayLock, LW_SHARED);\n> > \t\tMyProc->statusFlags |= PROC_IN_LOGICAL_DECODING;\n> > \t\tProcGlobal->statusFlags[MyProc->pgxactoff] = MyProc->statusFlags;\n> > \t\tLWLockRelease(ProcArrayLock);\n> \n> We don't really document that it is safe to update statusFlags while\n> holding a shared lock on ProcArrayLock, right? Could this be\n> clarified at least in proc.h?\n\nPushed that part with a comment addition. This stuff is completely\nundocumented ...\n\n> > +\t/* Determine whether we can call set_safe_index_flag */\n> > +\tsafe_index = indexInfo->ii_Expressions == NIL &&\n> > +\t\tindexInfo->ii_Predicate == NIL;\n> \n> This should tell why we check after expressions and predicates, in\n> short giving an explanation about the snapshot issues with build and\n> validation when executing those expressions and/or predicates.\n\nFair. It seems good to add a comment to the new function, and reference\nthat in each place where we set the flag.\n\n\n> > + * Set the PROC_IN_SAFE_IC flag in my PGPROC entry.\n\n> Would it be better to document that this function had better be called\n> after beginning a transaction? I am wondering if we should not\n> enforce some conditions here, say this one or similar:\n> Assert(GetTopTransactionIdIfAny() == InvalidTransactionId);\n\nI went with checking MyProc->xid and MyProc->xmin, which is the same in\npractice but notionally closer to what we're doing.\n\n> > +static inline void\n> > +set_safe_index_flag(void)\n> \n> This routine name is rather close to index_set_state_flags(), could it\n> be possible to finish with a better name?\n\nI went with set_indexsafe_procflags(). Ugly ...",
"msg_date": "Wed, 18 Nov 2020 14:58:04 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-17 12:55:01 -0300, Alvaro Herrera wrote:\n> ... ah, but I realize now that this means that we can use shared lock\n> here, not exclusive, which is already an enormous improvement. That's\n> because ->pgxactoff can only be changed with exclusive lock held; so as\n> long as we hold shared, the array item cannot move.\n\nUh, wait a second. The acquisition of this lock hasn't been affected by\nthe snapshot scalability changes, and therefore are unrelated to\n->pgxactoff changing or not.\n\nIn 13 this is:\n\t\tLWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n\t\tMyPgXact->vacuumFlags |= PROC_IN_VACUUM;\n\t\tif (params->is_wraparound)\n\t\t\tMyPgXact->vacuumFlags |= PROC_VACUUM_FOR_WRAPAROUND;\n\t\tLWLockRelease(ProcArrayLock);\n\nLowering this to a shared lock doesn't seem right, at least without a\ndetailed comment explaining why it's safe. Because GetSnapshotData() etc\nlook at all procs with just an LW_SHARED ProcArrayLock, changing\nvacuumFlags without a lock means that two concurrent horizon\ncomputations could come to a different result.\n\nI'm not saying it's definitely wrong to relax things here, but I'm not\nsure we've evaluated it sufficiently.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 18 Nov 2020 11:09:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
    "msg_contents": "> On 2020-11-17 12:55:01 -0300, Alvaro Herrera wrote:\n> > ... ah, but I realize now that this means that we can use shared lock\n> > here, not exclusive, which is already an enormous improvement. That's\n> > because ->pgxactoff can only be changed with exclusive lock held; so as\n> > long as we hold shared, the array item cannot move.\n> \n> Uh, wait a second. The acquisition of this lock hasn't been affected by\n> the snapshot scalability changes, and therefore are unrelated to\n> ->pgxactoff changing or not.\n\nI'm writing a response trying to thoroughly analyze this, but I also\nwanted to report that ProcSleep is being a bit generous with what it\ncalls \"as quickly as possible\" here:\n\n LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n\n /*\n * Only do it if the worker is not working to protect against Xid\n * wraparound.\n */\n statusFlags = ProcGlobal->statusFlags[autovac->pgxactoff];\n if ((statusFlags & PROC_IS_AUTOVACUUM) &&\n !(statusFlags & PROC_VACUUM_FOR_WRAPAROUND))\n {\n int pid = autovac->pid;\n StringInfoData locktagbuf;\n StringInfoData logbuf; /* errdetail for server log */\n\n initStringInfo(&locktagbuf);\n initStringInfo(&logbuf);\n DescribeLockTag(&locktagbuf, &lock->tag);\n appendStringInfo(&logbuf,\n _(\"Process %d waits for %s on %s.\"),\n MyProcPid,\n GetLockmodeName(lock->tag.locktag_lockmethodid,\n lockmode),\n locktagbuf.data);\n\n /* release lock as quickly as possible */\n LWLockRelease(ProcArrayLock);\n\nThe amount of stuff that this is doing with ProcArrayLock held\nexclusively seems a bit excessive; it sounds like we could copy the\nvalues we need first, release the lock, and *then* do all that memory\nallocation and string printing -- it's a lot of code for such a\ncontended lock. Anytime a process sees itself as blocked by autovacuum\nand wants to signal it, there's a ProcArrayLock hiccup (I didn't\nactually measure it, but it's at least five function calls). We could\nmake this more concurrent by copying lock->tag to a local variable,\nreleasing the lock, then doing all the string formatting and printing.\nSee attached quickly.patch.\n\nNow, when this code was written (d7318d43d, 2012), this was a LOG\nmessage; it was demoted to DEBUG1 later (d8f15c95bec, 2015). I think it\nwould be fair to ... remove the message? Or go back to Simon's original\nformulation from commit acac68b2bca, which had this message as DEBUG2\nwithout any string formatting.\n\nThoughts?",
"msg_date": "Wed, 18 Nov 2020 18:41:27 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "\"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-18 18:41:27 -0300, Alvaro Herrera wrote:\n> The amount of stuff that this is doing with ProcArrayLock held\n> exclusively seems a bit excessive; it sounds like we could copy the\n> values we need first, release the lock, and *then* do all that memory\n> allocation and string printing -- it's a lock of code for such a\n> contended lock.\n\nYea, that's a good point.\n\n\n> Anytime a process sees itself as blocked by autovacuum\n> and wants to signal it, there's a ProcArrayLock hiccup (I didn't\n> actually measure it, but it's at least five function calls).\n\nI'm a bit doubtful it's that important - it's limited in frequency\nby deadlock_timeout. But worth improving anyway.\n\n\n> We could make this more concurrent by copying lock->tag to a local\n> variable, releasing the lock, then doing all the string formatting and\n> printing. See attached quickly.patch.\n\nSounds like a plan.\n\n\n> Now, when this code was written (d7318d43d, 2012), this was a LOG\n> message; it was demoted to DEBUG1 later (d8f15c95bec, 2015). I think it\n> would be fair to ... remove the message? Or go back to Simon's original\n> formulation from commit acac68b2bca, which had this message as DEBUG2\n> without any string formatting.\n\nI don't really have an opinion on this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 18 Nov 2020 14:48:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 11:09:28AM -0800, Andres Freund wrote:\n> Uh, wait a second. The acquisition of this lock hasn't been affected by\n> the snapshot scalability changes, and therefore are unrelated to\n> ->pgxactoff changing or not.\n> \n> In 13 this is:\n> \t\tLWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> \t\tMyPgXact->vacuumFlags |= PROC_IN_VACUUM;\n> \t\tif (params->is_wraparound)\n> \t\t\tMyPgXact->vacuumFlags |= PROC_VACUUM_FOR_WRAPAROUND;\n> \t\tLWLockRelease(ProcArrayLock);\n> \n> Lowering this to a shared lock doesn't seem right, at least without a\n> detailed comment explaining why it's safe. Because GetSnapshotData() etc\n> look at all procs with just an LW_SHARED ProcArrayLock, changing\n> vacuumFlags without a lock means that two concurrent horizon\n> computations could come to a different result.\n> \n> I'm not saying it's definitely wrong to relax things here, but I'm not\n> sure we've evaluated it sufficiently.\n\nYeah. While I do like the new assertion that 27838981 has added in\nProcArrayEndTransactionInternal(), this commit feels a bit rushed to\nme. Echoing with my comment from upthread, I am not sure that we\nstill document enough why a shared lock would be completely fine in\nthe case of statusFlags. We have no hints that this could be fine\nbefore 2783898, and 2783898 does not make that look better. FWIW, I\nthink that just using LW_EXCLUSIVE for the CIC change would have been\nfine enough first, after evaluating the performance impact. We could\nevaluate if it is possible to lower the ProcArrayLock acquisition in\nthose code paths on a separate thread.\n--\nMichael",
"msg_date": "Thu, 19 Nov 2020 10:51:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 02:48:40PM -0800, Andres Freund wrote:\n> On 2020-11-18 18:41:27 -0300, Alvaro Herrera wrote:\n>> We could make this more concurrent by copying lock->tag to a local\n>> variable, releasing the lock, then doing all the string formatting and\n>> printing. See attached quickly.patch.\n> \n> Sounds like a plan.\n\n+1.\n\n>> Now, when this code was written (d7318d43d, 2012), this was a LOG\n>> message; it was demoted to DEBUG1 later (d8f15c95bec, 2015). I think it\n>> would be fair to ... remove the message? Or go back to Simon's original\n>> formulation from commit acac68b2bca, which had this message as DEBUG2\n>> without any string formatting.\n> \n> I don't really have an opinion on this.\n\nThat still looks useful for debugging, so DEBUG1 sounds fine to me.\n--\nMichael",
"msg_date": "Thu, 19 Nov 2020 12:13:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 12:13:44PM +0900, Michael Paquier wrote:\n> That still looks useful for debugging, so DEBUG1 sounds fine to me.\n\nBy the way, it strikes me that you could just do nothing as long as\n(log_min_messages > DEBUG1), so you could encapsulate most of the\nlogic that plays with the lock tag using that.\n--\nMichael",
"msg_date": "Thu, 19 Nov 2020 12:39:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
    "msg_contents": "On 2020-Nov-18, Andres Freund wrote:\n\n> In 13 this is:\n> \t\tLWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> \t\tMyPgXact->vacuumFlags |= PROC_IN_VACUUM;\n> \t\tif (params->is_wraparound)\n> \t\t\tMyPgXact->vacuumFlags |= PROC_VACUUM_FOR_WRAPAROUND;\n> \t\tLWLockRelease(ProcArrayLock);\n> \n> Lowering this to a shared lock doesn't seem right, at least without a\n> detailed comment explaining why it's safe. Because GetSnapshotData() etc\n> look at all procs with just an LW_SHARED ProcArrayLock, changing\n> vacuumFlags without a lock means that two concurrent horizon\n> computations could come to a different result.\n> \n> I'm not saying it's definitely wrong to relax things here, but I'm not\n> sure we've evaluated it sufficiently.\n\nTrue. Let's evaluate it.\n\nI changed three spots where statusFlags bits are written:\n\na) StartupDecodingContext: sets PROC_IN_LOGICAL_DECODING\nb) ReplicationSlotRelease: removes PROC_IN_LOGICAL_DECODING\nc) vacuum_rel: sets PROC_IN_VACUUM and potentially\n PROC_VACUUM_FOR_WRAPAROUND\n\nWho reads these flags?\n\nPROC_IN_LOGICAL_DECODING is read by:\n * ComputeXidHorizons\n * GetSnapshotData\n\nPROC_IN_VACUUM is read by:\n * GetCurrentVirtualXIDs\n * ComputeXidHorizons\n * GetSnapshotData\n * ProcArrayInstallImportedXmin\n\nPROC_VACUUM_FOR_WRAPAROUND is read by:\n * ProcSleep\n\n\nProcSleep: no bug here.\nThe flags are checked to see if we should kill() the process (used when\nautovac blocks some other process). The lock here appears to be\ninconsequential, since we release it before we do kill(); so strictly\nspeaking, there is still a race condition where the autovac process\ncould reset the flag after we read it and before we get a chance to\nsignal it. The lock level autovac uses to set the flag is not relevant\neither.\n\nProcArrayInstallImportedXmin:\nThis one is just searching for a matching backend; not affected by the\nflags.\n\nPROC_IN_LOGICAL_DECODING:\nOddly enough, I think the reset of PROC_IN_LOGICAL_DECODING in\nReplicationSlotRelease might be the most problematic one of the lot.\nThat's because a proc's xmin that had been ignored all along by\nComputeXidHorizons, will now be included in the computation. Adding\nasserts that proc->xmin and proc->xid are InvalidXid by the time we\nreset the flag, I got hits in pg_basebackup, test_decoding and\nsubscription tests. I think it's OK for ComputeXidHorizons (since it\njust means that a vacuum that reads a later will remove less rows.) But\nin GetSnapshotData it is just not correct to have the Xmin go backwards.\n\nTherefore it seems to me that this code has a bug independently of the\nlock level used.\n\n\nGetCurrentVirtualXIDs, ComputeXidHorizons, GetSnapshotData:\n\nIn these cases, what we need is that the code computes some xmin (or\nequivalent computation) based on a set of transactions that exclude\nthose marked with the flags. The behavior we want is that if some\ntransaction is marked as vacuum, we ignore the Xid/Xmin *if there is\none*. In other words, if there's no Xid/Xmin, then the flag is not\nimportant. So if we can ensure that the flag is set first, and the\nXid/xmin is installed later, that's sufficient correctness and we don't\nneed to hold exclusive lock. But if we can't ensure that, then we must\nuse exclusive lock, because otherwise we risk another process seeing our\nXid first and not our flag, which would be bad.\n\nIn other words, my conclusion is that there definitely is a bug here and\nI am going to restore the use of exclusive lock for setting the\nstatusFlags.\n\n\nGetSnapshotData has an additional difficulty -- we do the\nUINT32_ACCESS_ONCE(ProcGlobal->xid[i]) read *before* testing the bit. \nSo it's not valid to set the bit without locking out GSD, regardless of\nany barriers we had; if we want this to be safe, we'd have to change\nthis so that the flag is read first, and we read the xid only\nafterwards, with a read barrier.\n\nI *think* we could relax the lock if we had a write barrier in\nbetween: set the flag first, issue a write barrier, set the Xid.\n(I have to admit I'm confused about what needs to happen in the read\ncase: read the bit first, potentially skip the PGPROC entry; but can we\nread the Xid ahead of reading the flag, and if we do read the xid\nafterwards, do we need a read barrier in between?)\nGiven this uncertainty, I'm not proposing to relax the lock from\nexclusive to shared.\n\n\nI didn't get around to reading ComputeXidHorizons, but it seems like\nit'd have the same problem as GSD.\n\n\n",
"msg_date": "Mon, 23 Nov 2020 12:30:05 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
    "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> PROC_IN_LOGICAL_DECODING:\n> Oddly enough, I think the reset of PROC_IN_LOGICAL_DECODING in\n> ReplicationSlotRelease might be the most problematic one of the lot.\n> That's because a proc's xmin that had been ignored all along by\n> ComputeXidHorizons, will now be included in the computation. Adding\n> asserts that proc->xmin and proc->xid are InvalidXid by the time we\n> reset the flag, I got hits in pg_basebackup, test_decoding and\n> subscription tests. I think it's OK for ComputeXidHorizons (since it\n> just means that a vacuum that reads a later will remove less rows.) But\n> in GetSnapshotData it is just not correct to have the Xmin go backwards.\n\n> Therefore it seems to me that this code has a bug independently of the\n> lock level used.\n\nThat is only a bug if the flags are *cleared* in a way that's not\natomic with clearing the transaction's xid/xmin, no? I agree that\nonce set, the flag had better stay set till transaction end, but\nthat's not what's at stake here.\n\n> GetCurrentVirtualXIDs, ComputeXidHorizons, GetSnapshotData:\n\n> In these cases, what we need is that the code computes some xmin (or\n> equivalent computation) based on a set of transactions that exclude\n> those marked with the flags. The behavior we want is that if some\n> transaction is marked as vacuum, we ignore the Xid/Xmin *if there is\n> one*. In other words, if there's no Xid/Xmin, then the flag is not\n> important. So if we can ensure that the flag is set first, and the\n> Xid/xmin is installed later, that's sufficient correctness and we don't\n> need to hold exclusive lock. But if we can't ensure that, then we must\n> use exclusive lock, because otherwise we risk another process seeing our\n> Xid first and not our flag, which would be bad.\n\nI don't buy this either. You get the same result if someone looks just\nbefore you take the ProcArrayLock to set the flag. So if there's a\nproblem, it's inherent in the way that the flags are defined or used;\nthe strength of lock used in this stanza won't affect it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Nov 2020 10:42:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On 2020-Nov-19, Michael Paquier wrote:\n\n> On Thu, Nov 19, 2020 at 12:13:44PM +0900, Michael Paquier wrote:\n> > That still looks useful for debugging, so DEBUG1 sounds fine to me.\n> \n> By the way, it strikes me that you could just do nothing as long as\n> (log_min_messages > DEBUG1), so you could encapsulate most of the\n> logic that plays with the lock tag using that.\n\nGood idea, done.\n\nI also noticed that if we're going to accept a race (which BTW already\nexists) we may as well simplify the code about it.\n\nI think the attached is the final form of this.",
"msg_date": "Mon, 23 Nov 2020 17:31:24 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2020-Nov-19, Michael Paquier wrote:\n>> By the way, it strikes me that you could just do nothing as long as\n>> (log_min_messages > DEBUG1), so you could encapsulate most of the\n>> logic that plays with the lock tag using that.\n\n> Good idea, done.\n\nI'm less sure that that's a good idea. It embeds knowledge here that\nshould not exist outside elog.c; moreover, I'm not entirely sure that\nit's even correct, given the nonlinear ranking of log_min_messages.\n\nMaybe it'd be a good idea to have elog.c expose a new function\nalong the lines of \"bool message_level_is_interesting(int elevel)\"\nto support this and similar future optimizations in a less fragile way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Nov 2020 16:20:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "On 2020-Nov-23, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2020-Nov-19, Michael Paquier wrote:\n> >> By the way, it strikes me that you could just do nothing as long as\n> >> (log_min_messages > DEBUG1), so you could encapsulate most of the\n> >> logic that plays with the lock tag using that.\n> \n> > Good idea, done.\n> \n> I'm less sure that that's a good idea. It embeds knowledge here that\n> should not exist outside elog.c; moreover, I'm not entirely sure that\n> it's even correct, given the nonlinear ranking of log_min_messages.\n\nWell, we already do this in a number of places. But I can get behind\nthis:\n\n> Maybe it'd be a good idea to have elog.c expose a new function\n> along the lines of \"bool message_level_is_interesting(int elevel)\"\n> to support this and similar future optimizations in a less fragile way.\n\n\n",
"msg_date": "Mon, 23 Nov 2020 18:28:02 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Well, we already do this in a number of places. But I can get behind\n> this:\n\n>> Maybe it'd be a good idea to have elog.c expose a new function\n>> along the lines of \"bool message_level_is_interesting(int elevel)\"\n>> to support this and similar future optimizations in a less fragile way.\n\nI'll see about a patch for that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Nov 2020 17:02:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "On 2020-Nov-23, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Well, we already do this in a number of places. But I can get behind\n> > this:\n> \n> >> Maybe it'd be a good idea to have elog.c expose a new function\n> >> along the lines of \"bool message_level_is_interesting(int elevel)\"\n> >> to support this and similar future optimizations in a less fragile way.\n> \n> I'll see about a patch for that.\n\nLooking at that now ...\n\n\n",
"msg_date": "Mon, 23 Nov 2020 19:10:30 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "Here's a draft patch.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 23 Nov 2020 17:32:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "On 2020-Nov-23, Tom Lane wrote:\n\n> Here's a draft patch.\n\nHere's another of my own. Outside of elog.c it seems identical.",
"msg_date": "Mon, 23 Nov 2020 19:38:54 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "On 2020-Nov-23, Alvaro Herrera wrote:\n\n> On 2020-Nov-23, Tom Lane wrote:\n> \n> > Here's a draft patch.\n> \n> Here's another of my own. Outside of elog.c it seems identical.\n\nYour version has the advantage that errstart() doesn't get a new\nfunction call. I'm +1 for going with that ... we could avoid the\nduplicate code with some additional contortions but this changes so\nrarely that it's probably not worth the trouble.\n\n\n\n",
"msg_date": "Mon, 23 Nov 2020 19:41:45 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2020-Nov-23, Tom Lane wrote:\n>> Here's a draft patch.\n\n> Here's another of my own. Outside of elog.c it seems identical.\n\nAh, I see I didn't cover the case in ProcSleep that you were originally on\nabout ... I'd just looked for existing references to log_min_messages\nand client_min_messages.\n\nI think it's important to have the explicit check for elevel >= ERROR.\nI'm not too fussed about whether we invent is_log_level_output_client,\nalthough that name doesn't seem well-chosen compared to\nis_log_level_output.\n\nShall I press forward with this, or do you want to?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Nov 2020 17:45:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Your version has the advantage that errstart() doesn't get a new\n> function call. I'm +1 for going with that ... we could avoid the\n> duplicate code with some additional contortions but this changes so\n> rarely that it's probably not worth the trouble.\n\nI was considering adding that factorization, but marking the function\ninline to avoid adding overhead. Most of elog.c predates our use of\ninline, so it wasn't considered when this code was written.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Nov 2020 17:48:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "On 2020-Nov-23, Tom Lane wrote:\n\n> Ah, I see I didn't cover the case in ProcSleep that you were originally on\n> about ... I'd just looked for existing references to log_min_messages\n> and client_min_messages.\n\nYeah, it seemed bad form to add that when you had just argued against it\n:-)\n\n> I think it's important to have the explicit check for elevel >= ERROR.\n> I'm not too fussed about whether we invent is_log_level_output_client,\n> although that name doesn't seem well-chosen compared to\n> is_log_level_output.\n\nJust replacing \"log\" for \"client\" in that seemed strictly worse, and I\ndidn't (don't) have any other ideas.\n\n> Shall I press forward with this, or do you want to?\n\nPlease feel free to go ahead, including the change to ProcSleep.\n\n\n",
"msg_date": "Mon, 23 Nov 2020 20:02:07 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2020-Nov-23, Tom Lane wrote:\n>> I'm not too fussed about whether we invent is_log_level_output_client,\n>> although that name doesn't seem well-chosen compared to\n>> is_log_level_output.\n\n> Just replacing \"log\" for \"client\" in that seemed strictly worse, and I\n> didn't (don't) have any other ideas.\n\nI never cared that much for \"is_log_level_output\" either. Thinking\nabout renaming it to \"should_output_to_log()\", and then the new function\nwould be \"should_output_to_client()\".\n\n>> Shall I press forward with this, or do you want to?\n\n> Please feel free to go ahead, including the change to ProcSleep.\n\nWill do.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Nov 2020 18:13:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-23 12:30:05 -0300, Alvaro Herrera wrote:\n> ProcSleep: no bug here.\n> The flags are checked to see if we should kill() the process (used when\n> autovac blocks some other process).  The lock here appears to be\n> inconsequential, since we release it before we do kill(); so strictly\n> speaking, there is still a race condition where the autovac process\n> could reset the flag after we read it and before we get a chance to\n> signal it.  The lock level autovac uses to set the flag is not relevant\n> either.\n\nYea. Even before the recent changes building the lock message under the\nlock didn't make sense. Now that the flags are mirrored in pgproc, we\nprobably should just make this use READ_ONCE(autovac->statusFlags),\nwithout any other use of ProcArrayLock. As you say the race condition\nis between the flag test, the kill, and the signal being processed,\nwhich wasn't previously protected either.\n\n\n> PROC_IN_LOGICAL_DECODING:\n> Oddly enough, I think the reset of PROC_IN_LOGICAL_DECODING in\n> ReplicationSlotRelease might be the most problematic one of the lot.\n> That's because a proc's xmin that had been ignored all along by\n> ComputeXidHorizons, will now be included in the computation.  Adding\n> asserts that proc->xmin and proc->xid are InvalidXid by the time we\n> reset the flag, I got hits in pg_basebackup, test_decoding and\n> subscription tests.  I think it's OK for ComputeXidHorizons (since it\n> just means that a vacuum that reads a later will remove less rows.)  But\n> in GetSnapshotData it is just not correct to have the Xmin go backwards.\n\nI don't think there's a problem. PROC_IN_LOGICAL_DECODING can only be\nset when outside a transaction block, i.e. walsender. In which case\nthere shouldn't be an xid/xmin, I think? Or did you gate your assert on\nPROC_IN_LOGICAL_DECODING being set?\n\n\n> GetCurrentVirtualXIDs, ComputeXidHorizons, GetSnapshotData:\n> \n> In these cases, what we need is that the code computes some xmin (or\n> equivalent computation) based on a set of transactions that exclude\n> those marked with the flags.  The behavior we want is that if some\n> transaction is marked as vacuum, we ignore the Xid/Xmin *if there is\n> one*.  In other words, if there's no Xid/Xmin, then the flag is not\n> important.  So if we can ensure that the flag is set first, and the\n> Xid/xmin is installed later, that's sufficient correctness and we don't\n> need to hold exclusive lock.\n\nThat'd require at least memory barriers in GetSnapshotData()'s loop,\nwhich I'd really like to avoid. Otherwise the order in which memory gets\nwritten in one process doesn't guarantee the order of visibility in\nanother process...\n\n\n\n> In other words, my conclusion is that there definitely is a bug here and\n> I am going to restore the use of exclusive lock for setting the\n> statusFlags.\n\nCool.\n\n\n> GetSnapshotData has an additional difficulty -- we do the\n> UINT32_ACCESS_ONCE(ProcGlobal->xid[i]) read *before* testing the bit. \n> So it's not valid to set the bit without locking out GSD, regardless of\n> any barriers we had; if we want this to be safe, we'd have to change\n> this so that the flag is read first, and we read the xid only\n> afterwards, with a read barrier.\n\nWhich we don't want, because that'd mean slowing down the *extremely*\ncommon case of the majority of backends neither having an xid assigned\nnor doing logical decoding / vacuum. We'd be reading two cachelines\ninstead of one.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 23 Nov 2020 16:40:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On 2020-Nov-23, Tom Lane wrote:\n\n> I never cared that much for \"is_log_level_output\" either. Thinking\n> about renaming it to \"should_output_to_log()\", and then the new function\n> would be \"should_output_to_client()\".\n\nAh, that sounds nicely symmetric and grammatical.\n\nThanks!\n\n\n",
"msg_date": "Mon, 23 Nov 2020 23:03:32 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "On Mon, Nov 23, 2020 at 06:13:17PM -0500, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> Please feel free to go ahead, including the change to ProcSleep.\n> \n> Will do.\n\nThank you both for 450c823 and 789b938.\n--\nMichael",
"msg_date": "Tue, 24 Nov 2020 11:04:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \"as quickly as possible\" (was: remove spurious CREATE INDEX\n CONCURRENTLY wait)"
},
{
"msg_contents": "On 2020-Nov-23, Andres Freund wrote:\n\n> On 2020-11-23 12:30:05 -0300, Alvaro Herrera wrote:\n\n> > PROC_IN_LOGICAL_DECODING:\n> > Oddly enough, I think the reset of PROC_IN_LOGICAL_DECODING in\n> > ReplicationSlotRelease might be the most problematic one of the lot.\n> > That's because a proc's xmin that had been ignored all along by\n> > ComputeXidHorizons, will now be included in the computation. Adding\n> > asserts that proc->xmin and proc->xid are InvalidXid by the time we\n> > reset the flag, I got hits in pg_basebackup, test_decoding and\n> > subscription tests. I think it's OK for ComputeXidHorizons (since it\n> > just means that a vacuum that reads a later will remove less rows.) But\n> > in GetSnapshotData it is just not correct to have the Xmin go backwards.\n> \n> I don't think there's a problem. PROC_IN_LOGICAL_DECODING can only be\n> set when outside a transaction block, i.e. walsender. In which case\n> there shouldn't be an xid/xmin, I think? Or did you gate your assert on\n> PROC_IN_LOGICAL_DECODING being set?\n\nAh, you're right about this one -- I missed the significance of setting\nthe flag only \"when outside of a transaction block\" at the time we call\nStartupDecodingContext.\n\n\n\n",
"msg_date": "Tue, 24 Nov 2020 18:38:09 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On 2020-Nov-23, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> > GetCurrentVirtualXIDs, ComputeXidHorizons, GetSnapshotData:\n> \n> > In these cases, what we need is that the code computes some xmin (or\n> > equivalent computation) based on a set of transactions that exclude\n> > those marked with the flags.  The behavior we want is that if some\n> > transaction is marked as vacuum, we ignore the Xid/Xmin *if there is\n> > one*.  In other words, if there's no Xid/Xmin, then the flag is not\n> > important.  So if we can ensure that the flag is set first, and the\n> > Xid/xmin is installed later, that's sufficient correctness and we don't\n> > need to hold exclusive lock.  But if we can't ensure that, then we must\n> > use exclusive lock, because otherwise we risk another process seeing our\n> > Xid first and not our flag, which would be bad.\n> \n> I don't buy this either.  You get the same result if someone looks just\n> before you take the ProcArrayLock to set the flag.  So if there's a\n> problem, it's inherent in the way that the flags are defined or used;\n> the strength of lock used in this stanza won't affect it.\n\nThe problem is that the writes could be reordered in a way that makes\nthe Xid appear set to an onlooker before PROC_IN_VACUUM appears set.\nVacuum always sets the bit first, and *then* the xid.  If the reader\nalways reads it like that then it's not a problem.  But in order to\nguarantee that, we would have to have a read barrier for each pass\nthrough the loop.\n\nWith the LW_EXCLUSIVE lock, we block the readers so that the bit is\nknown set by the time they examine it.  As I understand, getting the\nlock is itself a barrier, so there's no danger that we'll set the bit\nand they won't see it.\n\n\n... at least, that how I *imagine* the argument to be. In practice,\nvacuum_rel() calls GetSnapshotData() before installing the\nPROC_IN_VACUUM bit, and therefore there *is* a risk that reader 1 will\nget MyProc->xmin included in their snapshot (because bit not yet set),\nand reader 2 won't.  If my understanding is correct, then we should move\nthe PushActiveSnapshot(GetTransactionSnapshot()) call to after we have\nthe PROC_IN_VACUUM bit set.\n\n\n",
"msg_date": "Tue, 24 Nov 2020 18:57:13 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On 2020-Nov-23, Andres Freund wrote:\n\n> On 2020-11-23 12:30:05 -0300, Alvaro Herrera wrote:\n\n> > In other words, my conclusion is that there definitely is a bug here and\n> > I am going to restore the use of exclusive lock for setting the\n> > statusFlags.\n> \n> Cool.\n\nHere's a patch.\n\nNote it also moves the computation of vacuum's Xmin (per\nGetTransactionSnapshot) to *after* the bit has been set in statusFlags.",
"msg_date": "Wed, 25 Nov 2020 17:03:58 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On 2020-Nov-25, Alvaro Herrera wrote:\n\n> On 2020-Nov-23, Andres Freund wrote:\n> \n> > On 2020-11-23 12:30:05 -0300, Alvaro Herrera wrote:\n> \n> > > In other words, my conclusion is that there definitely is a bug here and\n> > > I am going to restore the use of exclusive lock for setting the\n> > > statusFlags.\n> > \n> > Cool.\n> \n> Here's a patch.\n\nPushed, thanks.\n\n\n\n",
"msg_date": "Thu, 26 Nov 2020 13:00:37 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "So let's discuss the next step in this series: what to do about REINDEX\nCONCURRENTLY.\n\nI started with Dmitry's patch (an updated version of which I already\nposted in [1]). However, Dmitry missed (and I hadn't noticed) that some\nof the per-index loops are starting transactions also, and that we need\nto set the flag in those. And what's more, in a couple of the\nfunction-scope transactions we do set the flag pointlessly: the\ntransactions there do not acquire a snapshot, so there's no reason to\nset the flag at all, because WaitForOlderSnapshots ignores sessions\nwhose Xmin is 0.\n\nThere are also transactions that wait first, without setting a snapshot,\nand then do some catalog manipulations. I think it's prett much useless\nto set the flag for those, because they're going to be very short\nanyway. (There's also one case of this in CREATE INDEX CONCURRENTLY.)\n\nBut there's a more interesting point also. In Dmitry's patch, we\ndetermine safety for *all* indexes being processed as a set, and then\napply the flag only if they're all deemed safe. But we can optimize\nthis, and set the flag for each index' transaction individually, and\nonly skip it for those specific indexes that are unsafe. So I propose\nto change the data structure used in ReindexRelationConcurrently from\nthe current list of OIDs to a list of (oid,boolean) pairs, to be able to\ntrack setting the flag individually.\n\nThere's one more useful observation: in the function-scope transactions\n(outside the per-index loops), we don't touch the contents of any\nindexes; we just wait or do some catalog manipulation. So we can set\nthe flag *regardless of the safety of any indexes*. We only need to\ncare about the safety of the indexes in the phases where we build the\nindexes and when we validate them.\n\n\n[1] https://postgr.es/m/20201118175804.GA23027@alvherre.pgsql\n\n\n",
"msg_date": "Thu, 26 Nov 2020 16:56:15 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On 2020-Nov-26, Alvaro Herrera wrote:\n\n> So let's discuss the next step in this series: what to do about REINDEX\n> CONCURRENTLY.\n\n> [...]\n\n... as in the attached.",
"msg_date": "Thu, 26 Nov 2020 19:48:20 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "Actually, I noticed two things. The first of them, addressed in this\nnew version of the patch, is that REINDEX CONCURRENTLY is doing a lot of\nrepetitive work by reopening each index and table in the build/validate\nloops, so that they can report progress. This is easy to remedy by\nadding a couple more members to the new struct (which I also renamed to\nReindexIndexInfo), for tableId and amId. The code seems a bit simpler\nthis way.\n\nThe other thing is that ReindexRelationConcurrenty seems to always be\ncalled with the relations already locked by RangeVarGetRelidExtended.\nSo claiming to acquire locks on the relations over and over is\npointless. (I only noticed this because there was an obvious deadlock\nhazard in one of the loops, that locked index before table.) I think we\nshould reduce all those to NoLock. My patch does not do that.",
"msg_date": "Fri, 27 Nov 2020 13:53:07 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "In the interest of showing progress, I'm going to mark this CF item as\ncommitted, and I'll submit the remaining pieces in a new thread.\n\nThanks!\n\n\n",
"msg_date": "Mon, 30 Nov 2020 16:15:27 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On Mon, Nov 30, 2020 at 04:15:27PM -0300, Alvaro Herrera wrote:\n> In the interest of showing progress, I'm going to mark this CF item as\n> committed, and I'll submit the remaining pieces in a new thread.\n\nThanks for splitting!\n--\nMichael",
"msg_date": "Tue, 1 Dec 2020 10:00:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-25 17:03:58 -0300, Alvaro Herrera wrote:\n> On 2020-Nov-23, Andres Freund wrote:\n> \n> > On 2020-11-23 12:30:05 -0300, Alvaro Herrera wrote:\n> \n> > > In other words, my conclusion is that there definitely is a bug here and\n> > > I am going to restore the use of exclusive lock for setting the\n> > > statusFlags.\n> > \n> > Cool.\n> \n> Here's a patch.\n> \n> Note it also moves the computation of vacuum's Xmin (per\n> GetTransactionSnapshot) to *after* the bit has been set in statusFlags.\n\n> From b813c67a4abe2127b8bd13db7e920f958db15d59 Mon Sep 17 00:00:00 2001\n> From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> Date: Tue, 24 Nov 2020 18:10:42 -0300\n> Subject: [PATCH] Restore lock level to update statusFlags\n> \n> Reverts 27838981be9d (some comments are kept). Per discussion, it does\n> not seem safe to relax the lock level used for this; in order for it to\n> be safe, there would have to be memory barriers between the point we set\n> the flag and the point we set the trasaction Xid, which perhaps would\n> not be so bad; but there would also have to be barriers at the readers'\n> side, which from a performance perspective might be bad.\n> \n> Now maybe this analysis is wrong and it *is* safe for some reason, but\n> proof of that is not trivial.\n\nI just noticed that this commit (dcfff74fb16) didn't revert the change of lock\nlevel in ReplicationSlotRelease(). Was that intentional?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 10 Nov 2021 18:07:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On 2021-Nov-10, Andres Freund wrote:\n\n> > Reverts 27838981be9d (some comments are kept). Per discussion, it does\n> > not seem safe to relax the lock level used for this; in order for it to\n> > be safe, there would have to be memory barriers between the point we set\n> > the flag and the point we set the trasaction Xid, which perhaps would\n> > not be so bad; but there would also have to be barriers at the readers'\n> > side, which from a performance perspective might be bad.\n> > \n> > Now maybe this analysis is wrong and it *is* safe for some reason, but\n> > proof of that is not trivial.\n> \n> I just noticed that this commit (dcfff74fb16) didn't revert the change of lock\n> level in ReplicationSlotRelease(). Was that intentional?\n\nHmm, no, that seems to have been a mistake. I'll restore it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El miedo atento y previsor es la madre de la seguridad\" (E. Burke)\n\n\n",
"msg_date": "Thu, 11 Nov 2021 09:38:07 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
},
{
"msg_contents": "On 2021-11-11 09:38:07 -0300, Alvaro Herrera wrote:\n> > I just noticed that this commit (dcfff74fb16) didn't revert the change of lock\n> > level in ReplicationSlotRelease(). Was that intentional?\n> \n> Hmm, no, that seems to have been a mistake. I'll restore it.\n\nThanks\n\n\n",
"msg_date": "Thu, 11 Nov 2021 11:50:17 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: remove spurious CREATE INDEX CONCURRENTLY wait"
}
] |
[
{
"msg_contents": "I want to write some test cases with extended query in core test system.\nbasically it looks like\n\nPreparedStatement preparedStatement = conn.prepareStatement(\"select *\nfrom bigtable\");\npreparedStatement.setFetchSize(4);\nResultSet rs = preparedStatement.executeQuery();\nwhile(rs.next())\n{\n System.out.println(rs.getInt(1));\n // conn.commit();\n conn.rollback();\n}\n\n\nHowever I don't find a way to do that after checking the example in\nsrc/test/xxx/t/xxx.pl\nwhere most often used object is PostgresNode, which don't have such\nabilities.\n\nCan I do that in core system, I tried grep '\\->prepare' and '\\->execute'\nand get nothing.\nam I miss something?\n\n\n-- \nBest Regards\nAndy Fan\n\nI want to write some test cases with extended query in core test system. basically it looks like PreparedStatement preparedStatement = conn.prepareStatement(\"select * from bigtable\");preparedStatement.setFetchSize(4);ResultSet rs = preparedStatement.executeQuery();while(rs.next()){ System.out.println(rs.getInt(1)); // conn.commit(); conn.rollback();}However I don't find a way to do that after checking the example in src/test/xxx/t/xxx.pl where most often used object is PostgresNode, which don't have such abilities. Can I do that in core system, I tried grep '\\->prepare' and '\\->execute' and get nothing.am I miss something? -- Best RegardsAndy Fan",
"msg_date": "Tue, 11 Aug 2020 10:43:23 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Can I test Extended Query in core test framework"
},
{
"msg_contents": "You could run PREPARE and EXECUTE as SQL commands from psql. Please\ntake a look at the documentation of those two commands. I haven't\nlooked at TAP infrastructure, but you could open a psql session to a\nrunning server and send an arbitrary number of SQL queries through it.\n\nSaid that a server starts caching plan only after it sees a certain\nnumber of EXECUTEs. So if you are testing cached plans, that's\nsomething to worry about.\n\nOn Tue, Aug 11, 2020 at 8:13 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> I want to write some test cases with extended query in core test system. basically it looks like\n>\n> PreparedStatement preparedStatement = conn.prepareStatement(\"select * from bigtable\");\n> preparedStatement.setFetchSize(4);\n> ResultSet rs = preparedStatement.executeQuery();\n> while(rs.next())\n> {\n> System.out.println(rs.getInt(1));\n> // conn.commit();\n> conn.rollback();\n> }\n>\n>\n> However I don't find a way to do that after checking the example in src/test/xxx/t/xxx.pl\n> where most often used object is PostgresNode, which don't have such abilities.\n>\n> Can I do that in core system, I tried grep '\\->prepare' and '\\->execute' and get nothing.\n> am I miss something?\n>\n>\n> --\n> Best Regards\n> Andy Fan\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 11 Aug 2020 18:36:03 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Can I test Extended Query in core test framework"
},
{
"msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> I want to write some test cases with extended query in core test system.\n\nWhy? (That is, what is it you need to test exactly?)\n\npsql has no ability to issue extended queries AFAIR, so the normal\nregression test scripts can't exercise this. We haven't built anything\nfor it in the TAP infrastructure either. We do have test coverage\nvia pgbench and ecpg, though I concede that's pretty indirect.\n\nI recall someone (Andres, possibly) speculating about building a tool\nspecifically to exercise low-level protocol issues, but that hasn't\nbeen done either.\n\nNone of these are necessarily germane to any particular test requirement,\nwhich is why I'm wondering. The JDBC fragment you show seems like it's\nsomething that should be tested by, well, JDBC. What's interesting about\nit for any other client?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Aug 2020 11:22:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Can I test Extended Query in core test framework"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-11 11:22:49 -0400, Tom Lane wrote:\n> I recall someone (Andres, possibly) speculating about building a tool\n> specifically to exercise low-level protocol issues, but that hasn't\n> been done either.\n\nYea, I mentioned the possibility, but didn't plan to work on it. I am\nnot a perl person by any stretch (even though that's where I\nstarted...). But we can (and do iirc) have tests that just use libpq,\nso it should be possible to test things like this at a bit higher cost.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Aug 2020 09:01:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Can I test Extended Query in core test framework"
},
{
"msg_contents": "Thank you Ashutosh for your reply.\n\nOn Tue, Aug 11, 2020 at 9:06 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> You could run PREPARE and EXECUTE as SQL commands from psql. Please\n> take a look at the documentation of those two commands. I haven't\n> looked at TAP infrastructure, but you could open a psql session to a\n> running server and send an arbitrary number of SQL queries through it.\n>\n>\nPREPARE & EXECUTE doesn't go with the extended query way. it is\nstill exec_simple_query. What I did is I hacked some exec_bind_message\n[1] logic, that's why I want to test extended queries.\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWqvwmo=NLPGa_OHXB4F+u4Ts1_3YRy9M6XTjLt9DKHvvg@mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan\n\nThank you Ashutosh for your reply. On Tue, Aug 11, 2020 at 9:06 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:You could run PREPARE and EXECUTE as SQL commands from psql. Please\ntake a look at the documentation of those two commands. I haven't\nlooked at TAP infrastructure, but you could open a psql session to a\nrunning server and send an arbitrary number of SQL queries through it.\nPREPARE & EXECUTE doesn't go with the extended query way. it is still exec_simple_query. What I did is I hacked some exec_bind_message[1] logic, that's why I want to test extended queries. [1] https://www.postgresql.org/message-id/CAKU4AWqvwmo=NLPGa_OHXB4F+u4Ts1_3YRy9M6XTjLt9DKHvvg@mail.gmail.com -- Best RegardsAndy Fan",
"msg_date": "Wed, 12 Aug 2020 09:30:31 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Can I test Extended Query in core test framework"
},
{
"msg_contents": "On Tue, Aug 11, 2020 at 11:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > I want to write some test cases with extended query in core test system.\n>\n> Why?  (That is, what is it you need to test exactly?)\n>\n>\nThanks for your attention.  The background is I hacked exec_bind_message[1],\nthen I want to add some test cases to make sure the logic can be tested\nautomatically\nin the core system.  I can't distinguish if the logic might be so straight\nor not so it\ndoesn't deserve the test in practice.\n\npsql has no ability to issue extended queries AFAIR, so the normal\n> regression test scripts can't exercise this.  We haven't built anything\n> for it in the TAP infrastructure either.  We do have test coverage\n> via pgbench and ecpg, though I concede that's pretty indirect.\n>\n> I recall someone (Andres, possibly) speculating about building a tool\n> specifically to exercise low-level protocol issues, but that hasn't\n> been done either.\n>\n\nThanks for this information. and Thanks Andres for the idea and practice.\n\n\n>\n> None of these are necessarily germane to any particular test requirement,\n> which is why I'm wondering.  The JDBC fragment you show seems like it's\n> something that should be tested by, well, JDBC.  What's interesting about\n> it for any other client?\n>\n>\nThe main purpose is I want to test it in core without other infrastructure\ninvolved.\nI have added a python script to do that now. So the issue is not so\nblocking.\nbut what I am working on[1] is still challenging for me:(\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWqvwmo=NLPGa_OHXB4F+u4Ts1_3YRy9M6XTjLt9DKHvvg@mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan\n\nOn Tue, Aug 11, 2020 at 11:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Andy Fan <zhihui.fan1213@gmail.com> writes:\n> I want to write some test cases with extended query in core test system.\n\nWhy? (That is, what is it you need to test exactly?)\n Thanks for your attention.  The background is I hacked exec_bind_message[1],then I want to add some test cases to make sure the logic can be tested automaticallyin the core system.  I can't distinguish if the logic might be so straight or not so itdoesn't deserve the test in practice. \npsql has no ability to issue extended queries AFAIR, so the normal\nregression test scripts can't exercise this.  We haven't built anything\nfor it in the TAP infrastructure either.  We do have test coverage\nvia pgbench and ecpg, though I concede that's pretty indirect.\n\nI recall someone (Andres, possibly) speculating about building a tool\nspecifically to exercise low-level protocol issues, but that hasn't\nbeen done either.Thanks for this information. and Thanks Andres for the idea and practice. \n\nNone of these are necessarily germane to any particular test requirement,\nwhich is why I'm wondering.  The JDBC fragment you show seems like it's\nsomething that should be tested by, well, JDBC.  What's interesting about\nit for any other client?The main purpose is I want to test it in core without other infrastructure involved.I have added a python script to do that now. So the issue is not so blocking.but what I am working on[1] is still challenging for me:([1] https://www.postgresql.org/message-id/CAKU4AWqvwmo=NLPGa_OHXB4F+u4Ts1_3YRy9M6XTjLt9DKHvvg@mail.gmail.com -- Best RegardsAndy Fan",
"msg_date": "Wed, 12 Aug 2020 10:38:45 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Can I test Extended Query in core test framework"
},
{
"msg_contents": "Tatsuo Ishii san, a committer, proposed this to test extended query protocol. Can it be included in Postgres core?\r\n\r\nA toool to test programs by issuing frontend/backend protocol messages\r\nhttps://github.com/tatsuo-ishii/pgproto\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nTatsuo Ishii san, a committer, proposed this to test extended query protocol. Can it be included in Postgres core?\n \nA toool to test programs by issuing frontend/backend protocol messages\nhttps://github.com/tatsuo-ishii/pgproto\n \n \nRegards\nTakayuki Tsunakawa",
"msg_date": "Wed, 12 Aug 2020 02:57:58 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Can I test Extended Query in core test framework"
}
] |
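The thread above is about exercising the extended-query protocol directly, which comes down to emitting raw frontend messages. As a rough illustration (not code from the thread; Python standard library only), here is how a frontend 'Parse' message is laid out per the documented frontend/backend protocol: a type byte, an int32 length that counts itself, two NUL-terminated strings, an int16 parameter count, and one int32 type OID per parameter:

```python
import struct

def build_parse_message(stmt_name: str, query: str, param_oids=()):
    """Build a frontend 'Parse' ('P') message: type byte, int32 length
    (including the length field itself), statement name and query as
    NUL-terminated strings, int16 parameter count, then int32 OIDs."""
    body = stmt_name.encode() + b"\x00"
    body += query.encode() + b"\x00"
    body += struct.pack("!H", len(param_oids))  # network byte order
    for oid in param_oids:
        body += struct.pack("!I", oid)
    # The length field counts its own 4 bytes plus the body.
    return b"P" + struct.pack("!I", len(body) + 4) + body

# Named statement "s1" with one int4 parameter (OID 23).
msg = build_parse_message("s1", "SELECT $1::int", (23,))
```

A real harness such as Ishii-san's pgproto would follow this with Bind/Execute/Sync messages and read the backend's replies off the socket.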
[
{
"msg_contents": "I think this change neglected to add plpgsql to the extension\ndependencies in the .control file:\n\n12:53:51 # Failed test 'psql -qc 'CREATE EXTENSION \"cube\"''\n12:53:51 # at t/TestLib.pm line 213.\n12:53:51 not ok 68 - psql -qc 'CREATE EXTENSION \"cube\"'\n12:53:51 # got: '1'\n12:53:51 # expected: '0'\n12:53:51 not ok 69 - extension cube installs without error\n12:53:51 # Failed test 'extension cube installs without error'\n12:53:51 # at t/TestLib.pm line 214.\n12:53:51 # got: 'ERROR: language \"plpgsql\" does not exist\n12:53:51 # HINT: Use CREATE EXTENSION to load the language into the database.\n12:53:51 # '\n\n(The Debian regression tests remove plpgsql before testing all\nextensions in turn.)\n\nChristoph\n\n\n",
"msg_date": "Tue, 11 Aug 2020 13:15:44 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Make contrib modules' installation scripts more secure."
},
{
"msg_contents": "Re: To PostgreSQL Hackers\n> I think this change neglected to add plpgsql to the extension\n> dependencies in the .control file:\n> \n> 12:53:51 # Failed test 'psql -qc 'CREATE EXTENSION \"cube\"''\n> 12:53:51 # at t/TestLib.pm line 213.\n> 12:53:51 not ok 68 - psql -qc 'CREATE EXTENSION \"cube\"'\n> 12:53:51 # got: '1'\n> 12:53:51 # expected: '0'\n> 12:53:51 not ok 69 - extension cube installs without error\n> 12:53:51 # Failed test 'extension cube installs without error'\n> 12:53:51 # at t/TestLib.pm line 214.\n> 12:53:51 # got: 'ERROR: language \"plpgsql\" does not exist\n> 12:53:51 # HINT: Use CREATE EXTENSION to load the language into the database.\n> 12:53:51 # '\n\nOr maybe the argument is that the extension needs plpgsql only at\ninstall time, and not to run, and you could really remove it after the\nCREATE EXTENSION has been done. But that argument feels pretty icky.\nAnd dump-restore would fail.\n\nAt least the error message is very explicit.\n\nChristoph\n\n\n",
"msg_date": "Tue, 11 Aug 2020 13:54:48 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Make contrib modules' installation scripts more secure."
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> I think this change neglected to add plpgsql to the extension\n> dependencies in the .control file:\n\nAdding plpgsql to the extension's dependencies would be a cure worse\nthan the disease: it'd mean that you could not remove plpgsql from the\nsystem after installing cube, either. That is surely unhelpful from\nthe standpoint of someone who would like to have cube without plpgsql.\n\n> (The Debian regression tests remove plpgsql before testing all\n> extensions in turn.)\n\nMeh. I think that's testing a case that we don't guarantee to work.\nThere was already a plpgsql dependency in hstore--1.1--1.2.sql,\nwhich I just cribbed from to make these fixes.\n\nIn the long term, perhaps it'd be worth inventing a concept of an\n\"install-time dependency\", whereby an extension could name something\nit needs to have to run its script, but not necessarily afterwards.\nBut if you're someone who's afraid to have plpgsql installed, the\nidea that it can be sucked in on-demand, behind the scenes, might not\nmake you feel better either.\n\nA band-aid sort of fix would be to roll up the base install scripts\nfor these modules to the latest version, so that a plain install from\nscratch doesn't need to execute any of the catalog adjustments in\ntheir update scripts. That's not terribly attractive from a maintenance\nor testing standpoint, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Aug 2020 11:59:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Make contrib modules' installation scripts more secure."
},
{
"msg_contents": "Re: Tom Lane\n> > (The Debian regression tests remove plpgsql before testing all\n> > extensions in turn.)\n> \n> Meh. I think that's testing a case that we don't guarantee to work.\n> There was already a plpgsql dependency in hstore--1.1--1.2.sql,\n> which I just cribbed from to make these fixes.\n\nThe key difference is that hstore--1.1--1.2.sql was never required for\ninstalling an extension from scratch, only for upgrades. The practical\nrelevance of this distinction is that the upgrade scripts are only run\nonce, while install-time scripts (including the upgrade scripts for\nversions that do not have a direct creation script) are also required\nfor dump-restore cycles. As an admin, I'd very much hate databases\nthat couldn't be restored without extra fiddling.\n\nThe thing that maybe saves us here is that while hstore is trusted, so\nany user can create it, plpgsql is trusted as well, but owned by\npostgres, so even database owners can't drop it from beneath hstore.\nOnly superusers can \"mess up\" a database in that way. But still.\n\n> A band-aid sort of fix would be to roll up the base install scripts\n> for these modules to the latest version, so that a plain install from\n> scratch doesn't need to execute any of the catalog adjustments in\n> their update scripts. That's not terribly attractive from a maintenance\n> or testing standpoint, though.\n\nThat's a pretty small price compared to the dump-reload\ninconsistencies.\n\nI can see the extra maintenance effort, but how many extensions would\nrequire rewriting as direct-install.sql scripts? I guess it's only a\nfew that need plpgsql for upgrades.\n\nChristoph\n\n\n",
"msg_date": "Wed, 12 Aug 2020 12:04:27 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Make contrib modules' installation scripts more secure."
}
] |
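For reference, the `requires` clause debated above lives in the extension's `.control` file, which uses a simple `key = value` format. The following sketch (illustrative only; the authoritative parser is in the server's extension-loading code, and the `requires = 'plpgsql'` line here is hypothetical, since cube does not declare one) shows how such a file would be read:

```python
def parse_control(text: str) -> dict:
    """Parse the simple key = value lines of an extension .control
    file; '#' starts a comment, values may be single-quoted."""
    settings = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip().strip("'")
    return settings

# A hypothetical control file that *does* declare a plpgsql
# dependency, to show where such a requirement would live.
control = """\
# example extension
comment = 'demo of a control file'
default_version = '1.2'
relocatable = true
requires = 'plpgsql'
"""

cfg = parse_control(control)
requires = [r.strip() for r in cfg.get("requires", "").split(",") if r.strip()]
```

An "install-time dependency", as Tom sketches it, would presumably be a separate key in this same file, checked only while the install/update scripts run.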
[
{
"msg_contents": "Hi,\n\nIn case of smart shutdown postmaster receives SIGTERM from the pg_ctl,\nit \"disallows new connections, but lets existing sessions end their\nwork normally\". Which means that it doesn't abort any ongoing txns in\nany of the sessions and it lets the sessions to exit(on their own) and\nthen the postmaster is shut down. Looks like this behavior is true\nonly if the sessions are executing non-parallel queries. Parallel\nqueries are getting aborted, see [1].\n\nAlthough the postmaster receives two different signals for\nsmart(SIGTERM) and fast(SIGINT) shutdowns, it only sends SIGTERM to\nbgworkers for both the cases. (see pmdie() -> SignalSomeChildren() in\npostmaster.c). In\nStartBackgroundWorker(), bgworkers have the bgworker_die() as default\nhandler for SIGTERM, which just reports FATAL error. Is this handler\ncorrect for both fast and smart shutdown for all types of bgworkers?\n\nFor parallel workers in ParallelWorkerMain(), SIGTERM handler gets\nchanged to die()(which means for both smart and fast shutdowns, the\nsame handler gets used), which sets ProcDiePending = true; and later\nif the parallel workers try to CHECK_FOR_INTERRUPTS(); (for parallel\nseq scan, it is done in ExecScanFetch()), since ProcDiePending was set\nto true, the parallel workers throw error \"terminating connection due\nto administrator command\" in ProcessInterrupts().\nHaving die() as a handler for fast shutdown may be correct, but for\nsmart shutdown, as mentioned in $subject, it looks inconsistent.\n\n1. In general, do we need to allow postmaster to send different\nsignals to bgworkers for fast and smart shutdowns and let them\ndifferentiate the two modes(if needed)?\n2. Do we need to handle smart shutdown for dynamic bgworkers(parallel\nworkers) with a different signal than SIGTERM and retain SIGTERM\nhandler die() as is, since SIGTERM is being sent to bgworkers from\nother places as well? 
If we do so, we can block that new signal until\nthe parallel workers finish the current query execution or completely\nignore it in ParallelWorkerMain(). If the bgw_flags flag is\nBGWORKER_CLASS_PARALLEL, we can do some changes in postmaster's\nSignalSomeChildren() to detect and send that new signal. Looks like\nSIGUSR2 remains currently ignored for dynamic bgworker, and can be\nused for this purpose.\n\nThoughts?\n\nThanks @vignesh C for inputs and writeup review.\n\n[1] (smart shutdown issued with pg_ctl, while the parallel query is running).\npostgres=# EXPLAIN ANALYZE SELECT COUNT(*) FROM t_test t1, t_test t2\nWHERE t1.many = t2.many;\nERROR: terminating connection due to administrator command\nCONTEXT: parallel worker\n\n[2] The usual behavior of: smart shutdown - lets existing sessions end\ntheir work normally and fast shutdown - abort their current\ntransactions and exit promptly.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 11 Aug 2020 18:50:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Inconsistent behavior of smart shutdown handling for queries with and\n without parallel workers"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> In case of smart shutdown postmaster receives SIGTERM from the pg_ctl,\n> it \"disallows new connections, but lets existing sessions end their\n> work normally\". Which means that it doesn't abort any ongoing txns in\n> any of the sessions and it lets the sessions to exit(on their own) and\n> then the postmaster is shut down. Looks like this behavior is true\n> only if the sessions are executing non-parallel queries. Parallel\n> queries are getting aborted, see [1].\n\nHm. I kind of wonder why we're killing *anything* early in the\nsmart-shutdown case. postmaster.c has (in pmdie()):\n\n /* autovac workers are told to shut down immediately */\n /* and bgworkers too; does this need tweaking? */\n SignalSomeChildren(SIGTERM,\n BACKEND_TYPE_AUTOVAC | BACKEND_TYPE_BGWORKER);\n /* and the autovac launcher too */\n if (AutoVacPID != 0)\n signal_child(AutoVacPID, SIGTERM);\n /* and the bgwriter too */\n if (BgWriterPID != 0)\n signal_child(BgWriterPID, SIGTERM);\n /* and the walwriter too */\n if (WalWriterPID != 0)\n signal_child(WalWriterPID, SIGTERM);\n\nand it seems like every one of those actions is pretty damfool if we want\nthe existing sessions to continue to have normal performance, quite aside\nfrom whether we're breaking parallel queries. Indeed, to the extent that\nthis is hurting performance, we can expect the existing sessions to take\n*longer* to finish, making this pretty counterproductive.\n\nSo I'm thinking we should move all of these actions to happen only after\nthe regular children are all gone.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Aug 2020 17:28:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent behavior of smart shutdown handling for queries with\n and without parallel workers"
},
{
"msg_contents": "I think the inconsistent behaviour reported in this thread gets\nresolved with the approach and patch being discussed in [1].\n\n>\n> 1. In general, do we need to allow postmaster to send different\n> signals to bgworkers for fast and smart shutdowns and let them\n> differentiate the two modes(if needed)?\n>\n\nIs there any way the bgworkers(for that matter, any postmaster's child\nprocess) knowing that there's a smart shutdown pending? This is\nuseful, if any of the bgworker(if not parallel workers) want to\ndifferentiate the two modes i.e. smart and fast shutdown modes and\nsmartly finish of their work.\n\n[1] - https://www.postgresql.org/message-id/469199.1597337108%40sss.pgh.pa.us\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 14 Aug 2020 11:02:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent behavior of smart shutdown handling for queries with\n and without parallel workers"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> Is there any way the bgworkers(for that matter, any postmaster's child\n> process) knowing that there's a smart shutdown pending? This is\n> useful, if any of the bgworker(if not parallel workers) want to\n> differentiate the two modes i.e. smart and fast shutdown modes and\n> smartly finish of their work.\n\nWith the patch I'm working on, the approach is basically that smart\nshutdown changes nothing except for not allowing new connections ...\nuntil the last regular connection is gone, at which point it starts to\nact exactly like fast shutdown. So in those terms there is no need for\nbgworkers to know the difference. If a bgworker did act differently\nduring the initial phase of a smart shutdown, that would arguably be\na bug, just as it's a bug that parallel query isn't working.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Aug 2020 10:40:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent behavior of smart shutdown handling for queries with\n and without parallel workers"
}
] |
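The two-signal idea floated in the thread (keep SIGTERM for fast shutdown, add a second signal such as SIGUSR2 so workers can finish the query in flight) can be illustrated with a minimal sketch. The signal assignment and handler names here are hypothetical; the real bgworker handlers (`bgworker_die()`, `die()`) live in the PostgreSQL backend:

```python
import os
import signal

shutdown_mode = None  # set by whichever handler fires first

def handle_fast(signum, frame):
    # SIGTERM today: the worker aborts its current work promptly.
    global shutdown_mode
    shutdown_mode = "fast"

def handle_smart(signum, frame):
    # Hypothetical second signal (SIGUSR2 was suggested upthread)
    # letting a worker finish the current query before exiting.
    global shutdown_mode
    shutdown_mode = "smart"

signal.signal(signal.SIGTERM, handle_fast)
signal.signal(signal.SIGUSR2, handle_smart)

# Deliver the "smart" signal to ourselves, the way a postmaster-like
# parent would signal a worker process on smart shutdown.
os.kill(os.getpid(), signal.SIGUSR2)
```

With distinct handlers installed, a worker could block or defer the "smart" signal until the current query finishes, which is exactly the differentiation SIGTERM alone cannot express.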
[
{
"msg_contents": "There are two ancient hacks in the cygwin and solaris ports that appear \nto have been solved more than 10 years ago, so I think we can remove \nthem. See attached patches.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 12 Aug 2020 09:12:07 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "remove some ancient port hacks"
},
{
"msg_contents": "On 12.08.2020 09:12, Peter Eisentraut wrote:\n> There are two ancient hacks in the cygwin and solaris ports that appear \n> to have been solved more than 10 years ago, so I think we can remove \n> them. See attached patches.\n> \n\nHi Peter,\nThis is really archeology\n\n Check for b20.1\n\nas it was released in 1998.\nNo problem at all to remove it\n\nRegards Marco\nCygwin Package Maintainer\n\n\n",
"msg_date": "Wed, 12 Aug 2020 10:18:00 +0200",
"msg_from": "Marco Atzeri <marco.atzeri@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove some ancient port hacks"
},
{
"msg_contents": "On Wed, Aug 12, 2020 at 09:12:07AM +0200, Peter Eisentraut wrote:\n> There are two ancient hacks in the cygwin and solaris ports that appear to\n> have been solved more than 10 years ago, so I think we can remove them. See\n> attached patches.\n\n+1 for removing these. >10y age is not sufficient justification by itself; if\nsystems that shipped with the defect were not yet EOL, that would tend to\njustify waiting longer. For these particular hacks, though, affected systems\nare both old and EOL.\n\n\n",
"msg_date": "Wed, 12 Aug 2020 20:22:52 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: remove some ancient port hacks"
},
{
"msg_contents": "On 2020-08-12 10:18, Marco Atzeri wrote:\n> On 12.08.2020 09:12, Peter Eisentraut wrote:\n>> There are two ancient hacks in the cygwin and solaris ports that appear\n>> to have been solved more than 10 years ago, so I think we can remove\n>> them. See attached patches.\n>>\n> \n> Hi Peter,\n> This is really archeology\n> \n> Check for b20.1\n> \n> as it was released in 1998.\n> No problem at all to remove it\n\nCommitted. Thanks for the feedback.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 15 Aug 2020 11:39:40 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove some ancient port hacks"
},
{
"msg_contents": "On 2020-08-13 05:22, Noah Misch wrote:\n> On Wed, Aug 12, 2020 at 09:12:07AM +0200, Peter Eisentraut wrote:\n>> There are two ancient hacks in the cygwin and solaris ports that appear to\n>> have been solved more than 10 years ago, so I think we can remove them. See\n>> attached patches.\n> \n> +1 for removing these. >10y age is not sufficient justification by itself; if\n> systems that shipped with the defect were not yet EOL, that would tend to\n> justify waiting longer. For these particular hacks, though, affected systems\n> are both old and EOL.\n\ndone\n\nIn this case, the bug was fixed in the stable release track of this OS, \nso the only way to still be affected would be if you had never installed \nany OS patches in 10 years, which is clearly unreasonable.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 15 Aug 2020 11:41:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove some ancient port hacks"
}
] |
[
{
"msg_contents": "Here is a patch to have pg_dump use pg_get_functiondef() instead of \nassembling the CREATE FUNCTION/PROCEDURE commands itself. This should \nsave on maintenance effort in the future. It's also a prerequisite for \nbeing able to dump functions with SQL-standard function body discussed \nin [0].\n\npg_get_functiondef() was meant for psql's \\ef command, so its defaults \nare slightly different from what pg_dump would like, so this adds a few\noptional parameters for tweaking the behavior. The naming of the \nparameters is up for discussion.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/1c11f1eb-f00c-43b7-799d-2d44132c02d7@2ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 12 Aug 2020 10:25:31 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "use pg_get_functiondef() in pg_dump"
},
{
"msg_contents": "On Wed, Aug 12, 2020 at 4:25 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Here is a patch to have pg_dump use pg_get_functiondef() instead of\n> assembling the CREATE FUNCTION/PROCEDURE commands itself. This should\n> save on maintenance effort in the future. It's also a prerequisite for\n> being able to dump functions with SQL-standard function body discussed\n> in [0].\n>\n> pg_get_functiondef() was meant for psql's \\ef command, so its defaults\n> are slightly different from what pg_dump would like, so this adds a few\n> optional parameters for tweaking the behavior. The naming of the\n> parameters is up for discussion.\n\nOne problem with this, which I think Tom pointed out before, is that\nit might make it to handle some forward-compatibility problems. In\nother words, if something that the server is generating needs to be\nmodified for compatibility with a future release, it's not easy to do\nthat. Like if we needed to quote something we weren't previously\nquoting, for example.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 12 Aug 2020 15:54:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: use pg_get_functiondef() in pg_dump"
},
{
"msg_contents": "On 2020-08-12 21:54, Robert Haas wrote:\n> One problem with this, which I think Tom pointed out before, is that\n> it might make it to handle some forward-compatibility problems. In\n> other words, if something that the server is generating needs to be\n> modified for compatibility with a future release, it's not easy to do\n> that. Like if we needed to quote something we weren't previously\n> quoting, for example.\n\nWe already use a lot of other pg_get_*def functions in pg_dump. Does \nthis one introduce any fundamentally new problems?\n\nA hypothetical change where syntax that we accept now would no longer be \naccepted in a (near-)future version would create a lot of upsetness. I \ndon't think we'd do it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 15 Aug 2020 11:49:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: use pg_get_functiondef() in pg_dump"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-08-12 21:54, Robert Haas wrote:\n>> One problem with this, which I think Tom pointed out before, is that\n>> it might make it to handle some forward-compatibility problems. In\n>> other words, if something that the server is generating needs to be\n>> modified for compatibility with a future release, it's not easy to do\n>> that. Like if we needed to quote something we weren't previously\n>> quoting, for example.\n\n> We already use a lot of other pg_get_*def functions in pg_dump. Does \n> this one introduce any fundamentally new problems?\n\nI wouldn't say that it's *fundamentally* new, but nonethless it disturbs\nme that this proposal has pg_dump assembling CREATE FUNCTION commands in\nvery different ways depending on the server version. I'd rather see us\ncontinuing to build the bulk of the command the same as before, and\nintroduce new behavior only for deparsing the function body.\n\nWe've talked before about what a mess it is that some aspects of pg_dump's\noutput are built on the basis of what pg_dump sees in its stable snapshot\nbut others are built by ruleutils.c on the basis of up-to-the-minute\ncatalog contents. While I don't insist that this patch fix that, I'm\nworried that it may be making things worse, or at least getting in the\nway of ever fixing that.\n\nPerhaps these concerns are unfounded, but I'd like to see some arguments\nwhy before we go down this path.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 15 Aug 2020 10:23:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: use pg_get_functiondef() in pg_dump"
},
{
"msg_contents": "I wrote:\n> I wouldn't say that it's *fundamentally* new, but nonethless it disturbs\n> me that this proposal has pg_dump assembling CREATE FUNCTION commands in\n> very different ways depending on the server version. I'd rather see us\n> continuing to build the bulk of the command the same as before, and\n> introduce new behavior only for deparsing the function body.\n\nBTW, a concrete argument for doing it that way is that if you make a\nbackend function that does the whole CREATE-FUNCTION-building job in\nexactly the way pg_dump wants it, that function is nigh useless for\nany other client with slightly different requirements. A trivial\nexample here is that I don't think we want to become locked into\nthe proposition that psql's \\ef and \\sf must print functions exactly\nthe same way that pg_dump would.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 15 Aug 2020 10:36:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: use pg_get_functiondef() in pg_dump"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> I wrote:\n> > I wouldn't say that it's *fundamentally* new, but nonethless it disturbs\n> > me that this proposal has pg_dump assembling CREATE FUNCTION commands in\n> > very different ways depending on the server version. I'd rather see us\n> > continuing to build the bulk of the command the same as before, and\n> > introduce new behavior only for deparsing the function body.\n> \n> BTW, a concrete argument for doing it that way is that if you make a\n> backend function that does the whole CREATE-FUNCTION-building job in\n> exactly the way pg_dump wants it, that function is nigh useless for\n> any other client with slightly different requirements. A trivial\n> example here is that I don't think we want to become locked into\n> the proposition that psql's \\ef and \\sf must print functions exactly\n> the same way that pg_dump would.\n\nThe fact that the need that psql has and that which pg_dump has are at\nleast somewhat similar really argues that we should have put this code\ninto libpgcommon in the first place and not in the backend, and then had\nboth psql and pg_dump use that.\n\nI'm sure there's a lot of folks who'd like to see more of the logic we\nhave in pg_dump for building objects from the catalog available to more\ntools through libpgcommon- psql being one of the absolute first\nuse-cases for exactly that (there's certainly no shortage of people\nwho've asked how they can get a CREATE TABLE statement for a table by\nusing psql...).\n\nThanks,\n\nStephen",
"msg_date": "Sat, 15 Aug 2020 13:39:40 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: use pg_get_functiondef() in pg_dump"
},
{
"msg_contents": ">\n> I'm sure there's a lot of folks who'd like to see more of the logic we\n> have in pg_dump for building objects from the catalog available to more\n> tools through libpgcommon- psql being one of the absolute first\n> use-cases for exactly that (there's certainly no shortage of people\n> who've asked how they can get a CREATE TABLE statement for a table by\n> using psql...).\n>\n\nI count myself among those folks (see\nhttps://www.postgresql.org/message-id/CADkLM%3DfxfsrHASKk_bY_A4uomJ1Te5MfGgD_rwwQfV8wP68ewg%40mail.gmail.com\nfor\ndiscussion of doing DESCRIBE and SHOW CREATE-ish functions either on server\nside or client side).\n\nI'm all for having this as \"just\" as set of pg_get_*def functions, because\nthey allow for the results to be used in queries. Granted, the shape of the\nresult set may not be stable, but that's the sort of thing we can warn for\nthe same way we have warnings for changes to pg_stat_activity. At that\npoint any DESCRIBE/SHOW CREATE server side functions essentially become\njust shells around the pg_get_*def(), with no particular requirement to\nmake those new commands work inside a SELECT.\n\nWould it be totally out of left field to have the functions have an\noptional \"version\" parameter, defaulted to null, that would be used to give\nbackwards compatible results if and when we do make a breaking change?\n\nI'm sure there's a lot of folks who'd like to see more of the logic we\nhave in pg_dump for building objects from the catalog available to more\ntools through libpgcommon- psql being one of the absolute first\nuse-cases for exactly that (there's certainly no shortage of people\nwho've asked how they can get a CREATE TABLE statement for a table by\nusing psql...).I count myself among those folks (see https://www.postgresql.org/message-id/CADkLM%3DfxfsrHASKk_bY_A4uomJ1Te5MfGgD_rwwQfV8wP68ewg%40mail.gmail.com for discussion of doing DESCRIBE and SHOW CREATE-ish functions either on server side or client side).I'm 
all for having this as \"just\" as set of pg_get_*def functions, because they allow for the results to be used in queries. Granted, the shape of the result set may not be stable, but that's the sort of thing we can warn for the same way we have warnings for changes to pg_stat_activity. At that point any DESCRIBE/SHOW CREATE server side functions essentially become just shells around the pg_get_*def(), with no particular requirement to make those new commands work inside a SELECT.Would it be totally out of left field to have the functions have an optional \"version\" parameter, defaulted to null, that would be used to give backwards compatible results if and when we do make a breaking change?",
"msg_date": "Mon, 17 Aug 2020 19:54:20 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: use pg_get_functiondef() in pg_dump"
},
{
"msg_contents": "Greetings,\n\n* Corey Huinker (corey.huinker@gmail.com) wrote:\n> > I'm sure there's a lot of folks who'd like to see more of the logic we\n> > have in pg_dump for building objects from the catalog available to more\n> > tools through libpgcommon- psql being one of the absolute first\n> > use-cases for exactly that (there's certainly no shortage of people\n> > who've asked how they can get a CREATE TABLE statement for a table by\n> > using psql...).\n> \n> I count myself among those folks (see\n> https://www.postgresql.org/message-id/CADkLM%3DfxfsrHASKk_bY_A4uomJ1Te5MfGgD_rwwQfV8wP68ewg%40mail.gmail.com\n> for\n> discussion of doing DESCRIBE and SHOW CREATE-ish functions either on server\n> side or client side).\n> \n> I'm all for having this as \"just\" as set of pg_get_*def functions, because\n> they allow for the results to be used in queries. Granted, the shape of the\n> result set may not be stable, but that's the sort of thing we can warn for\n> the same way we have warnings for changes to pg_stat_activity. At that\n> point any DESCRIBE/SHOW CREATE server side functions essentially become\n> just shells around the pg_get_*def(), with no particular requirement to\n> make those new commands work inside a SELECT.\n\nAnother advantage of having this in libpgcommon is that the backend\n*and* the frontend could then use it.\n\n> Would it be totally out of left field to have the functions have an\n> optional \"version\" parameter, defaulted to null, that would be used to give\n> backwards compatible results if and when we do make a breaking change?\n\nSo.. the code that's in pg_dump today works to go from \"whatever the\nconnected server's version is\" to \"whatever the version is of the\npg_dump command itself\". 
If we had the code in libpgcommon, and\nfunctions in the backend to get at it along with psql having that code,\nyou could then, using the code we have today, go from a bunch of\n'source' versions to 'target' version of either the version of the psql\ncommand, or that of the server.\n\nThat is, consider a future where this is all done and all that crazy\nversion-specific code in pg_dump has been moved to libpgcommon in v14,\nand then you have a v15 psql, so:\n\npsql v15 connected to PG v14:\n\nYou do: \\dct mytable -- psql internal command to get 'create table'\nResult: a CREATE TABLE that works for v15\n\nYou do: DESCRIBE mytable; -- PG backend function to get 'create table'\nResult: a CREATE TABLE that works for v14\n\nWithout having to add anything to what we're already doing (yes, yes,\nbeyond the complications of moving this stuff into libpgcommon, but at\nleast we're not having to create some kind of matrix of \"source PG\nversion 10, target PG version 12\" into PG14).\n\nA bit crazy, sure, but would certainly be pretty useful.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 18 Aug 2020 09:18:03 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: use pg_get_functiondef() in pg_dump"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> So.. the code that's in pg_dump today works to go from \"whatever the\n> connected server's version is\" to \"whatever the version is of the\n> pg_dump command itself\". If we had the code in libpgcommon, and\n> functions in the backend to get at it along with psql having that code,\n> you could then, using the code we have today, go from a bunch of\n> 'source' versions to 'target' version of either the version of the psql\n> command, or that of the server.\n\nAt this point, I think I need a high-power telescope even to see the\ngoalposts :-(\n\nIf we actually want to do something like this, we need a plan not just\nsome handwaving. Let's start by enumerating the concerns that would\nhave to be solved. I can think of:\n\n* Execution context. Stephen seems to be envisioning code that could be\ncompiled into the backend not just the frontend, but is that really worth\nthe trouble? Could we share such code across FE/BE at all (it'd certainly\nbe a far more ambitious exercise in common code than we've done to date)?\nWhat's the backend version actually doing, issuing queries over SPI?\n(I suppose if you were rigid about that, it could offer a guarantee\nthat the results match your snapshot, which is pretty attractive.)\n\n* Global vs. per-object activity. pg_dump likes to query the entire state\nof the database to start with, and then follow up by grabbing additional\ndetails about objects it's going to dump. That's not an operating mode\nthat most other clients would want, but if for no other reason than\nperformance, I don't think we can walk away from it for pg_dump ---\nindeed, I think pg_dump probably needs to be fixed to do less per-object\nquerying, not more. Meanwhile applications such as psql \\d would only\nwant to investigate one object at a time. What design can we create that\nwill handle that? 
If there is persistent state involved, what in the\nworld does that mean for the case of a backend-side library?\n\n* Context in which the output is valid. Target server version was already\nmentioned, but a quick examination of pg_dump output scripts will remind\nyou that there's a bunch more assumptions:\n\nSET statement_timeout = 0;\nSET lock_timeout = 0;\nSET idle_in_transaction_session_timeout = 0;\nSET client_encoding = 'en_US.utf8';\nSET standard_conforming_strings = on;\nSELECT pg_catalog.set_config('search_path', '', false);\nSET check_function_bodies = false;\nSET xmloption = content;\nSET client_min_messages = warning;\nSET row_security = off;\n\nnot to mention special hackery for object ownership and tablespaces.\nSome of these things probably don't matter for other use-cases, but\nothers definitely do. In particular, I really doubt that psql and\nother clients would find it acceptable to force search_path to a\nparticular thing. Which brings us to\n\n* Security. How robust do the output commands need to be, and\nwhat will we have to do that pg_dump doesn't need to?\n\n* No doubt there are some other topics I didn't think of.\n\nThis certainly would be attractive if we had it, but the task\nseems dauntingly large. It's not going to happen without some\nfairly serious investment of time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Aug 2020 13:03:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: use pg_get_functiondef() in pg_dump"
},
{
"msg_contents": "On 2020-08-15 16:36, Tom Lane wrote:\n> I wrote:\n>> I wouldn't say that it's *fundamentally* new, but nonethless it disturbs\n>> me that this proposal has pg_dump assembling CREATE FUNCTION commands in\n>> very different ways depending on the server version. I'd rather see us\n>> continuing to build the bulk of the command the same as before, and\n>> introduce new behavior only for deparsing the function body.\n> \n> BTW, a concrete argument for doing it that way is that if you make a\n> backend function that does the whole CREATE-FUNCTION-building job in\n> exactly the way pg_dump wants it, that function is nigh useless for\n> any other client with slightly different requirements. A trivial\n> example here is that I don't think we want to become locked into\n> the proposition that psql's \\ef and \\sf must print functions exactly\n> the same way that pg_dump would.\n\nThat's why the patch adds optional arguments to the function to choose \nthe behavior that is appropriate for the situation.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 26 Aug 2020 13:13:22 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: use pg_get_functiondef() in pg_dump"
},
{
"msg_contents": "On 2020-08-15 16:23, Tom Lane wrote:\n> I wouldn't say that it's*fundamentally* new, but nonethless it disturbs\n> me that this proposal has pg_dump assembling CREATE FUNCTION commands in\n> very different ways depending on the server version. I'd rather see us\n> continuing to build the bulk of the command the same as before, and\n> introduce new behavior only for deparsing the function body.\n\nOK, I'll work on something like that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 26 Aug 2020 13:14:41 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: use pg_get_functiondef() in pg_dump"
}
] |
[
{
"msg_contents": "Hi,\n\nAfter a smart shutdown is issued(with pg_ctl), run a parallel query,\nthen the query hangs. The postmaster doesn't inform backends about the\nsmart shutdown(see pmdie() -> SIGTERM -> BACKEND_TYPE_NORMAL are not\ninformed), so if they request parallel workers, the postmaster is\nunable to fork any workers as it's status(pmState) gets changed to\nPM_WAIT_BACKENDS(see maybe_start_bgworkers() -->\nbgworker_should_start_now() returns false).\n\nFew ways we could solve this:\n1. Do we want to disallow parallelism when there is a pending smart\nshutdown? - If yes, then, we can let the postmaster know the regular\nbackends whenever a smart shutdown is received and the backends use\nthis info to not consider parallelism. If we use SIGTERM to notify,\nsince the backends have die() as handlers, they just cancel the\nqueries which is again an inconsistent behaviour[1]. Would any other\nsignal like SIGUSR2(I think it's currently ignored by backends) be\nused here? If the signals are overloaded, can we multiplex SIGTERM\nsimilar to SIGUSR1? If we don't want to use signals at all, the\npostmaster can make an entry of it's status in bg worker shared memory\ni.e. BackgroundWorkerData, RegisterDynamicBackgroundWorker() can\nsimply return, without requesting the postmaster for parallel workers.\n\n2. If we want to allow parallelism, then, we can tweak\nbgworker_should_start_now(), detect that the pending bg worker fork\nrequests are for parallelism, and let the postmaster start the\nworkers.\n\nThoughts?\n\nNote: this issue is identified while working on [1]\n\n[1] - https://www.postgresql.org/message-id/CALj2ACWTAQ2uWgj4yRFLQ6t15MMYV_uc3GCT5F5p8R9pzrd7yQ%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Aug 2020 21:02:35 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 3:32 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> After a smart shutdown is issued(with pg_ctl), run a parallel query,\n> then the query hangs. The postmaster doesn't inform backends about the\n> smart shutdown(see pmdie() -> SIGTERM -> BACKEND_TYPE_NORMAL are not\n> informed), so if they request parallel workers, the postmaster is\n> unable to fork any workers as it's status(pmState) gets changed to\n> PM_WAIT_BACKENDS(see maybe_start_bgworkers() -->\n> bgworker_should_start_now() returns false).\n>\n> Few ways we could solve this:\n> 1. Do we want to disallow parallelism when there is a pending smart\n> shutdown? - If yes, then, we can let the postmaster know the regular\n> backends whenever a smart shutdown is received and the backends use\n> this info to not consider parallelism. If we use SIGTERM to notify,\n> since the backends have die() as handlers, they just cancel the\n> queries which is again an inconsistent behaviour[1]. Would any other\n> signal like SIGUSR2(I think it's currently ignored by backends) be\n> used here? If the signals are overloaded, can we multiplex SIGTERM\n> similar to SIGUSR1? If we don't want to use signals at all, the\n> postmaster can make an entry of it's status in bg worker shared memory\n> i.e. BackgroundWorkerData, RegisterDynamicBackgroundWorker() can\n> simply return, without requesting the postmaster for parallel workers.\n>\n> 2. If we want to allow parallelism, then, we can tweak\n> bgworker_should_start_now(), detect that the pending bg worker fork\n> requests are for parallelism, and let the postmaster start the\n> workers.\n>\n> Thoughts?\n\nHello Bharath,\n\nYeah, the current situation is not good. I think your option 2 sounds\nbetter, because the documented behaviour of smart shutdown is that it\n\"lets existing sessions end their work normally\". 
I think that means\nthat a query that is already running or allowed to start should be\nable to start new workers and not have its existing workers\nterminated. Arseny Sher wrote a couple of different patches to try\nthat last year, but they fell through the cracks:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGLrJij0BuFtHsMHT4QnLP54Z3S6vGVBCWR8A49%2BNzctCw%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 13 Aug 2020 04:56:03 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Aug 13, 2020 at 3:32 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> After a smart shutdown is issued(with pg_ctl), run a parallel query,\n>> then the query hangs.\n\n> Yeah, the current situation is not good. I think your option 2 sounds\n> better, because the documented behaviour of smart shutdown is that it\n> \"lets existing sessions end their work normally\". I think that means\n> that a query that is already running or allowed to start should be\n> able to start new workers and not have its existing workers\n> terminated. Arseny Sher wrote a couple of different patches to try\n> that last year, but they fell through the cracks:\n> https://www.postgresql.org/message-id/flat/CA%2BhUKGLrJij0BuFtHsMHT4QnLP54Z3S6vGVBCWR8A49%2BNzctCw%40mail.gmail.com\n\nI already commented on this in the other thread that Bharath started [1].\nI think the real issue here is why is the postmaster's SIGTERM handler\ndoing *anything* other than disallowing new connections? It seems quite\npremature to kill support processes of any sort, not only parallel\nworkers. The documentation says existing clients are allowed to end\ntheir work, not that their performance is going to be crippled until\nthey end.\n\nSo I looked at moving the kills of all the support processes to happen\nafter we detect that there are no remaining regular backends, and it\nseems to not be too hard. I realized that the existing PM_WAIT_READONLY\nstate is doing that already, but just for a subset of support processes\nthat it thinks might be active in hot standby mode. So what I did in the\nattached was to repurpose that state as \"PM_WAIT_CLIENTS\", which does the\nright thing in either regular or hot standby mode.\n\nOne other thing I changed here was to remove PM_WAIT_READONLY from the\nset of states in which we'll allow promotion to occur or a new walreceiver\nto start. 
I'm not convinced that either of those behaviors aren't\nbugs; although if someone thinks they're right, we can certainly put\nback PM_WAIT_CLIENTS in those checks. (But, for example, it does not\nappear possible to reach PM_WAIT_READONLY/PM_WAIT_CLIENTS state with\nShutdown == NoShutdown, so the test in MaybeStartWalReceiver sure looks\nlike confusingly dead code to me. If we do want to allow restarting\nthe walreceiver in this state, the Shutdown condition needs fixed.)\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/65189.1597181322%40sss.pgh.pa.us",
"msg_date": "Wed, 12 Aug 2020 14:00:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 6:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Thu, Aug 13, 2020 at 3:32 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> After a smart shutdown is issued(with pg_ctl), run a parallel query,\n> >> then the query hangs.\n>\n> > Yeah, the current situation is not good. I think your option 2 sounds\n> > better, because the documented behaviour of smart shutdown is that it\n> > \"lets existing sessions end their work normally\". I think that means\n> > that a query that is already running or allowed to start should be\n> > able to start new workers and not have its existing workers\n> > terminated. Arseny Sher wrote a couple of different patches to try\n> > that last year, but they fell through the cracks:\n> > https://www.postgresql.org/message-id/flat/CA%2BhUKGLrJij0BuFtHsMHT4QnLP54Z3S6vGVBCWR8A49%2BNzctCw%40mail.gmail.com\n>\n> I already commented on this in the other thread that Bharath started [1].\n> I think the real issue here is why is the postmaster's SIGTERM handler\n> doing *anything* other than disallowing new connections? It seems quite\n> premature to kill support processes of any sort, not only parallel\n> workers. The documentation says existing clients are allowed to end\n> their work, not that their performance is going to be crippled until\n> they end.\n\nRight. It's pretty strange that during smart shutdown, you could run\nfor hours with no autovacuum, walwriter, bgwriter. I guess Arseny and\nI were looking for a minimal change to fix a bug, but clearly there's a\nmore general problem and this change works out cleaner anyway.\n\n> So I looked at moving the kills of all the support processes to happen\n> after we detect that there are no remaining regular backends, and it\n> seems to not be too hard. 
I realized that the existing PM_WAIT_READONLY\n> state is doing that already, but just for a subset of support processes\n> that it thinks might be active in hot standby mode. So what I did in the\n> attached was to repurpose that state as \"PM_WAIT_CLIENTS\", which does the\n> right thing in either regular or hot standby mode.\n\nMake sense, works as expected and passes check-world.\n\n> One other thing I changed here was to remove PM_WAIT_READONLY from the\n> set of states in which we'll allow promotion to occur or a new walreceiver\n> to start. I'm not convinced that either of those behaviors aren't\n> bugs; although if someone thinks they're right, we can certainly put\n> back PM_WAIT_CLIENTS in those checks. (But, for example, it does not\n> appear possible to reach PM_WAIT_READONLY/PM_WAIT_CLIENTS state with\n> Shutdown == NoShutdown, so the test in MaybeStartWalReceiver sure looks\n> like confusingly dead code to me. If we do want to allow restarting\n> the walreceiver in this state, the Shutdown condition needs fixed.)\n\nIf a walreceiver is allowed to run, why should it not be allowed to\nrestart? Yeah, I suppose that other test'd need to be Shutdown <=\nSmartShutdown, just like we do in SIGHUP_handler(). Looking at other\nplaces where we test Shutdown == NoShutdown, one that jumps out is the\nautovacuum wraparound defence stuff and the nearby\nPMSIGNAL_START_AUTOVAC_WORKER code.\n\n\n",
"msg_date": "Thu, 13 Aug 2020 06:54:38 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Aug 13, 2020 at 6:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> One other thing I changed here was to remove PM_WAIT_READONLY from the\n>> set of states in which we'll allow promotion to occur or a new walreceiver\n>> to start. I'm not convinced that either of those behaviors aren't\n>> bugs; although if someone thinks they're right, we can certainly put\n>> back PM_WAIT_CLIENTS in those checks. (But, for example, it does not\n>> appear possible to reach PM_WAIT_READONLY/PM_WAIT_CLIENTS state with\n>> Shutdown == NoShutdown, so the test in MaybeStartWalReceiver sure looks\n>> like confusingly dead code to me. If we do want to allow restarting\n>> the walreceiver in this state, the Shutdown condition needs fixed.)\n\n> If a walreceiver is allowed to run, why should it not be allowed to\n> restart?\n\nI'd come to about the same conclusion after thinking more, so v2\nattached undoes that change. I think putting off promotion is fine\nthough; it'll get handled at the next postmaster start. (It looks\nlike the state machine would just proceed to exit anyway if we allowed\nthe promotion, but that's a hard-to-test state transition that we could\ndo without.)\n\n> Yeah, I suppose that other test'd need to be Shutdown <=\n> SmartShutdown, just like we do in SIGHUP_handler(). Looking at other\n> places where we test Shutdown == NoShutdown, one that jumps out is the\n> autovacuum wraparound defence stuff and the nearby\n> PMSIGNAL_START_AUTOVAC_WORKER code.\n\nOh, excellent point! I'd not thought to look at tests of the Shutdown\nvariable, but yeah, those should be <= SmartShutdown if we want autovac\nto continue to operate in this state.\n\nI also noticed that where reaper() is dealing with startup process\nexit(3), it unconditionally sets Shutdown = SmartShutdown which seems\npretty bogus; that variable's value should never be allowed to decrease,\nbut this could cause it. 
In the attached I did\n\n StartupStatus = STARTUP_NOT_RUNNING;\n- Shutdown = SmartShutdown;\n+ Shutdown = Max(Shutdown, SmartShutdown);\n TerminateChildren(SIGTERM);\n\nBut given that it's forcing immediate termination of all backends,\nI wonder if that's not more like a FastShutdown? (Scary here is\nthat the coverage report shows we're not testing this path, so who\nknows if it works at all.)\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 12 Aug 2020 15:28:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "I wrote:\n> Oh, excellent point! I'd not thought to look at tests of the Shutdown\n> variable, but yeah, those should be <= SmartShutdown if we want autovac\n> to continue to operate in this state.\n\nOn looking closer, there's another problem: setting start_autovac_launcher\nisn't enough to get the AV launcher to run, because ServerLoop() won't\nlaunch it except in PM_RUN state. Likewise, the other \"relaunch a dead\nprocess\" checks in ServerLoop() need to be generalized to support\nrelaunching background processes while we're waiting out the foreground\nclients. So that leads me to the attached v3. I had to re-instantiate\nPM_WAIT_READONLY as an alternate state to PM_WAIT_CLIENTS; these states\nare about the same so far as PostmasterStateMachine is concerned, but\nsome of the should-we-launch-FOO checks care about the difference.\n\nThe various pmState tests are getting messy enough to cry out for\nrefactorization, but I've not attempted that here. There's enough\nvariance in the conditions for launching different subprocesses that\nI'm not very sure what would be a nicer-looking way to write them.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 12 Aug 2020 16:59:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 8:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Oh, excellent point! I'd not thought to look at tests of the Shutdown\n> > variable, but yeah, those should be <= SmartShutdown if we want autovac\n> > to continue to operate in this state.\n>\n> On looking closer, there's another problem: setting start_autovac_launcher\n> isn't enough to get the AV launcher to run, because ServerLoop() won't\n> launch it except in PM_RUN state. Likewise, the other \"relaunch a dead\n> process\" checks in ServerLoop() need to be generalized to support\n> relaunching background processes while we're waiting out the foreground\n> clients. So that leads me to the attached v3. I had to re-instantiate\n> PM_WAIT_READONLY as an alternate state to PM_WAIT_CLIENTS; these states\n> are about the same so far as PostmasterStateMachine is concerned, but\n> some of the should-we-launch-FOO checks care about the difference.\n\nI think we also need:\n\n@@ -2459,6 +2459,9 @@ canAcceptConnections(int backend_type)\n {\n if (pmState == PM_WAIT_BACKUP)\n result = CAC_WAITBACKUP; /* allow\nsuperusers only */\n+ else if (Shutdown <= SmartShutdown &&\n+ backend_type == BACKEND_TYPE_AUTOVAC)\n+ result = CAC_OK;\n else if (Shutdown > NoShutdown)\n return CAC_SHUTDOWN; /* shutdown is pending */\n else if (!FatalError &&\n\n\nRetesting the original complaint, I think we need:\n\n@@ -5911,11 +5912,11 @@ bgworker_should_start_now(BgWorkerStartTime start_time)\n case PM_SHUTDOWN_2:\n case PM_SHUTDOWN:\n case PM_WAIT_BACKENDS:\n- case PM_WAIT_READONLY:\n- case PM_WAIT_CLIENTS:\n case PM_WAIT_BACKUP:\n break;\n\n+ case PM_WAIT_READONLY:\n+ case PM_WAIT_CLIENTS:\n case PM_RUN:\n if (start_time == BgWorkerStart_RecoveryFinished)\n return true;\n\n\n",
"msg_date": "Thu, 13 Aug 2020 09:41:42 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I think we also need:\n\n> + else if (Shutdown <= SmartShutdown &&\n> + backend_type == BACKEND_TYPE_AUTOVAC)\n> + result = CAC_OK;\n\nHm, ok.\n\n> Retesting the original complaint, I think we need:\n\n> @@ -5911,11 +5912,11 @@ bgworker_should_start_now(BgWorkerStartTime start_time)\n> + case PM_WAIT_READONLY:\n> + case PM_WAIT_CLIENTS:\n> case PM_RUN:\n\nSo the question here is whether time-based bgworkers should be allowed to\nrestart in this scenario. I'm not quite sure --- depending on what the\nbgworker's purpose is, you could make an argument either way, I think.\nDo we need some way to control that?\n\nIn any case, we'd want to treat PM_WAIT_READONLY like PM_HOT_STANDBY not\nPM_RUN, no? Also, the state before PM_WAIT_READONLY could have been\nPM_RECOVERY or PM_STARTUP, in which case we don't really want to think\nit's like PM_HOT_STANDBY either; only the BgWorkerStart_PostmasterStart\ncase should be accepted. That suggests that we need yet another pmState,\nor else a more thoroughgoing refactoring of how the postmaster's state\nis represented.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Aug 2020 18:21:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > @@ -5911,11 +5912,11 @@ bgworker_should_start_now(BgWorkerStartTime start_time)\n> > + case PM_WAIT_READONLY:\n> > + case PM_WAIT_CLIENTS:\n> > case PM_RUN:\n>\n> So the question here is whether time-based bgworkers should be allowed to\n> restart in this scenario. I'm not quite sure --- depending on what the\n> bgworker's purpose is, you could make an argument either way, I think.\n> Do we need some way to control that?\n\nI'm not sure why any bgworker would actually want to be shut down or\nnot restarted during the twilight zone of a smart shutdown though --\nif users can do arbitrary stuff, why can't supporting workers carry\non? For example, a hypothetical extension that triggers vacuum freeze\nat smarter times, or a wait event sampling extension, an FDW that uses\nan extra worker to maintain a connection to something, etc etc could\nall be things that a user is indirectly relying on to do their normal\nwork, and I struggle to think of an example of something that you\nexplicitly don't want running just because (in some sense) the server\n*plans* to shut down, when the users get around to logging off. But\nmaybe I lack imagination.\n\n> In any case, we'd want to treat PM_WAIT_READONLY like PM_HOT_STANDBY not\n> PM_RUN, no?\n\nYeah, you're right.\n\n> Also, the state before PM_WAIT_READONLY could have been\n> PM_RECOVERY or PM_STARTUP, in which case we don't really want to think\n> it's like PM_HOT_STANDBY either; only the BgWorkerStart_PostmasterStart\n> case should be accepted. That suggests that we need yet another pmState,\n> or else a more thoroughgoing refactoring of how the postmaster's state\n> is represented.\n\nHmm.\n\n\n",
"msg_date": "Thu, 13 Aug 2020 11:27:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Aug 13, 2020 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Also, the state before PM_WAIT_READONLY could have been\n>> PM_RECOVERY or PM_STARTUP, in which case we don't really want to think\n>> it's like PM_HOT_STANDBY either; only the BgWorkerStart_PostmasterStart\n>> case should be accepted. That suggests that we need yet another pmState,\n>> or else a more thoroughgoing refactoring of how the postmaster's state\n>> is represented.\n\n> Hmm.\n\nI experimented with separating the shutdown-in-progress state into a\nseparate variable, letting us actually reduce not increase the number of\npmStates. This way, PM_RUN and other states still apply until we're\nready to pull the shutdown trigger, so that we don't need to complicate\nstate-based decisions about launching auxiliary processes. This patch\nalso unifies the signal-sending for the smart and fast shutdown paths,\nwhich seems like a nice improvement. I kind of like this, though I'm not\nin love with the particular variable name I used here (smartShutState).\n\nIf we go this way, CAC_WAITBACKUP ought to be renamed since the PMState\nit's named after no longer exists. I left that alone pending making\nfinal naming choices, though.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 12 Aug 2020 22:37:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 2:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I experimented with separating the shutdown-in-progress state into a\n> separate variable, letting us actually reduce not increase the number of\n> pmStates. This way, PM_RUN and other states still apply until we're\n> ready to pull the shutdown trigger, so that we don't need to complicate\n> state-based decisions about launching auxiliary processes. This patch\n> also unifies the signal-sending for the smart and fast shutdown paths,\n> which seems like a nice improvement. I kind of like this, though I'm not\n> in love with the particular variable name I used here (smartShutState).\n\nMakes sense. I tested this version on a primary and a replica and\nverified that parallel workers launch, but I saw that autovacuum\nworkers still can't start without something like this:\n\n@@ -2463,7 +2463,8 @@ canAcceptConnections(int backend_type)\n * be returned until we have checked for too many children.\n */\n if (smartShutState != SMART_NORMAL_USAGE &&\n- backend_type != BACKEND_TYPE_BGWORKER)\n+ backend_type != BACKEND_TYPE_BGWORKER &&\n+ backend_type != BACKEND_TYPE_AUTOVAC)\n {\n if (smartShutState == SMART_SUPERUSER_ONLY)\n result = CAC_WAITBACKUP; /* allow\nsuperusers only */\n@@ -2471,7 +2472,8 @@ canAcceptConnections(int backend_type)\n return CAC_SHUTDOWN; /* shutdown is pending */\n }\n if (pmState != PM_RUN &&\n- backend_type != BACKEND_TYPE_BGWORKER)\n+ backend_type != BACKEND_TYPE_BGWORKER &&\n+ backend_type != BACKEND_TYPE_AUTOVAC)\n {\n if (Shutdown > NoShutdown)\n return CAC_SHUTDOWN; /* shutdown is pending */\n\n\n",
"msg_date": "Thu, 13 Aug 2020 16:42:49 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Makes sense. I tested this version on a primary and a replica and\n> verified that parallel workers launch, but I saw that autovacuum\n> workers still can't start without something like this:\n\n> @@ -2463,7 +2463,8 @@ canAcceptConnections(int backend_type)\n> * be returned until we have checked for too many children.\n> */\n> if (smartShutState != SMART_NORMAL_USAGE &&\n> - backend_type != BACKEND_TYPE_BGWORKER)\n> + backend_type != BACKEND_TYPE_BGWORKER &&\n> + backend_type != BACKEND_TYPE_AUTOVAC)\n\nHmmm ... maybe that should be more like\n\n if (smartShutState != SMART_NORMAL_USAGE &&\n backend_type == BACKEND_TYPE_NORMAL)\n\n(the adjacent comment needs adjustment too of course).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Aug 2020 00:58:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "I wrote:\n> Hmmm ... maybe that should be more like\n> if (smartShutState != SMART_NORMAL_USAGE &&\n> backend_type == BACKEND_TYPE_NORMAL)\n\nAfter some more rethinking and testing, here's a v5 that feels\nfairly final to me. I realized that the logic in canAcceptConnections\nwas kind of backwards: it's better to check the main pmState restrictions\nfirst and then the smart-shutdown restrictions afterwards.\n\nI'm assuming we want to back-patch this as far as 9.6, where parallel\nquery began to be a thing.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 13 Aug 2020 12:45:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "On Fri, Aug 14, 2020 at 4:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Hmmm ... maybe that should be more like\n> > if (smartShutState != SMART_NORMAL_USAGE &&\n> > backend_type == BACKEND_TYPE_NORMAL)\n>\n> After some more rethinking and testing, here's a v5 that feels\n> fairly final to me. I realized that the logic in canAcceptConnections\n> was kind of backwards: it's better to check the main pmState restrictions\n> first and then the smart-shutdown restrictions afterwards.\n\nLGTM. I tested this a bit today and it did what I expected for\nparallel queries and vacuum, on primary and standby.\n\n> I'm assuming we want to back-patch this as far as 9.6, where parallel\n> query began to be a thing.\n\nYeah. I mean, it's more radical than what I thought we'd be doing for\nthis, but you could get into real trouble by running in smart shutdown\nmode without the autovac infrastructure alive.\n\n\n",
"msg_date": "Fri, 14 Aug 2020 18:03:47 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Aug 14, 2020 at 4:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> After some more rethinking and testing, here's a v5 that feels\n>> fairly final to me. I realized that the logic in canAcceptConnections\n>> was kind of backwards: it's better to check the main pmState restrictions\n>> first and then the smart-shutdown restrictions afterwards.\n\n> LGTM. I tested this a bit today and it did what I expected for\n> parallel queries and vacuum, on primary and standby.\n\nThanks for reviewing! I'll do the back-patching and push this today.\n\n>> I'm assuming we want to back-patch this as far as 9.6, where parallel\n>> query began to be a thing.\n\n> Yeah. I mean, it's more radical than what I thought we'd be doing for\n> this, but you could get into real trouble by running in smart shutdown\n> mode without the autovac infrastructure alive.\n\nRight. 99.99% of the time, that early shutdown doesn't really cause\nany problems, which is how we've gotten away with it this long. But if\nsomeone did leave session(s) running for a long time after issuing the\nSIGTERM, the results could be bad --- and there's no obvious benefit\nto the early shutdowns anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Aug 2020 10:32:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "\nTom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> On Fri, Aug 14, 2020 at 4:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> After some more rethinking and testing, here's a v5 that feels\n>>> fairly final to me. I realized that the logic in canAcceptConnections\n>>> was kind of backwards: it's better to check the main pmState restrictions\n>>> first and then the smart-shutdown restrictions afterwards.\n>\n>> LGTM. I tested this a bit today and it did what I expected for\n>> parallel queries and vacuum, on primary and standby.\n>\n> Thanks for reviewing! I'll do the back-patching and push this today.\n\nFWIW, I've also looked through the patch and it's fine. Moderate testing\nalso found no issues, check-world works, bgws are started during smart\nshutdown as expected. And surely this is better than the inital\nshorthack of allowing only parallel workers.\n\n\n",
"msg_date": "Fri, 14 Aug 2020 18:31:24 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
},
{
"msg_contents": "Arseny Sher <a.sher@postgrespro.ru> writes:\n> FWIW, I've also looked through the patch and it's fine. Moderate testing\n> also found no issues, check-world works, bgws are started during smart\n> shutdown as expected. And surely this is better than the inital\n> shorthack of allowing only parallel workers.\n\nThanks, appreciate the extra look. It's pushed now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Aug 2020 13:29:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query hangs after a smart shutdown is issued"
}
] |
[
{
"msg_contents": "Hi\n\nI would like to start another thread to follow up on [1], mostly to bump up the\ntopic. Just to remind, it's about how pg_stat_statements jumbling ArrayExpr in\nqueries like:\n\n SELECT something FROM table WHERE col IN (1, 2, 3, ...)\n\nThe current implementation produces different jumble hash for every different\nnumber of arguments for essentially the same query. Unfortunately a lot of ORMs\nlike to generate these types of queries, which in turn leads to\npg_stat_statements pollution. Ideally we want to prevent this and have only one\nrecord for such a query.\n\nAs the result of [1] I've identified two highlighted approaches to improve this\nsituation:\n\n* Reduce the generated ArrayExpr to an array Const immediately, in cases where\n all the inputs are Consts.\n\n* Make repeating Const to contribute nothing to the resulting hash.\n\nI've tried to prototype both approaches to find out pros/cons and be more\nspecific. Attached patches could not be considered a completed piece of work,\nbut they seem to work, mostly pass the tests and demonstrate the point. I would\nlike to get some high level input about them and ideally make it clear what is\nthe preferred solution to continue with.\n\n# Reducing ArrayExpr to an array Const\n\nIIUC this requires producing a Const with ArrayType constvalue in\ntransformAExprIn for ScalarArrayOpExpr. This could be a general improvement,\nsince apparently it's being done later anyway. But it deals only with Const,\nwhich leaves more on the table, e.g. Params and other similar types of\nduplication we observe when repeating constants are wrapped into VALUES.\n\nAnother point here is that it's quite possible this approach will still require\ncorresponding changes in pg_stat_statements, since just preventing duplicates\nto show also loses the information. Ideally we also need to have some\nunderstanding how many elements are actually there and display it, e.g. 
in\ncases when there is just one outlier query that contains a huge IN list.\n\n# Contribute nothing to the hash\n\nI guess there could be multiple ways of doing this, but the first idea I had in\nmind is to skip jumbling when necessary. At the same time it can be implemented\nmore centralized for different types of queries (although in the attached patch\nthere are only Const & Values). In the simplest case we just identify sequence\nof constants of the same type, which just ignores any other cases when stuff is\nmixed. But I believe it's something that could be considered a rare corner case\nand it's better to start with the simplest solution.\n\nHaving said that I believe the second approach of contributing nothing to the\nhash sounds more appealing, but would love to hear other opinions.\n\n[1]: https://www.postgresql.org/message-id/flat/CAF42k%3DJCfHMJtkAVXCzBn2XBxDC83xb4VhV7jU7enPnZ0CfEQQ%40mail.gmail.com",
"msg_date": "Wed, 12 Aug 2020 18:19:02 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Wed, Aug 12, 2020 at 06:19:02PM +0200, Dmitry Dolgov wrote:\n>\n> I would like to start another thread to follow up on [1], mostly to bump up the\n> topic. Just to remind, it's about how pg_stat_statements jumbling ArrayExpr in\n> queries like:\n>\n> SELECT something FROM table WHERE col IN (1, 2, 3, ...)\n>\n> The current implementation produces different jumble hash for every different\n> number of arguments for essentially the same query. Unfortunately a lot of ORMs\n> like to generate these types of queries, which in turn leads to\n> pg_stat_statements pollution. Ideally we want to prevent this and have only one\n> record for such a query.\n>\n> As the result of [1] I've identified two highlighted approaches to improve this\n> situation:\n>\n> * Reduce the generated ArrayExpr to an array Const immediately, in cases where\n> all the inputs are Consts.\n>\n> * Make repeating Const to contribute nothing to the resulting hash.\n>\n> I've tried to prototype both approaches to find out pros/cons and be more\n> specific. Attached patches could not be considered a completed piece of work,\n> but they seem to work, mostly pass the tests and demonstrate the point. I would\n> like to get some high level input about them and ideally make it clear what is\n> the preferred solution to continue with.\n\nI've implemented the second approach mentioned above, this version was\ntested on our test clusters for some time without visible issues. Will\ncreate a CF item and would appreciate any feedback.",
"msg_date": "Wed, 18 Nov 2020 17:04:32 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHi, I did some test and it works well",
"msg_date": "Wed, 09 Dec 2020 03:37:40 +0000",
"msg_from": "Chengxi Sun <sunchengxi@highgo.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Wed, Dec 09, 2020 at 03:37:40AM +0000, Chengxi Sun wrote:\n>\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> Hi, I did some test and it works well\n\nThanks for testing!\n\n\n",
"msg_date": "Wed, 9 Dec 2020 19:49:44 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Wed, Nov 18, 2020 at 05:04:32PM +0100, Dmitry Dolgov wrote:\n> > On Wed, Aug 12, 2020 at 06:19:02PM +0200, Dmitry Dolgov wrote:\n> >\n> > I would like to start another thread to follow up on [1], mostly to bump up the\n> > topic. Just to remind, it's about how pg_stat_statements jumbling ArrayExpr in\n> > queries like:\n> >\n> > SELECT something FROM table WHERE col IN (1, 2, 3, ...)\n> >\n> > The current implementation produces different jumble hash for every different\n> > number of arguments for essentially the same query. Unfortunately a lot of ORMs\n> > like to generate these types of queries, which in turn leads to\n> > pg_stat_statements pollution. Ideally we want to prevent this and have only one\n> > record for such a query.\n> >\n> > As the result of [1] I've identified two highlighted approaches to improve this\n> > situation:\n> >\n> > * Reduce the generated ArrayExpr to an array Const immediately, in cases where\n> > all the inputs are Consts.\n> >\n> > * Make repeating Const to contribute nothing to the resulting hash.\n> >\n> > I've tried to prototype both approaches to find out pros/cons and be more\n> > specific. Attached patches could not be considered a completed piece of work,\n> > but they seem to work, mostly pass the tests and demonstrate the point. I would\n> > like to get some high level input about them and ideally make it clear what is\n> > the preferred solution to continue with.\n>\n> I've implemented the second approach mentioned above, this version was\n> tested on our test clusters for some time without visible issues. Will\n> create a CF item and would appreciate any feedback.\n\nAfter more testing I found couple of things that could be improved,\nnamely in the presence of non-reducible constants one part of the query\nwas not copied into the normalized version, and this approach also could\nbe extended for Params. These are incorporated in the attached patch.",
"msg_date": "Sat, 26 Dec 2020 11:46:35 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Hi,\nA few comments.\n\n+ \"After this number of duplicating constants\nstart to merge them.\",\n\nduplicating -> duplicate\n\n+ foreach(lc, (List *) expr)\n+ {\n+ Node * subExpr = (Node *) lfirst(lc);\n+\n+ if (!IsA(subExpr, Const))\n+ {\n+ allConst = false;\n+ break;\n+ }\n+ }\n\nIt seems the above foreach loop (within foreach(temp, (List *) node)) can\nbe preceded with a check that allConst is true. Otherwise the loop can be\nskipped.\n\n+ if (currentExprIdx == pgss_merge_threshold - 1)\n+ {\n+ JumbleExpr(jstate, expr);\n+\n+ /*\n+ * A const expr is already found, so JumbleExpr must\n+ * record it. Mark it as merged, it will be the\nfirst\n+ * merged but still present in the statement query.\n+ */\n+ Assert(jstate->clocations_count > 0);\n+ jstate->clocations[jstate->clocations_count -\n1].merged = true;\n+ currentExprIdx++;\n+ }\n\nThe above snippet occurs a few times. Maybe extract into a helper method.\n\nCheers\n\nOn Sat, Dec 26, 2020 at 2:45 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Wed, Nov 18, 2020 at 05:04:32PM +0100, Dmitry Dolgov wrote:\n> > > On Wed, Aug 12, 2020 at 06:19:02PM +0200, Dmitry Dolgov wrote:\n> > >\n> > > I would like to start another thread to follow up on [1], mostly to\n> bump up the\n> > > topic. Just to remind, it's about how pg_stat_statements jumbling\n> ArrayExpr in\n> > > queries like:\n> > >\n> > > SELECT something FROM table WHERE col IN (1, 2, 3, ...)\n> > >\n> > > The current implementation produces different jumble hash for every\n> different\n> > > number of arguments for essentially the same query. Unfortunately a\n> lot of ORMs\n> > > like to generate these types of queries, which in turn leads to\n> > > pg_stat_statements pollution. 
Ideally we want to prevent this and have\n> only one\n> > record for such a query.\n> > >\n> > > As the result of [1] I've identified two highlighted approaches to\n> improve this\n> > > situation:\n> > >\n> > > * Reduce the generated ArrayExpr to an array Const immediately, in\n> cases where\n> > > all the inputs are Consts.\n> > >\n> > > * Make repeating Const to contribute nothing to the resulting hash.\n> > >\n> > > I've tried to prototype both approaches to find out pros/cons and be\n> more\n> > > specific. Attached patches could not be considered a completed piece\n> of work,\n> > > but they seem to work, mostly pass the tests and demonstrate the\n> point. I would\n> > > like to get some high level input about them and ideally make it clear\n> what is\n> > > the preferred solution to continue with.\n> >\n> > I've implemented the second approach mentioned above, this version was\n> > tested on our test clusters for some time without visible issues. Will\n> > create a CF item and would appreciate any feedback.\n>\n> After more testing I found couple of things that could be improved,\n> namely in the presence of non-reducible constants one part of the query\n> was not copied into the normalized version, and this approach also could\n> be extended for Params. These are incorporated in the attached patch.\n>",
"msg_date": "Sat, 26 Dec 2020 08:53:28 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Sat, Dec 26, 2020 at 08:53:28AM -0800, Zhihong Yu wrote:\n> Hi,\n> A few comments.\n>\n> + foreach(lc, (List *) expr)\n> + {\n> + Node * subExpr = (Node *) lfirst(lc);\n> +\n> + if (!IsA(subExpr, Const))\n> + {\n> + allConst = false;\n> + break;\n> + }\n> + }\n>\n> It seems the above foreach loop (within foreach(temp, (List *) node)) can\n> be preceded with a check that allConst is true. Otherwise the loop can be\n> skipped.\n\nThanks for noticing. Now that I look at it closer I think it's the other\nway around, the loop above checking constants for the first expression\nis not really necessary.\n\n> + if (currentExprIdx == pgss_merge_threshold - 1)\n> + {\n> + JumbleExpr(jstate, expr);\n> +\n> + /*\n> + * A const expr is already found, so JumbleExpr must\n> + * record it. Mark it as merged, it will be the\n> first\n> + * merged but still present in the statement query.\n> + */\n> + Assert(jstate->clocations_count > 0);\n> + jstate->clocations[jstate->clocations_count -\n> 1].merged = true;\n> + currentExprIdx++;\n> + }\n>\n> The above snippet occurs a few times. Maybe extract into a helper method.\n\nOriginally I was hesitant to extract it was because it's quite small\npart of the code. But now I've realized that the part relevant to lists\nis not really correct, which makes those bits even more different, so I\nthink it makes sense to leave it like that. What do you think?",
"msg_date": "Tue, 5 Jan 2021 13:52:30 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Hi, Dmitry:\n\n+ int lastExprLenght = 0;\n\nDid you mean to name the variable lastExprLenghth ?\n\nw.r.t. extracting to helper method, the second and third if (currentExprIdx\n== pgss_merge_threshold - 1) blocks are similar.\nIt is up to you whether to create the helper method.\nI am fine with the current formation.\n\nCheers\n\nOn Tue, Jan 5, 2021 at 4:51 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Sat, Dec 26, 2020 at 08:53:28AM -0800, Zhihong Yu wrote:\n> > Hi,\n> > A few comments.\n> >\n> > + foreach(lc, (List *) expr)\n> > + {\n> > + Node * subExpr = (Node *) lfirst(lc);\n> > +\n> > + if (!IsA(subExpr, Const))\n> > + {\n> > + allConst = false;\n> > + break;\n> > + }\n> > + }\n> >\n> > It seems the above foreach loop (within foreach(temp, (List *) node)) can\n> > be preceded with a check that allConst is true. Otherwise the loop can be\n> > skipped.\n>\n> Thanks for noticing. Now that I look at it closer I think it's the other\n> way around, the loop above checking constants for the first expression\n> is not really necessary.\n>\n> > + if (currentExprIdx == pgss_merge_threshold - 1)\n> > + {\n> > + JumbleExpr(jstate, expr);\n> > +\n> > + /*\n> > + * A const expr is already found, so JumbleExpr\n> must\n> > + * record it. Mark it as merged, it will be the\n> > first\n> > + * merged but still present in the statement\n> query.\n> > + */\n> > + Assert(jstate->clocations_count > 0);\n> > + jstate->clocations[jstate->clocations_count -\n> > 1].merged = true;\n> > + currentExprIdx++;\n> > + }\n> >\n> > The above snippet occurs a few times. Maybe extract into a helper method.\n>\n> Originally I was hesitant to extract it was because it's quite small\n> part of the code. But now I've realized that the part relevant to lists\n> is not really correct, which makes those bits even more different, so I\n> think it makes sense to leave it like that. 
What do you think?\n>",
"msg_date": "Tue, 5 Jan 2021 07:51:42 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "On 1/5/21 10:51 AM, Zhihong Yu wrote:\n> \n> + int lastExprLenght = 0;\n> \n> Did you mean to name the variable lastExprLenghth ?\n> \n> w.r.t. extracting to helper method, the second and third \n> if (currentExprIdx == pgss_merge_threshold - 1) blocks are similar.\n> It is up to you whether to create the helper method.\n> I am fine with the current formation.\n\nDmitry, thoughts on this review?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 18 Mar 2021 09:38:09 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
    "msg_contents": "> On Thu, Mar 18, 2021 at 09:38:09AM -0400, David Steele wrote:\n> On 1/5/21 10:51 AM, Zhihong Yu wrote:\n> >\n> > +   int         lastExprLenght = 0;\n> >\n> > Did you mean to name the variable lastExprLenghth ?\n> >\n> > w.r.t. extracting to helper method, the second and third\n> > if (currentExprIdx == pgss_merge_threshold - 1) blocks are similar.\n> > It is up to you whether to create the helper method.\n> > I am fine with the current formation.\n>\n> Dmitry, thoughts on this review?\n\nOh, right. lastExprLenghth is obviously a typo, and as we agreed that\nthe helper is not strictly necessary I wanted to wait a bit hoping for\nmore feedback and eventually to post an accumulated patch. Doesn't make\nsense to post another version only to fix one typo :)\n\n\n",
"msg_date": "Thu, 18 Mar 2021 16:50:02 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Thu, Mar 18, 2021 at 04:50:02PM +0100, Dmitry Dolgov wrote:\n> > On Thu, Mar 18, 2021 at 09:38:09AM -0400, David Steele wrote:\n> > On 1/5/21 10:51 AM, Zhihong Yu wrote:\n> > >\n> > > + int lastExprLenght = 0;\n> > >\n> > > Did you mean to name the variable lastExprLenghth ?\n> > >\n> > > w.r.t. extracting to helper method, the second and third\n> > > if (currentExprIdx == pgss_merge_threshold - 1) blocks are similar.\n> > > It is up to you whether to create the helper method.\n> > > I am fine with the current formation.\n> >\n> > Dmitry, thoughts on this review?\n>\n> Oh, right. lastExprLenghth is obviously a typo, and as we agreed that\n> the helper is not strictly necessary I wanted to wait a bit hoping for\n> more feedback and eventually to post an accumulated patch. Doesn't make\n> sense to post another version only to fix one typo :)\n\nHi,\n\nI've prepared a new rebased version to deal with the new way of\ncomputing query id, but as always there is one tricky part. From what I\nunderstand, now an external module can provide custom implementation for\nquery id computation algorithm. It seems natural to think this machinery\ncould be used instead of patch in the thread, i.e. one could create a\ncustom logic that will enable constants collapsing as needed, so that\nsame queries with different number of constants in an array will be\nhashed into the same record.\n\nBut there is a limitation in how such queries will be normalized\nafterwards — to reduce level of surprise it's necessary to display the\nfact that a certain query in fact had more constants that are showed in\npgss record. Ideally LocationLen needs to carry some bits of information\non what exactly could be skipped, and generate_normalized_query needs to\nunderstand that, both are not reachable for an external module with\ncustom query id logic (without replicating significant part of the\nexisting code). Hence, a new version of the patch.",
"msg_date": "Tue, 15 Jun 2021 17:18:50 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Tue, Jun 15, 2021 at 05:18:50PM +0200, Dmitry Dolgov wrote:\n> > On Thu, Mar 18, 2021 at 04:50:02PM +0100, Dmitry Dolgov wrote:\n> > > On Thu, Mar 18, 2021 at 09:38:09AM -0400, David Steele wrote:\n> > > On 1/5/21 10:51 AM, Zhihong Yu wrote:\n> > > >\n> > > > + int lastExprLenght = 0;\n> > > >\n> > > > Did you mean to name the variable lastExprLenghth ?\n> > > >\n> > > > w.r.t. extracting to helper method, the second and third\n> > > > if (currentExprIdx == pgss_merge_threshold - 1) blocks are similar.\n> > > > It is up to you whether to create the helper method.\n> > > > I am fine with the current formation.\n> > >\n> > > Dmitry, thoughts on this review?\n> >\n> > Oh, right. lastExprLenghth is obviously a typo, and as we agreed that\n> > the helper is not strictly necessary I wanted to wait a bit hoping for\n> > more feedback and eventually to post an accumulated patch. Doesn't make\n> > sense to post another version only to fix one typo :)\n>\n> Hi,\n>\n> I've prepared a new rebased version to deal with the new way of\n> computing query id, but as always there is one tricky part. From what I\n> understand, now an external module can provide custom implementation for\n> query id computation algorithm. It seems natural to think this machinery\n> could be used instead of patch in the thread, i.e. one could create a\n> custom logic that will enable constants collapsing as needed, so that\n> same queries with different number of constants in an array will be\n> hashed into the same record.\n>\n> But there is a limitation in how such queries will be normalized\n> afterwards — to reduce level of surprise it's necessary to display the\n> fact that a certain query in fact had more constants that are showed in\n> pgss record. 
Ideally LocationLen needs to carry some bits of information\n> on what exactly could be skipped, and generate_normalized_query needs to\n> understand that, both are not reachable for an external module with\n> custom query id logic (without replicating significant part of the\n> existing code). Hence, a new version of the patch.\n\nForgot to mention a couple of people who already reviewed the patch.",
"msg_date": "Wed, 16 Jun 2021 16:02:12 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": ">On Wed, Jun 16, 2021 at 04:02:12PM +0200, Dmitry Dolgov wrote:\n>\n> > I've prepared a new rebased version to deal with the new way of\n> > computing query id, but as always there is one tricky part. From what I\n> > understand, now an external module can provide custom implementation for\n> > query id computation algorithm. It seems natural to think this machinery\n> > could be used instead of patch in the thread, i.e. one could create a\n> > custom logic that will enable constants collapsing as needed, so that\n> > same queries with different number of constants in an array will be\n> > hashed into the same record.\n> >\n> > But there is a limitation in how such queries will be normalized\n> > afterwards — to reduce level of surprise it's necessary to display the\n> > fact that a certain query in fact had more constants that are showed in\n> > pgss record. Ideally LocationLen needs to carry some bits of information\n> > on what exactly could be skipped, and generate_normalized_query needs to\n> > understand that, both are not reachable for an external module with\n> > custom query id logic (without replicating significant part of the\n> > existing code). Hence, a new version of the patch.\n>\n> Forgot to mention a couple of people who already reviewed the patch.\n\nAnd now for something completely different, here is a new patch version.\nIt contains a small fix for one problem we've found during testing (one\npath code was incorrectly assuming find_const_walker results).",
"msg_date": "Thu, 30 Sep 2021 15:49:30 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 6:49 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> >On Wed, Jun 16, 2021 at 04:02:12PM +0200, Dmitry Dolgov wrote:\n> >\n> > > I've prepared a new rebased version to deal with the new way of\n> > > computing query id, but as always there is one tricky part. From what I\n> > > understand, now an external module can provide custom implementation\n> for\n> > > query id computation algorithm. It seems natural to think this\n> machinery\n> > > could be used instead of patch in the thread, i.e. one could create a\n> > > custom logic that will enable constants collapsing as needed, so that\n> > > same queries with different number of constants in an array will be\n> > > hashed into the same record.\n> > >\n> > > But there is a limitation in how such queries will be normalized\n> > > afterwards — to reduce level of surprise it's necessary to display the\n> > > fact that a certain query in fact had more constants that are showed in\n> > > pgss record. Ideally LocationLen needs to carry some bits of\n> information\n> > > on what exactly could be skipped, and generate_normalized_query needs\n> to\n> > > understand that, both are not reachable for an external module with\n> > > custom query id logic (without replicating significant part of the\n> > > existing code). Hence, a new version of the patch.\n> >\n> > Forgot to mention a couple of people who already reviewed the patch.\n>\n> And now for something completely different, here is a new patch version.\n> It contains a small fix for one problem we've found during testing (one\n> path code was incorrectly assuming find_const_walker results).\n>\nHi,\n\nbq. 
and at position further that specified threshold.\n\n that specified threshold -> than specified threshold\n\nCheers",
"msg_date": "Thu, 30 Sep 2021 08:03:16 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
    "msg_contents": "> On Thu, Sep 30, 2021 at 08:03:16AM -0700, Zhihong Yu wrote:\n> On Thu, Sep 30, 2021 at 6:49 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > >On Wed, Jun 16, 2021 at 04:02:12PM +0200, Dmitry Dolgov wrote:\n> > >\n> > > > I've prepared a new rebased version to deal with the new way of\n> > > > computing query id, but as always there is one tricky part. From what I\n> > > > understand, now an external module can provide custom implementation\n> > for\n> > > > query id computation algorithm. It seems natural to think this\n> > machinery\n> > > > could be used instead of patch in the thread, i.e. one could create a\n> > > > custom logic that will enable constants collapsing as needed, so that\n> > > > same queries with different number of constants in an array will be\n> > > > hashed into the same record.\n> > > >\n> > > > But there is a limitation in how such queries will be normalized\n> > > > afterwards — to reduce level of surprise it's necessary to display the\n> > > > fact that a certain query in fact had more constants that are showed in\n> > > > pgss record. Ideally LocationLen needs to carry some bits of\n> > information\n> > > > on what exactly could be skipped, and generate_normalized_query needs\n> > to\n> > > > understand that, both are not reachable for an external module with\n> > > > custom query id logic (without replicating significant part of the\n> > > > existing code). Hence, a new version of the patch.\n> > >\n> > > Forgot to mention a couple of people who already reviewed the patch.\n> >\n> > And now for something completely different, here is a new patch version.\n> > It contains a small fix for one problem we've found during testing (one\n> > path code was incorrectly assuming find_const_walker results).\n> >\n> Hi,\n>\n> bq. and at position further that specified threshold.\n>\n> that specified threshold -> than specified threshold\n\nYou mean in the patch commit message, nowhere else, right? Yep, my spell\nchecker didn't catch that, thanks for noticing!\n\n\n",
"msg_date": "Thu, 30 Sep 2021 17:09:57 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Dmitry Dolgov <9erthalion6@gmail.com> writes:\n> And now for something completely different, here is a new patch version.\n> It contains a small fix for one problem we've found during testing (one\n> path code was incorrectly assuming find_const_walker results).\n\nI've been saying from day one that pushing the query-hashing code into the\ncore was a bad idea, and I think this patch perfectly illustrates why.\nWe can debate whether the rules proposed here are good for\npg_stat_statements or not, but it seems inevitable that they will be a\ndisaster for some other consumers of the query hash. In particular,\ndropping external parameters from the hash seems certain to break\nsomething for somebody --- do you really think that a query with two int\nparameters is equivalent to one with five float parameters for all\nquery-identifying purposes?\n\nI can see the merits of allowing different numbers of IN elements\nto be considered equivalent for pg_stat_statements, but this patch\nseems to go far beyond that basic idea, and I fear the side-effects\nwill be very bad.\n\nAlso, calling eval_const_expressions in the query jumbler is flat\nout unacceptable. There is way too much code that could be reached\nthat way (more or less the entire executor, to start with). I\ndon't have a lot of faith that it'd never modify the input tree,\neither.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Jan 2022 18:02:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "On 1/5/22 4:02 AM, Tom Lane wrote:\n> Dmitry Dolgov <9erthalion6@gmail.com> writes:\n>> And now for something completely different, here is a new patch version.\n>> It contains a small fix for one problem we've found during testing (one\n>> path code was incorrectly assuming find_const_walker results).\n> \n> I've been saying from day one that pushing the query-hashing code into the\n> core was a bad idea, and I think this patch perfectly illustrates why.\n> We can debate whether the rules proposed here are good for\n> pg_stat_statements or not, but it seems inevitable that they will be a\n> disaster for some other consumers of the query hash. In particular,\n> dropping external parameters from the hash seems certain to break\n> something for somebody\n+1.\n\nIn a couple of extensions I use different logic of query jumbling - hash \nvalue is more stable in some cases than in default implementation. For \nexample, it should be stable to permutations in 'FROM' section of a query.\nAnd If anyone subtly changes jumbling logic when the extension is \nactive, the instance could get huge performance issues.\n\nLet me suggest, that the core should allow an extension at least to \ndetect such interference between extensions. Maybe hook could be \nreplaced with callback to allow extension see an queryid with underlying \ngeneration logic what it expects.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Wed, 5 Jan 2022 09:37:33 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru> writes:\n> On 1/5/22 4:02 AM, Tom Lane wrote:\n>> I've been saying from day one that pushing the query-hashing code into the\n>> core was a bad idea, and I think this patch perfectly illustrates why.\n\n> +1.\n\n> Let me suggest, that the core should allow an extension at least to \n> detect such interference between extensions. Maybe hook could be \n> replaced with callback to allow extension see an queryid with underlying \n> generation logic what it expects.\n\nI feel like we need to get away from the idea that there is just\none query hash, and somehow let different extensions attach\ndifferently-calculated hashes to a query. I don't have any immediate\nideas about how to do that in a reasonably inexpensive way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Jan 2022 00:13:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Tue, Jan 04, 2022 at 06:02:43PM -0500, Tom Lane wrote:\n> We can debate whether the rules proposed here are good for\n> pg_stat_statements or not, but it seems inevitable that they will be a\n> disaster for some other consumers of the query hash.\n\nHm, which consumers do you mean here, potential extension? Isn't the\nability to use an external module to compute queryid make this situation\npossible anyway?\n\n> do you really think that a query with two int\n> parameters is equivalent to one with five float parameters for all\n> query-identifying purposes?\n\nNope, and it will be hard to figure this out no matter which approach\nwe're talking about, because it mostly depends on the context and type\nof queries I guess. Instead, such functionality should allow some\nreasonable configuration. To be clear, the use case I have in mind here\nis not four or five, but rather a couple of hundreds constants where\nchances that the whole construction was generated automatically by ORM\nis higher than normal.\n\n> I can see the merits of allowing different numbers of IN elements\n> to be considered equivalent for pg_stat_statements, but this patch\n> seems to go far beyond that basic idea, and I fear the side-effects\n> will be very bad.\n\nNot sure why it goes far beyond, but then there were two approaches\nunder consideration, as I've stated in the first message. I already\ndon't remember all the details, but another one was evolving around\ndoing similar things in a more limited fashion in transformAExprIn. The\nproblem would be then to carry the information, necessary to represent\nthe act of \"merging\" some number of queryids together. Any thoughts\nhere?\n\nThe idea of keeping the original queryid untouched and add another type\nof id instead sounds interesting, but it will add too much overhead for\na quite small use case I guess.\n\n\n",
"msg_date": "Wed, 5 Jan 2022 22:11:11 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
    "msg_contents": "> On Wed, Jan 05, 2022 at 10:11:11PM +0100, Dmitry Dolgov wrote:\n> > On Tue, Jan 04, 2022 at 06:02:43PM -0500, Tom Lane wrote:\n> > We can debate whether the rules proposed here are good for\n> > pg_stat_statements or not, but it seems inevitable that they will be a\n> > disaster for some other consumers of the query hash.\n>\n> Hm, which consumers do you mean here, potential extension? Isn't the\n> ability to use an external module to compute queryid make this situation\n> possible anyway?\n>\n> > do you really think that a query with two int\n> > parameters is equivalent to one with five float parameters for all\n> > query-identifying purposes?\n>\n> Nope, and it will be hard to figure this out no matter which approach\n> we're talking about, because it mostly depends on the context and type\n> of queries I guess. Instead, such functionality should allow some\n> reasonable configuration. To be clear, the use case I have in mind here\n> is not four or five, but rather a couple of hundreds constants where\n> chances that the whole construction was generated automatically by ORM\n> is higher than normal.\n>\n> > I can see the merits of allowing different numbers of IN elements\n> > to be considered equivalent for pg_stat_statements, but this patch\n> > seems to go far beyond that basic idea, and I fear the side-effects\n> > will be very bad.\n>\n> Not sure why it goes far beyond, but then there were two approaches\n> under consideration, as I've stated in the first message. I already\n> don't remember all the details, but another one was evolving around\n> doing similar things in a more limited fashion in transformAExprIn. The\n> problem would be then to carry the information, necessary to represent\n> the act of \"merging\" some number of queryids together. Any thoughts\n> here?\n>\n> The idea of keeping the original queryid untouched and add another type\n> of id instead sounds interesting, but it will add too much overhead for\n> a quite small use case I guess.\n\n```\nThu, 10 Mar 2022\nNew status: Waiting on Author\n```\n\nThis seems incorrect, as the only feedback I've got was \"this is a bad\nidea\", and no reaction on follow-up questions.\n\n\n",
"msg_date": "Thu, 10 Mar 2022 17:38:37 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Dmitry Dolgov <9erthalion6@gmail.com> writes:\n> New status: Waiting on Author\n\n> This seems incorrect, as the only feedback I've got was \"this is a bad\n> idea\", and no reaction on follow-up questions.\n\nI changed the status because it seems to me there is no chance of\nthis being committed as-is.\n\n1. I think an absolute prerequisite before we could even consider\nchanging the query jumbler rules this much is to do the work that was\nput off when the jumbler was moved into core: that is, provide some\nhonest support for multiple query-ID generation methods being used at\nthe same time. Even if you successfully make a case for\npg_stat_statements to act this way, other consumers of query IDs\naren't going to be happy with it.\n\n2. You haven't made a case for it. The original complaint was\nabout different lengths of IN lists not being treated as equivalent,\nbut this patch has decided to do I'm-not-even-sure-quite-what\nabout treating different Params as equivalent. Plus you're trying\nto invoke eval_const_expressions in the jumbler; that is absolutely\nNot OK, for both safety and semantic reasons.\n\nIf you backed off to just treating ArrayExprs containing different\nnumbers of Consts as equivalent, maybe that'd be something we could\nadopt without fixing point 1. I don't think anything that fuzzes the\ntreatment of Params can get away with that, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 10 Mar 2022 12:11:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
    "msg_contents": "On Thu, Mar 10, 2022 at 12:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > This seems incorrect, as the only feedback I've got was \"this is a bad\n> > idea\", and no reaction on follow-up questions.\n>\n> I changed the status because it seems to me there is no chance of\n> this being committed as-is.\n>\n> 1. I think an absolute prerequisite before we could even consider\n> changing the query jumbler rules this much is to do the work that was\n> put off when the jumbler was moved into core: that is, provide some\n> honest support for multiple query-ID generation methods being used at\n> the same time. Even if you successfully make a case for\n> pg_stat_statements to act this way, other consumers of query IDs\n> aren't going to be happy with it.\n\nFWIW, I don't find this convincing at all. Query jumbling is already\nsomewhat expensive, and it seems unlikely that the same person is\ngoing to want to jumble queries in one way for pg_stat_statements and\nanother way for pg_stat_broccoli or whatever their other extension is.\nPutting a lot of engineering work into something with such a marginal\nuse case seems not worthwhile to me - and also likely futile, because\nI don't see how it could realistically be made nearly as cheap as a\nsingle jumble.\n\n> 2. You haven't made a case for it. The original complaint was\n> about different lengths of IN lists not being treated as equivalent,\n> but this patch has decided to do I'm-not-even-sure-quite-what\n> about treating different Params as equivalent. Plus you're trying\n> to invoke eval_const_expressions in the jumbler; that is absolutely\n> Not OK, for both safety and semantic reasons.\n\nI think there are two separate points here, one about patch quality\nand the other about whether the basic idea is good. I think the basic\nidea is good. I do not contend that collapsing IN-lists of arbitrary\nlength is what everyone wants in all cases, but it seems entirely\nreasonable to me to think that it is what some people want. So I would\nsay just make it a parameter and let people configure whichever\nbehavior they want. My bet is 95% of users would prefer to have it on,\nbut even if that's wildly wrong, having it as an optional behavior\nhurts nobody. Let it be off by default and let those who want it flip\nthe toggle. On the code quality issue, I haven't read the patch but\nyour concerns sound well-founded to me from reading what you wrote.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 10 Mar 2022 12:32:08 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
    "msg_contents": "> On Thu, Mar 10, 2022 at 12:32:08PM -0500, Robert Haas wrote:\n> On Thu, Mar 10, 2022 at 12:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > 2. You haven't made a case for it. The original complaint was\n> > about different lengths of IN lists not being treated as equivalent,\n> > but this patch has decided to do I'm-not-even-sure-quite-what\n> > about treating different Params as equivalent. Plus you're trying\n> > to invoke eval_const_expressions in the jumbler; that is absolutely\n> > Not OK, for both safety and semantic reasons.\n>\n> I think there are two separate points here, one about patch quality\n> and the other about whether the basic idea is good. I think the basic\n> idea is good. I do not contend that collapsing IN-lists of arbitrary\n> length is what everyone wants in all cases, but it seems entirely\n> reasonable to me to think that it is what some people want. So I would\n> say just make it a parameter and let people configure whichever\n> behavior they want. My bet is 95% of users would prefer to have it on,\n> but even if that's wildly wrong, having it as an optional behavior\n> hurts nobody. Let it be off by default and let those who want it flip\n> the toggle. On the code quality issue, I haven't read the patch but\n> your concerns sound well-founded to me from reading what you wrote.\n\nI have the same understanding, there is a toggle in the patch exactly\nfor this purpose.\n\nTo give a bit more context, the whole development was ORM-driven rather\nthan pulled out of thin air -- people were complaining about huge\ngenerated queries that could be barely displayed in monitoring, I was\ntrying to address it via collapsing the list where it was happening. In\nother words \"I'm-not-even-sure-quite-what\" part may be indeed too\nextensive, but was triggered by real world issues.\n\nOf course, I could get the implementation not quite right, e.g. I wasn't\naware about dangers of using eval_const_expressions. But that's what the\nCF item and the corresponding discussion is for, I guess. Let me see\nwhat I could do to improve it.\n\n\n",
"msg_date": "Thu, 10 Mar 2022 20:06:51 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Thu, Mar 10, 2022 at 12:11:59PM -0500, Tom Lane wrote:\n> Dmitry Dolgov <9erthalion6@gmail.com> writes:\n> > New status: Waiting on Author\n>\n> > This seems incorrect, as the only feedback I've got was \"this is a bad\n> > idea\", and no reaction on follow-up questions.\n>\n> I changed the status because it seems to me there is no chance of\n> this being committed as-is.\n>\n> 1. I think an absolute prerequisite before we could even consider\n> changing the query jumbler rules this much is to do the work that was\n> put off when the jumbler was moved into core: that is, provide some\n> honest support for multiple query-ID generation methods being used at\n> the same time. Even if you successfully make a case for\n> pg_stat_statements to act this way, other consumers of query IDs\n> aren't going to be happy with it.\n>\n> 2. You haven't made a case for it. The original complaint was\n> about different lengths of IN lists not being treated as equivalent,\n> but this patch has decided to do I'm-not-even-sure-quite-what\n> about treating different Params as equivalent. Plus you're trying\n> to invoke eval_const_expressions in the jumbler; that is absolutely\n> Not OK, for both safety and semantic reasons.\n>\n> If you backed off to just treating ArrayExprs containing different\n> numbers of Consts as equivalent, maybe that'd be something we could\n> adopt without fixing point 1. I don't think anything that fuzzes the\n> treatment of Params can get away with that, though.\n\nHere is the limited version of list collapsing functionality, which\ndoesn't utilize eval_const_expressions and ignores most of the stuff\nexcept ArrayExprs. Any thoughts/more suggestions?",
"msg_date": "Sat, 12 Mar 2022 15:10:30 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "On Sat, Mar 12, 2022 at 9:11 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> Here is the limited version of list collapsing functionality, which\n> doesn't utilize eval_const_expressions and ignores most of the stuff\n> except ArrayExprs. Any thoughts/more suggestions?\n\nThe proposed commit message says this commit intends to \"Make Consts\ncontribute nothing to the jumble hash if they're part of a series and\nat position further that specified threshold.\" I'm not sure whether\nthat's what the patch actually implements because I can't immediately\nunderstand the new logic you've added, but I think if we did what that\nsentence said then, supposing the threshold is set to 1, it would\nresult in producing the same hash for \"x in (1,2)\" that we do for \"x\nin (1,3)\" but a different hash for \"x in (2,3)\" which does not sound\nlike what we want. What I would have thought we'd do is: if the list\nis all constants and long enough to satisfy the threshold then nothing\nin the list gets jumbled.\n\nI'm a little surprised that there's not more context-awareness in this\ncode. It seems that it applies to every ArrayExpr found in the query,\nwhich I think would extend to cases beyond something = IN(whatever).\nIn particular, any use of ARRAY[] in the query would be impacted. Now,\nthe comments seem to imply that's pretty intentional, but from the\nuser's point of view, WHERE x in (1,3) and x = any(array[1,3]) are two\ndifferent things. If anything like this is to be adopted, we certainly\nneed to be precise about exactly what it is doing and which cases are\ncovered. I thought of looking at the documentation to see whether\nyou'd tried to clarify this there, and found that you hadn't written\nany.\n\nIn short, I think this patch is not really very close to being in\ncommittable shape even if nobody were objecting to the concept.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 14 Mar 2022 10:17:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
    "msg_contents": "> On Mon, Mar 14, 2022 at 10:17:57AM -0400, Robert Haas wrote:\n> On Sat, Mar 12, 2022 at 9:11 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> > Here is the limited version of list collapsing functionality, which\n> > doesn't utilize eval_const_expressions and ignores most of the stuff\n> > except ArrayExprs. Any thoughts/more suggestions?\n>\n> The proposed commit message says this commit intends to \"Make Consts\n> contribute nothing to the jumble hash if they're part of a series and\n> at position further that specified threshold.\" I'm not sure whether\n> that's what the patch actually implements because I can't immediately\n> understand the new logic you've added, but I think if we did what that\n> sentence said then, supposing the threshold is set to 1, it would\n> result in producing the same hash for \"x in (1,2)\" that we do for \"x\n> in (1,3)\" but a different hash for \"x in (2,3)\" which does not sound\n> like what we want. What I would have thought we'd do is: if the list\n> is all constants and long enough to satisfy the threshold then nothing\n> in the list gets jumbled.\n\nWell, yeah, the commit message is somewhat clumsy in this regard. It\nworks almost in the way you've described, except if the list is all\nconstants and long enough to satisfy the threshold then *first N\nelements (where N == threshold) will be jumbled -- to leave at least\nsome traces of it in pgss.\n\n> I'm a little surprised that there's not more context-awareness in this\n> code. It seems that it applies to every ArrayExpr found in the query,\n> which I think would extend to cases beyond something = IN(whatever).\n> In particular, any use of ARRAY[] in the query would be impacted. Now,\n> the comments seem to imply that's pretty intentional, but from the\n> user's point of view, WHERE x in (1,3) and x = any(array[1,3]) are two\n> different things. If anything like this is to be adopted, we certainly\n> need to be precise about exactly what it is doing and which cases are\n> covered.\n\nI'm not sure if I follow the last point. WHERE x in (1,3) and x =\nany(array[1,3]) are two different things for sure, but in which way are\nthey going to be mixed together because of this change? My goal was to\nmake only the following transformation, without leaving any uncertainty:\n\nWHERE x in (1, 2, 3, 4, 5) -> WHERE x in (1, 2, ...)\nWHERE x = any(array[1, 2, 3, 4, 5]) -> WHERE x = any(array[1, 2, ...])\n\n> I thought of looking at the documentation to see whether you'd tried\n> to clarify this there, and found that you hadn't written any.\n>\n> In short, I think this patch is not really very close to being in\n> committable shape even if nobody were objecting to the concept.\n\nSure, I'll add documentation. To be honest I'm not targeting PG15 with\nthis, just want to make some progress. Thanks for the feedback, I'm glad\nto see it coming!\n\n\n",
"msg_date": "Mon, 14 Mar 2022 15:57:34 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "On Mon, Mar 14, 2022 at 10:57 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> Well, yeah, the commit message is somewhat clumsy in this regard. It\n> works almost in the way you've described, except if the list is all\n> constants and long enough to satisfy the threshold then *first N\n> elements (where N == threshold) will be jumbled -- to leave at least\n> some traces of it in pgss.\n\nBut that seems to me to be a thing we would not want. Why do you think\notherwise?\n\n> I'm not sure if I follow the last point. WHERE x in (1,3) and x =\n> any(array[1,3]) are two different things for sure, but in which way are\n> they going to be mixed together because of this change? My goal was to\n> make only the following transformation, without leaving any uncertainty:\n>\n> WHERE x in (1, 2, 3, 4, 5) -> WHERE x in (1, 2, ...)\n> WHERE x = any(array[1, 2, 3, 4, 5]) -> WHERE x = any(array[1, 2, ...])\n\nI understand. I think it might be OK to transform both of those\nthings, but I don't think it's very clear either from the comments or\nthe nonexistent documentation that both of those cases are affected --\nand I think that needs to be clear. Not sure exactly how to do that,\njust saying that we can't add behavior unless it will be clear to\nusers what the behavior is.\n\n> Sure, I'll add documentation. To be honest I'm not targeting PG15 with\n> this, just want to make some progress.\n\nwfm!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 14 Mar 2022 11:02:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Mon, Mar 14, 2022 at 11:02:16AM -0400, Robert Haas wrote:\n> On Mon, Mar 14, 2022 at 10:57 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> > Well, yeah, the commit message is somewhat clumsy in this regard. It\n> > works almost in the way you've described, except if the list is all\n> > constants and long enough to satisfy the threshold then *first N\n> > elements (where N == threshold) will be jumbled -- to leave at least\n> > some traces of it in pgss.\n>\n> But that seems to me to be a thing we would not want. Why do you think\n> otherwise?\n\nHm. Well, if the whole list would be not jumbled, the transformation\nwould look like this, right?\n\nWHERE x in (1, 2, 3, 4, 5) -> WHERE x in (...)\n\nLeaving some number of original elements in place gives some clue for\nthe reader about at least what type of data the array has contained.\nWhich hopefully makes it a bit easier to identify even in the collapsed\nform:\n\nWHERE x in (1, 2, 3, 4, 5) -> WHERE x in (1, 2, ...)\n\n> > I'm not sure if I follow the last point. WHERE x in (1,3) and x =\n> > any(array[1,3]) are two different things for sure, but in which way are\n> > they going to be mixed together because of this change? My goal was to\n> > make only the following transformation, without leaving any uncertainty:\n> >\n> > WHERE x in (1, 2, 3, 4, 5) -> WHERE x in (1, 2, ...)\n> > WHERE x = any(array[1, 2, 3, 4, 5]) -> WHERE x = any(array[1, 2, ...])\n>\n> I understand. I think it might be OK to transform both of those\n> things, but I don't think it's very clear either from the comments or\n> the nonexistent documentation that both of those cases are affected --\n> and I think that needs to be clear. Not sure exactly how to do that,\n> just saying that we can't add behavior unless it will be clear to\n> users what the behavior is.\n\nYep, got it.\n\n\n",
"msg_date": "Mon, 14 Mar 2022 16:10:28 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Mar 14, 2022 at 10:57 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>> I'm not sure if I follow the last point. WHERE x in (1,3) and x =\n>> any(array[1,3]) are two different things for sure, but in which way are\n>> they going to be mixed together because of this change? My goal was to\n>> make only the following transformation, without leaving any uncertainty:\n>> \n>> WHERE x in (1, 2, 3, 4, 5) -> WHERE x in (1, 2, ...)\n>> WHERE x = any(array[1, 2, 3, 4, 5]) -> WHERE x = any(array[1, 2, ...])\n\n> I understand. I think it might be OK to transform both of those\n> things, but I don't think it's very clear either from the comments or\n> the nonexistent documentation that both of those cases are affected --\n> and I think that needs to be clear.\n\nWe've transformed IN(...) to ANY(ARRAY[...]) at the parser stage for a\nlong time, and this has been visible to users of either EXPLAIN or\npg_stat_statements for the same length of time. I doubt people are\ngoing to find that surprising. Even if they do, it's not the query\njumbler's fault.\n\nI do find it odd that the proposed patch doesn't cause the *entire*\nlist to be skipped over. That seems like extra complexity and confusion\nto no benefit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 14 Mar 2022 11:23:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Mon, Mar 14, 2022 at 11:23:17AM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>\n> I do find it odd that the proposed patch doesn't cause the *entire*\n> list to be skipped over. That seems like extra complexity and confusion\n> to no benefit.\n\nThat's a bit surprising for me, I haven't even thought that folks could\nthink this is an odd behaviour. As I've mentioned above, the original\nidea was to give some clues about what was inside the collapsed array,\nbut if everyone finds it unnecessary I can of course change it.\n\n\n",
"msg_date": "Mon, 14 Mar 2022 16:33:46 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Dmitry Dolgov <9erthalion6@gmail.com> writes:\n> On Mon, Mar 14, 2022 at 11:23:17AM -0400, Tom Lane wrote:\n>> I do find it odd that the proposed patch doesn't cause the *entire*\n>> list to be skipped over. That seems like extra complexity and confusion\n>> to no benefit.\n\n> That's a bit surprising for me, I haven't even thought that folks could\n> think this is an odd behaviour. As I've mentioned above, the original\n> idea was to give some clues about what was inside the collapsed array,\n> but if everyone finds it unnecessary I can of course change it.\n\nBut if what we're doing is skipping over an all-Consts list, then the\nindividual Consts would be elided from the pg_stat_statements entry\nanyway, no? All that would remain is information about how many such\nConsts there were, which is exactly the information you want to drop.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 14 Mar 2022 11:38:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Mon, Mar 14, 2022 at 11:38:23AM -0400, Tom Lane wrote:\n> Dmitry Dolgov <9erthalion6@gmail.com> writes:\n> > On Mon, Mar 14, 2022 at 11:23:17AM -0400, Tom Lane wrote:\n> >> I do find it odd that the proposed patch doesn't cause the *entire*\n> >> list to be skipped over. That seems like extra complexity and confusion\n> >> to no benefit.\n>\n> > That's a bit surprising for me, I haven't even thought that folks could\n> > think this is an odd behaviour. As I've mentioned above, the original\n> > idea was to give some clues about what was inside the collapsed array,\n> > but if everyone finds it unnecessary I can of course change it.\n>\n> But if what we're doing is skipping over an all-Consts list, then the\n> individual Consts would be elided from the pg_stat_statements entry\n> anyway, no? All that would remain is information about how many such\n> Consts there were, which is exactly the information you want to drop.\n\nHm, yes, you're right. I guess I was thinking about this more like about\nshortening some text with ellipsis, but indeed no actual Consts will end\nup in the result anyway. Thanks for clarification, will modify the\npatch!\n\n\n",
"msg_date": "Mon, 14 Mar 2022 16:51:50 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Mon, Mar 14, 2022 at 04:51:50PM +0100, Dmitry Dolgov wrote:\n> > On Mon, Mar 14, 2022 at 11:38:23AM -0400, Tom Lane wrote:\n> > Dmitry Dolgov <9erthalion6@gmail.com> writes:\n> > > On Mon, Mar 14, 2022 at 11:23:17AM -0400, Tom Lane wrote:\n> > >> I do find it odd that the proposed patch doesn't cause the *entire*\n> > >> list to be skipped over. That seems like extra complexity and confusion\n> > >> to no benefit.\n> >\n> > > That's a bit surprising for me, I haven't even thought that folks could\n> > > think this is an odd behaviour. As I've mentioned above, the original\n> > > idea was to give some clues about what was inside the collapsed array,\n> > > but if everyone finds it unnecessary I can of course change it.\n> >\n> > But if what we're doing is skipping over an all-Consts list, then the\n> > individual Consts would be elided from the pg_stat_statements entry\n> > anyway, no? All that would remain is information about how many such\n> > Consts there were, which is exactly the information you want to drop.\n>\n> Hm, yes, you're right. I guess I was thinking about this more like about\n> shortening some text with ellipsis, but indeed no actual Consts will end\n> up in the result anyway. Thanks for clarification, will modify the\n> patch!\n\nHere is another iteration. Now the patch doesn't leave any trailing\nConsts in the normalized query, and contains more documentation. I hope\nit's getting better.",
"msg_date": "Sat, 26 Mar 2022 18:40:35 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Sat, Mar 26, 2022 at 06:40:35PM +0100, Dmitry Dolgov wrote:\n> > On Mon, Mar 14, 2022 at 04:51:50PM +0100, Dmitry Dolgov wrote:\n> > > On Mon, Mar 14, 2022 at 11:38:23AM -0400, Tom Lane wrote:\n> > > Dmitry Dolgov <9erthalion6@gmail.com> writes:\n> > > > On Mon, Mar 14, 2022 at 11:23:17AM -0400, Tom Lane wrote:\n> > > >> I do find it odd that the proposed patch doesn't cause the *entire*\n> > > >> list to be skipped over. That seems like extra complexity and confusion\n> > > >> to no benefit.\n> > >\n> > > > That's a bit surprising for me, I haven't even thought that folks could\n> > > > think this is an odd behaviour. As I've mentioned above, the original\n> > > > idea was to give some clues about what was inside the collapsed array,\n> > > > but if everyone finds it unnecessary I can of course change it.\n> > >\n> > > But if what we're doing is skipping over an all-Consts list, then the\n> > > individual Consts would be elided from the pg_stat_statements entry\n> > > anyway, no? All that would remain is information about how many such\n> > > Consts there were, which is exactly the information you want to drop.\n> >\n> > Hm, yes, you're right. I guess I was thinking about this more like about\n> > shortening some text with ellipsis, but indeed no actual Consts will end\n> > up in the result anyway. Thanks for clarification, will modify the\n> > patch!\n>\n> Here is another iteration. Now the patch doesn't leave any trailing\n> Consts in the normalized query, and contains more documentation. I hope\n> it's getting better.\n\nHi,\n\nHere is the rebased version, with no other changes.",
"msg_date": "Sun, 24 Jul 2022 12:06:36 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Hello!\n\nUnfortunately the patch needs another rebase due to the recent split of guc.c (0a20ff54f5e66158930d5328f89f087d4e9ab400)\n\nI'm reviewing a patch on top of a previous commit and noticed a failed test:\n\n# Failed test 'no parameters missing from postgresql.conf.sample'\n# at t/003_check_guc.pl line 82.\n# got: '1'\n# expected: '0'\n# Looks like you failed 1 test of 3.\nt/003_check_guc.pl .............. \n\nThe new option has not been added to the postgresql.conf.sample\n\nPS: I would also like to have such a feature. It's hard to increase pg_stat_statements.max or lose some entries just because some ORM sends requests with a different number of parameters.\n\nregards, Sergei\n\n\n",
"msg_date": "Fri, 16 Sep 2022 21:25:13 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re:pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Fri, Sep 16, 2022 at 09:25:13PM +0300, Sergei Kornilov wrote:\n> Hello!\n>\n> Unfortunately the patch needs another rebase due to the recent split of guc.c (0a20ff54f5e66158930d5328f89f087d4e9ab400)\n>\n> I'm reviewing a patch on top of a previous commit and noticed a failed test:\n>\n> # Failed test 'no parameters missing from postgresql.conf.sample'\n> # at t/003_check_guc.pl line 82.\n> # got: '1'\n> # expected: '0'\n> # Looks like you failed 1 test of 3.\n> t/003_check_guc.pl ..............\n>\n> The new option has not been added to the postgresql.conf.sample\n>\n> PS: I would also like to have such a feature. It's hard to increase pg_stat_statements.max or lose some entries just because some ORM sends requests with a different number of parameters.\n\nThanks! I'll post the rebased version soon.\n\n\n",
"msg_date": "Sat, 24 Sep 2022 16:07:14 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Sat, Sep 24, 2022 at 04:07:14PM +0200, Dmitry Dolgov wrote:\n> > On Fri, Sep 16, 2022 at 09:25:13PM +0300, Sergei Kornilov wrote:\n> > Hello!\n> >\n> > Unfortunately the patch needs another rebase due to the recent split of guc.c (0a20ff54f5e66158930d5328f89f087d4e9ab400)\n> >\n> > I'm reviewing a patch on top of a previous commit and noticed a failed test:\n> >\n> > # Failed test 'no parameters missing from postgresql.conf.sample'\n> > # at t/003_check_guc.pl line 82.\n> > # got: '1'\n> > # expected: '0'\n> > # Looks like you failed 1 test of 3.\n> > t/003_check_guc.pl ..............\n> >\n> > The new option has not been added to the postgresql.conf.sample\n> >\n> > PS: I would also like to have such a feature. It's hard to increase pg_stat_statements.max or lose some entries just because some ORM sends requests with a different number of parameters.\n>\n> Thanks! I'll post the rebased version soon.\n\nAnd here it is.",
"msg_date": "Sun, 25 Sep 2022 01:59:39 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "On Sun, 25 Sept 2022 at 05:29, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Sat, Sep 24, 2022 at 04:07:14PM +0200, Dmitry Dolgov wrote:\n> > > On Fri, Sep 16, 2022 at 09:25:13PM +0300, Sergei Kornilov wrote:\n> > > Hello!\n> > >\n> > > Unfortunately the patch needs another rebase due to the recent split of guc.c (0a20ff54f5e66158930d5328f89f087d4e9ab400)\n> > >\n> > > I'm reviewing a patch on top of a previous commit and noticed a failed test:\n> > >\n> > > # Failed test 'no parameters missing from postgresql.conf.sample'\n> > > # at t/003_check_guc.pl line 82.\n> > > # got: '1'\n> > > # expected: '0'\n> > > # Looks like you failed 1 test of 3.\n> > > t/003_check_guc.pl ..............\n> > >\n> > > The new option has not been added to the postgresql.conf.sample\n> > >\n> > > PS: I would also like to have such a feature. It's hard to increase pg_stat_statements.max or lose some entries just because some ORM sends requests with a different number of parameters.\n> >\n> > Thanks! I'll post the rebased version soon.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\n456fa635a909ee36f73ca84d340521bd730f265f ===\n=== applying patch\n./v9-0001-Prevent-jumbling-of-every-element-in-ArrayExpr.patch\n....\ncan't find file to patch at input line 746\nPerhaps you used the wrong -p or --strip option?\nThe text leading up to this was:\n--------------------------\n|diff --git a/src/backend/utils/misc/queryjumble.c\nb/src/backend/utils/misc/queryjumble.c\n|index a67487e5fe..063b4be725 100644\n|--- a/src/backend/utils/misc/queryjumble.c\n|+++ b/src/backend/utils/misc/queryjumble.c\n--------------------------\nNo file to patch. 
Skipping patch.\n8 out of 8 hunks ignored\ncan't find file to patch at input line 913\nPerhaps you used the wrong -p or --strip option?\nThe text leading up to this was:\n--------------------------\n|diff --git a/src/include/utils/queryjumble.h b/src/include/utils/queryjumble.h\n|index 3c2d9beab2..b50cc42d4e 100644\n|--- a/src/include/utils/queryjumble.h\n|+++ b/src/include/utils/queryjumble.h\n--------------------------\nNo file to patch. Skipping patch.\n\n[1] - http://cfbot.cputube.org/patch_41_2837.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 27 Jan 2023 20:15:29 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Fri, Jan 27, 2023 at 08:15:29PM +0530, vignesh C wrote:\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nThanks. I think this one should do the trick.",
"msg_date": "Sun, 29 Jan 2023 13:22:42 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Em dom., 29 de jan. de 2023 às 09:24, Dmitry Dolgov <9erthalion6@gmail.com>\nescreveu:\n\n> > On Fri, Jan 27, 2023 at 08:15:29PM +0530, vignesh C wrote:\n> > The patch does not apply on top of HEAD as in [1], please post a rebased\n> patch:\n>\n> Thanks. I think this one should do the trick.\n>\n\nThere is a typo on DOC part\n+ and it's length is larger than <varname> const_merge_threshold\n</varname>,\n+ then array elements will contribure nothing to the query\nidentifier.\n+ Thus the query will get the same identifier no matter how many\nconstants\n\nThat \"contribure\" should be \"contribute\"\n\nregards\nMarcos\n\nEm dom., 29 de jan. de 2023 às 09:24, Dmitry Dolgov <9erthalion6@gmail.com> escreveu:> On Fri, Jan 27, 2023 at 08:15:29PM +0530, vignesh C wrote:\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nThanks. I think this one should do the trick.There is a typo on DOC part+ and it's length is larger than <varname> const_merge_threshold </varname>,+ then array elements will contribure nothing to the query identifier.+ Thus the query will get the same identifier no matter how many constantsThat \"contribure\" should be \"contribute\"regardsMarcos",
"msg_date": "Sun, 29 Jan 2023 09:56:02 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Sun, Jan 29, 2023 at 09:56:02AM -0300, Marcos Pegoraro wrote:\n> Em dom., 29 de jan. de 2023 �s 09:24, Dmitry Dolgov <9erthalion6@gmail.com>\n> escreveu:\n>\n> > > On Fri, Jan 27, 2023 at 08:15:29PM +0530, vignesh C wrote:\n> > > The patch does not apply on top of HEAD as in [1], please post a rebased\n> > patch:\n> >\n> > Thanks. I think this one should do the trick.\n> >\n>\n> There is a typo on DOC part\n> + and it's length is larger than <varname> const_merge_threshold\n> </varname>,\n> + then array elements will contribure nothing to the query\n> identifier.\n> + Thus the query will get the same identifier no matter how many\n> constants\n>\n> That \"contribure\" should be \"contribute\"\n\nIndeed, thanks for noticing.",
"msg_date": "Sun, 29 Jan 2023 14:32:19 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "This appears to have massive conflicts. Would you please rebase?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"¿Cómo puedes confiar en algo que pagas y que no ves,\ny no confiar en algo que te dan y te lo muestran?\" (Germán Poo)\n\n\n",
"msg_date": "Thu, 2 Feb 2023 15:07:27 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Thu, Feb 02, 2023 at 03:07:27PM +0100, Alvaro Herrera wrote:\n> This appears to have massive conflicts. Would you please rebase?\n\nSure, I was already mentally preparing myself to do so in the view of\nrecent changes in query jumbling. Will post soon.\n\n\n",
"msg_date": "Thu, 2 Feb 2023 16:05:54 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Thu, Feb 02, 2023 at 04:05:54PM +0100, Dmitry Dolgov wrote:\n> > On Thu, Feb 02, 2023 at 03:07:27PM +0100, Alvaro Herrera wrote:\n> > This appears to have massive conflicts. Would you please rebase?\n>\n> Sure, I was already mentally preparing myself to do so in the view of\n> recent changes in query jumbling. Will post soon.\n\nHere is the rebased version. To adapt to the latest changes, I've marked\nArrayExpr with custom_query_jumble to implement this functionality, but\ntried to make the actual merge logic relatively independent. Otherwise,\neverything is the same.",
"msg_date": "Sat, 4 Feb 2023 18:08:41 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "On Sat, Feb 04, 2023 at 06:08:41PM +0100, Dmitry Dolgov wrote:\n> Here is the rebased version. To adapt to the latest changes, I've marked\n> ArrayExpr with custom_query_jumble to implement this functionality, but\n> tried to make the actual merge logic relatively independent. Otherwise,\n> everything is the same.\n\nI was quickly looking at this patch, so these are rough impressions.\n\n+ bool merged; /* whether or not the location was marked as\n+ not contributing to jumble */\n\nThis part of the patch is a bit disturbing.. We have node attributes\nto track if portions of a node should be ignored or have a location\nmarked, still this \"merged\" flag is used as an extension to track if a\nlocation should be done or not. Is that a concept that had better be\ncontrolled via a new node attribute?\n\n+--\n+-- Consts merging\n+--\n+CREATE TABLE test_merge (id int, data int);\n+-- IN queries\n+-- No merging\n\nWould it be better to split this set of tests into a new file? FWIW,\nI have a patch in baking process that refactors a bit the whole,\nbefore being able to extend it so as we have more coverage for\nnormalized utility queries, as of now the query strings stored by\npg_stat_statements don't reflect that even if the jumbling computation\nmarks the location of the Const nodes included in utility statements\n(partition bounds, queries of COPY, etc.). I should be able to send\nthat tomorrow, and my guess that you could take advantage of that\neven for this thread.\n--\nMichael",
"msg_date": "Sun, 5 Feb 2023 10:30:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Sun, Feb 05, 2023 at 10:30:25AM +0900, Michael Paquier wrote:\n> On Sat, Feb 04, 2023 at 06:08:41PM +0100, Dmitry Dolgov wrote:\n> > Here is the rebased version. To adapt to the latest changes, I've marked\n> > ArrayExpr with custom_query_jumble to implement this functionality, but\n> > tried to make the actual merge logic relatively independent. Otherwise,\n> > everything is the same.\n>\n> I was quickly looking at this patch, so these are rough impressions.\n>\n> + bool merged; /* whether or not the location was marked as\n> + not contributing to jumble */\n>\n> This part of the patch is a bit disturbing.. We have node attributes\n> to track if portions of a node should be ignored or have a location\n> marked, still this \"merged\" flag is used as an extension to track if a\n> location should be done or not. Is that a concept that had better be\n> controlled via a new node attribute?\n\nGood question. I need to think a bit more if it's possible to leverage\nnode attributes mechanism, but at the moment I'm still inclined to\nextend LocationLen. The reason is that it doesn't exactly serve the\ntracking purpose, i.e. whether to capture a location (I have to update\nthe code commentary), it helps differentiate cases when locations A and\nD are obtained from merging A B C D instead of just being A and D.\n\nI'm thinking about this in the following way: the core jumbling logic is\nresponsible for deriving locations based on the input expressions; in\nthe case of merging we produce less locations; pgss have to represent\nthe result only using locations and has to be able to differentiate\nsimple locations and locations after merging.\n\n> +--\n> +-- Consts merging\n> +--\n> +CREATE TABLE test_merge (id int, data int);\n> +-- IN queries\n> +-- No merging\n>\n> Would it be better to split this set of tests into a new file? 
FWIW,\n> I have a patch in baking process that refactors a bit the whole,\n> before being able to extend it so as we have more coverage for\n> normalized utility queries, as of now the query strings stored by\n> pg_stat_statements don't reflect that even if the jumbling computation\n> marks the location of the Const nodes included in utility statements\n> (partition bounds, queries of COPY, etc.). I should be able to send\n> that tomorrow, and my guess that you could take advantage of that\n> even for this thread.\n\nSure, I'll take a look how I can benefit from your work, thanks.\n\n\n",
"msg_date": "Sun, 5 Feb 2023 12:33:46 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Dmitry Dolgov <9erthalion6@gmail.com> writes:\n> I'm thinking about this in the following way: the core jumbling logic is\n> responsible for deriving locations based on the input expressions; in\n> the case of merging we produce less locations; pgss have to represent\n> the result only using locations and has to be able to differentiate\n> simple locations and locations after merging.\n\nUh ... why? ISTM you're just going to elide all inside the IN,\nso why do you need more than a start and stop position?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 05 Feb 2023 11:02:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Sun, Feb 05, 2023 at 11:02:32AM -0500, Tom Lane wrote:\n> Dmitry Dolgov <9erthalion6@gmail.com> writes:\n> > I'm thinking about this in the following way: the core jumbling logic is\n> > responsible for deriving locations based on the input expressions; in\n> > the case of merging we produce less locations; pgss have to represent\n> > the result only using locations and has to be able to differentiate\n> > simple locations and locations after merging.\n>\n> Uh ... why? ISTM you're just going to elide all inside the IN,\n> so why do you need more than a start and stop position?\n\nExactly, start and stop positions. But if there would be no information\nthat merging was applied, the following queries will look the same after\njumbling, right?\n\n -- input query\n SELECT * FROM test_merge WHERE id IN (1, 2);\n -- jumbling result, two LocationLen, for values 1 and 2\n SELECT * FROM test_merge WHERE id IN ($1, $2);\n\n -- input query\n SELECT * FROM test_merge WHERE id IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10);\n -- jumbling result, two LocationLen after merging, for values 1 and 10\n SELECT * FROM test_merge WHERE id IN (...);\n -- without remembering about merging the result would be\n SELECT * FROM test_merge WHERE id IN ($1, $2);\n\n\n",
"msg_date": "Sun, 5 Feb 2023 20:56:00 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Hello!\n\nUnfortunately, rebase is needed again due to recent changes in queryjumblefuncs ( 9ba37b2cb6a174b37fc51d0649ef73e56eae27fc )\n\nIt seems a little strange to me that with const_merge_threshold = 1, such a test case gives the same result as with const_merge_threshold = 2\n\nselect pg_stat_statements_reset();\nset const_merge_threshold to 1;\nselect * from test where i in (1,2,3);\nselect * from test where i in (1,2);\nselect * from test where i in (1);\nselect query, calls from pg_stat_statements order by query;\n\n query | calls \n-------------------------------------+-------\n select * from test where i in (...) | 2\n select * from test where i in ($1) | 1\n\nProbably const_merge_threshold = 1 should produce only \"i in (...)\"?\n\nconst_merge_threshold is \"the minimal length of an array\" (more or equal) or \"array .. length is larger than\" (not equals)? I think the documentation is ambiguous in this regard.\n\nI also noticed a typo in guc_tables.c: \"Sets the minimal numer of constants in an array\" -> number\n\nregards, Sergei\n\n\n",
"msg_date": "Tue, 07 Feb 2023 23:14:52 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re:pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "On 07.02.23 21:14, Sergei Kornilov wrote:\n> It seems a little strange to me that with const_merge_threshold = 1, such a test case gives the same result as with const_merge_threshold = 2\n\nWhat is the point of making this a numeric setting? Either you want to \nmerge all values or you don't want to merge any values.\n\n\n",
"msg_date": "Thu, 9 Feb 2023 14:30:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Tue, Feb 07, 2023 at 11:14:52PM +0300, Sergei Kornilov wrote:\n> Hello!\n\nThanks for reviewing.\n\n> Unfortunately, rebase is needed again due to recent changes in queryjumblefuncs ( 9ba37b2cb6a174b37fc51d0649ef73e56eae27fc )\n\nYep, my favourite game, rebaseball. Will post a new version soon, after\nfiguring out all the recent questions.\n\n> It seems a little strange to me that with const_merge_threshold = 1, such a test case gives the same result as with const_merge_threshold = 2\n>\n> select pg_stat_statements_reset();\n> set const_merge_threshold to 1;\n> select * from test where i in (1,2,3);\n> select * from test where i in (1,2);\n> select * from test where i in (1);\n> select query, calls from pg_stat_statements order by query;\n>\n> query | calls\n> -------------------------------------+-------\n> select * from test where i in (...) | 2\n> select * from test where i in ($1) | 1\n>\n> Probably const_merge_threshold = 1 should produce only \"i in (...)\"?\n\nWell, it's not intentional, probably I need to be more careful with\noff-by-one. Although I agree to a certain extent with Peter questioning\nthe value of having numerical option here, let me think about this.\n\n> const_merge_threshold is \"the minimal length of an array\" (more or equal) or \"array .. length is larger than\" (not equals)? I think the documentation is ambiguous in this regard.\n>\n> I also noticed a typo in guc_tables.c: \"Sets the minimal numer of constants in an array\" -> number\n\nYep, I'll rephrase the documentation.\n\n\n",
"msg_date": "Thu, 9 Feb 2023 16:02:38 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Thu, Feb 09, 2023 at 02:30:34PM +0100, Peter Eisentraut wrote:\n> On 07.02.23 21:14, Sergei Kornilov wrote:\n> > It seems a little strange to me that with const_merge_threshold = 1, such a test case gives the same result as with const_merge_threshold = 2\n>\n> What is the point of making this a numeric setting? Either you want to\n> merge all values or you don't want to merge any values.\n\nAt least in theory the definition of \"too many constants\" is different\nfor different use cases and I see allowing to configure it as a way of\nreducing the level of surprise here. The main scenario for a numerical\nsetting would be to distinguish between normal usage with just a handful\nof constants (and the user expecting to see them represented in pgss)\nand some sort of outliers with thousands of constants in a query (e.g.\nas a defence mechanism for the infrastructure working with those\nmetrics). But I agree that it's not clear how much value is in that.\n\nNot having strong opinion about this I would be fine changing it to a\nboolean option (with an actual limit hidden internally) if everyone\nagrees it fits better.\n\n\n",
"msg_date": "Thu, 9 Feb 2023 16:12:26 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "On 2023-Feb-09, Dmitry Dolgov wrote:\n\n> > On Thu, Feb 09, 2023 at 02:30:34PM +0100, Peter Eisentraut wrote:\n\n> > What is the point of making this a numeric setting? Either you want\n> > to merge all values or you don't want to merge any values.\n> \n> At least in theory the definition of \"too many constants\" is different\n> for different use cases and I see allowing to configure it as a way of\n> reducing the level of surprise here.\n\nI was thinking about this a few days ago and I agree that we don't\nnecessarily want to make it just a boolean thing; we may want to make it\nmore complex. One trivial idea is to make it group entries in powers of\n10: for 0-9 elements, you get one entry, and 10-99 you get a different\none, and so on:\n\n# group everything in a single bucket\nconst_merge_threshold = true / yes / on \n\n# group 0-9, 10-99, 100-999, 1000-9999\nconst_merge_treshold = powers\n\nIdeally the value would be represented somehow in the query text. For\nexample\n\n query | calls\n----------------------------------------------------------+-------\n select * from test where i in ({... 0-9 entries ...}) | 2\n select * from test where i in ({... 10-99 entries ...}) | 1\n\nWhat do you think? The jumble would have to know how to reduce all\nvalues within each power-of-ten group to one specific value, but I don't\nthink that should be particularly difficult.\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Find a bug in a program, and fix it, and the program will work today.\nShow the program how to find and fix a bug, and the program\nwill work forever\" (Oliver Silfridge)\n\n\n",
"msg_date": "Thu, 9 Feb 2023 18:26:51 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Thu, Feb 09, 2023 at 06:26:51PM +0100, Alvaro Herrera wrote:\n> On 2023-Feb-09, Dmitry Dolgov wrote:\n>\n> > > On Thu, Feb 09, 2023 at 02:30:34PM +0100, Peter Eisentraut wrote:\n>\n> > > What is the point of making this a numeric setting? Either you want\n> > > to merge all values or you don't want to merge any values.\n> >\n> > At least in theory the definition of \"too many constants\" is different\n> > for different use cases and I see allowing to configure it as a way of\n> > reducing the level of surprise here.\n>\n> I was thinking about this a few days ago and I agree that we don't\n> necessarily want to make it just a boolean thing; we may want to make it\n> more complex. One trivial idea is to make it group entries in powers of\n> 10: for 0-9 elements, you get one entry, and 10-99 you get a different\n> one, and so on:\n>\n> # group everything in a single bucket\n> const_merge_threshold = true / yes / on\n>\n> # group 0-9, 10-99, 100-999, 1000-9999\n> const_merge_treshold = powers\n>\n> Ideally the value would be represented somehow in the query text. For\n> example\n>\n> query | calls\n> ----------------------------------------------------------+-------\n> select * from test where i in ({... 0-9 entries ...}) | 2\n> select * from test where i in ({... 10-99 entries ...}) | 1\n>\n> What do you think? The jumble would have to know how to reduce all\n> values within each power-of-ten group to one specific value, but I don't\n> think that should be particularly difficult.\n\nYeah, it sounds appealing and conveniently addresses the question of\nlosing the information about how many constants originally were there.\nNot sure if the example above would be the most natural way to represent\nit in the query text, but otherwise I'm going to try implementing this.\nStay tuned.\n\n\n",
"msg_date": "Thu, 9 Feb 2023 20:43:29 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Hi,\n\nOn 2/9/23 16:02, Dmitry Dolgov wrote:\n>> Unfortunately, rebase is needed again due to recent changes in queryjumblefuncs ( 9ba37b2cb6a174b37fc51d0649ef73e56eae27fc )\nI reviewed the last patch applied to some commit from Feb. 4th.\n>> It seems a little strange to me that with const_merge_threshold = 1, such a test case gives the same result as with const_merge_threshold = 2\n>>\n>> select pg_stat_statements_reset();\n>> set const_merge_threshold to 1;\n>> select * from test where i in (1,2,3);\n>> select * from test where i in (1,2);\n>> select * from test where i in (1);\n>> select query, calls from pg_stat_statements order by query;\n>>\n>> query | calls\n>> -------------------------------------+-------\n>> select * from test where i in (...) | 2\n>> select * from test where i in ($1) | 1\n>>\n>> Probably const_merge_threshold = 1 should produce only \"i in (...)\"?\n> Well, it's not intentional, probably I need to be more careful with\n> off-by-one. Although I agree to a certain extent with Peter questioning\n\nPlease add tests for all the corner cases. At least for (1) IN only \ncontains a single element and (2) const_merge_threshold = 1.\n\nBeyond that:\n\n- There's a comment about find_const_walker(). I cannot find that \nfunction anywhere. What am I missing?\n\n- What about renaming IsConstList() to IsMergableConstList().\n\n- Don't you intend to use the NUMERIC data column in SELECT * FROM \ntest_merge_numeric WHERE id IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)? \nOtherwise, the test is identical to previous test cases and you're not \nchecking for what happens with NUMERICs which are wrapped in FuncExpr \nbecause of the implicit coercion.\n\n- Don't we want to extend IsConstList() to allow a list of all \nimplicitly coerced constants? It's inconsistent that otherwise e.g. 
\nNUMERICs don't work.\n\n- Typo in /* The firsts merged constant */ (first not firsts)\n\n- Prepared statements are not supported as they contain INs with Param \ninstead of Const nodes. While less likely, I've seen applications that \nuse prepared statements in conjunction with queries generated through a \nUI which ended up with tons of prepared queries with different number of \nelements in the IN clause. Not necessarily something that must go into \nthis patch but maybe worth thinking about.\n\n- The setting name const_merge_threshold is not very telling without \nknowing the context. While being a little longer, what about \njumble_const_merge_threshold or queryid_const_merge_threshold or similar?\n\n- Why do we actually only want to merge constants? Why don't we ignore \nthe type of element in the IN and merge whatever is there? Is this \nbecause the original jumbling logic as of now only has support for \nconstants?\n\n- Ideally we would even remove duplicates. That would even improve \ncardinality estimation but I guess we don't want to spend the cycles on \ndoing so in the planner?\n\n- Why did you change palloc() to palloc0() for clocations array? The \nlength is initialized to 0 and FWICS RecordConstLocation() initializes \nall members. Seems to me like we don't have to spend these cycles.\n\n- Can't the logic at the end of IsConstList() not be simplified to:\n\n foreach(temp, elements)\n if (!IsA(lfirst(temp), Const))\n return false;\n\n // All elements are of type Const\n *firstConst = linitial_node(Const, elements);\n *lastConst = llast_node(Const, elements);\n return true;\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Sat, 11 Feb 2023 11:03:36 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Sat, Feb 11, 2023 at 11:03:36AM +0100, David Geier wrote:\n> Hi,\n>\n> On 2/9/23 16:02, Dmitry Dolgov wrote:\n> > > Unfortunately, rebase is needed again due to recent changes in queryjumblefuncs ( 9ba37b2cb6a174b37fc51d0649ef73e56eae27fc )\n> I reviewed the last patch applied to some commit from Feb. 4th.\n\nThanks for looking. Few quick answers about high-level questions below,\nthe rest I'll incorporate in the new version.\n\n> - There's a comment about find_const_walker(). I cannot find that function\n> anywhere. What am I missing?\n>\n> [...]\n>\n> - Don't you intend to use the NUMERIC data column in SELECT * FROM\n> test_merge_numeric WHERE id IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)? Otherwise,\n> the test is identical to previous test cases and you're not checking for\n> what happens with NUMERICs which are wrapped in FuncExpr because of the\n> implicit coercion.\n>\n> - Don't we want to extend IsConstList() to allow a list of all implicitly\n> coerced constants? It's inconsistent that otherwise e.g. NUMERICs don't\n> work.\n>\n> [...]\n>\n> - Prepared statements are not supported as they contain INs with Param\n> instead of Const nodes. While less likely, I've seen applications that use\n> prepared statements in conjunction with queries generated through a UI which\n> ended up with tons of prepared queries with different number of elements in\n> the IN clause. Not necessarily something that must go into this patch but\n> maybe worth thinking about.\n\nThe original version of the patch was doing all of this, i.e. handling\nnumerics, Param nodes, RTE_VALUES. The commentary about\nfind_const_walker in tests is referring to a part of that, that was\ndealing with evaluation of expression to see if it could be reduced to a\nconstant.\n\nUnfortunately there was a significant push back from reviewers because\nof those features. 
That's why I've reduced the patch to its minimally\nuseful version, having in mind re-implementing them as follow-up patches\nin the future. This is the reason as well why I left tests covering all\nthis missing functionality -- as breadcrumbs to already discovered\ncases, important for the future extensions.\n\n> - Why do we actually only want to merge constants? Why don't we ignore the\n> type of element in the IN and merge whatever is there? Is this because the\n> original jumbling logic as of now only has support for constants?\n>\n> - Ideally we would even remove duplicates. That would even improve\n> cardinality estimation but I guess we don't want to spend the cycles on\n> doing so in the planner?\n\nI believe these points are beyond the patch goals, as it's less clear\n(at least to me) if it's safe or desirable to do so.\n\n\n",
"msg_date": "Sat, 11 Feb 2023 11:47:07 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Sat, Feb 11, 2023 at 11:47:07AM +0100, Dmitry Dolgov wrote:\n>\n> The original version of the patch was doing all of this, i.e. handling\n> numerics, Param nodes, RTE_VALUES. The commentary about\n> find_const_walker in tests is referring to a part of that, that was\n> dealing with evaluation of expression to see if it could be reduced to a\n> constant.\n>\n> Unfortunately there was a significant push back from reviewers because\n> of those features. That's why I've reduced the patch to it's minimally\n> useful version, having in mind re-implementing them as follow-up patches\n> in the future. This is the reason as well why I left tests covering all\n> this missing functionality -- as breadcrumbs to already discovered\n> cases, important for the future extensions.\n\nI'd like to elaborate on this a bit and remind about the origins of the\npatch, as it's lost somewhere in the beginning of the thread. The idea\nis not pulled out of thin air, everything is coming from our attempts to\nimprove one particular monitoring infrastructure in a real commercial\nsetting. Every covered use case and test in the original proposal was a\nresult of field trials, when some application-side library or ORM was\nresponsible for gigabytes of data in pgss, choking the monitoring agent.\n\n\n",
"msg_date": "Sat, 11 Feb 2023 13:08:20 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Hi,\n\nOn 2/11/23 13:08, Dmitry Dolgov wrote:\n>> On Sat, Feb 11, 2023 at 11:47:07AM +0100, Dmitry Dolgov wrote:\n>>\n>> The original version of the patch was doing all of this, i.e. handling\n>> numerics, Param nodes, RTE_VALUES. The commentary about\n>> find_const_walker in tests is referring to a part of that, that was\n>> dealing with evaluation of expression to see if it could be reduced to a\n>> constant.\n>>\n>> Unfortunately there was a significant push back from reviewers because\n>> of those features. That's why I've reduced the patch to it's minimally\n>> useful version, having in mind re-implementing them as follow-up patches\n>> in the future. This is the reason as well why I left tests covering all\n>> this missing functionality -- as breadcrumbs to already discovered\n>> cases, important for the future extensions.\n> I'd like to elaborate on this a bit and remind about the origins of the\n> patch, as it's lost somewhere in the beginning of the thread. The idea\n> is not pulled out of thin air, everything is coming from our attempts to\n> improve one particular monitoring infrastructure in a real commercial\n> setting. Every covered use case and test in the original proposal was a\n> result of field trials, when some application-side library or ORM was\n> responsible for gigabytes of data in pgss, chocking the monitoring agent.\n\nThanks for the clarifications. I didn't mean to contend the usefulness \nof the patch and I wasn't aware that you already jumped through the \nloops of handling Param, etc. Seems like supporting only constants is a \ngood starting point. The only thing that is likely confusing for users \nis that NUMERICs (and potentially constants of other types) are \nunsupported. Wouldn't it be fairly simple to support them via something \nlike the following?\n\n is_const(element) || (is_coercion(element) && is_const(element->child))\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Wed, 15 Feb 2023 08:51:56 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Wed, Feb 15, 2023 at 08:51:56AM +0100, David Geier wrote:\n> Hi,\n>\n> On 2/11/23 13:08, Dmitry Dolgov wrote:\n> > > On Sat, Feb 11, 2023 at 11:47:07AM +0100, Dmitry Dolgov wrote:\n> > >\n> > > The original version of the patch was doing all of this, i.e. handling\n> > > numerics, Param nodes, RTE_VALUES. The commentary about\n> > > find_const_walker in tests is referring to a part of that, that was\n> > > dealing with evaluation of expression to see if it could be reduced to a\n> > > constant.\n> > >\n> > > Unfortunately there was a significant push back from reviewers because\n> > > of those features. That's why I've reduced the patch to it's minimally\n> > > useful version, having in mind re-implementing them as follow-up patches\n> > > in the future. This is the reason as well why I left tests covering all\n> > > this missing functionality -- as breadcrumbs to already discovered\n> > > cases, important for the future extensions.\n> > I'd like to elaborate on this a bit and remind about the origins of the\n> > patch, as it's lost somewhere in the beginning of the thread. The idea\n> > is not pulled out of thin air, everything is coming from our attempts to\n> > improve one particular monitoring infrastructure in a real commercial\n> > setting. Every covered use case and test in the original proposal was a\n> > result of field trials, when some application-side library or ORM was\n> > responsible for gigabytes of data in pgss, chocking the monitoring agent.\n>\n> Thanks for the clarifications. I didn't mean to contend the usefulness of\n> the patch and I wasn't aware that you already jumped through the loops of\n> handling Param, etc.\n\nNo worries, I just wanted to emphasize that we've already collected\nquite some number of use cases.\n\n> Seems like supporting only constants is a good starting\n> point. The only thing that is likely confusing for users is that NUMERICs\n> (and potentially constants of other types) are unsupported. 
Wouldn't it be\n> fairly simple to support them via something like the following?\n>\n>    is_const(element) || (is_coercion(element) && is_const(element->child))\n\nIt definitely makes sense to implement that, although I don't think it's\ngoing to be acceptable to do that via directly listing conditions an\nelement has to satisfy. It probably has to be more flexible, since we\nwould like to extend it in the future. My plan is to address this in a\nfollow-up patch, when the main mechanism is approved. Would you agree\nwith this approach?\n\n\n",
"msg_date": "Fri, 17 Feb 2023 16:46:02 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Thu, Feb 09, 2023 at 08:43:29PM +0100, Dmitry Dolgov wrote:\n> > On Thu, Feb 09, 2023 at 06:26:51PM +0100, Alvaro Herrera wrote:\n> > On 2023-Feb-09, Dmitry Dolgov wrote:\n> >\n> > > > On Thu, Feb 09, 2023 at 02:30:34PM +0100, Peter Eisentraut wrote:\n> >\n> > > > What is the point of making this a numeric setting? Either you want\n> > > > to merge all values or you don't want to merge any values.\n> > >\n> > > At least in theory the definition of \"too many constants\" is different\n> > > for different use cases and I see allowing to configure it as a way of\n> > > reducing the level of surprise here.\n> >\n> > I was thinking about this a few days ago and I agree that we don't\n> > necessarily want to make it just a boolean thing; we may want to make it\n> > more complex. One trivial idea is to make it group entries in powers of\n> > 10: for 0-9 elements, you get one entry, and 10-99 you get a different\n> > one, and so on:\n> >\n> > # group everything in a single bucket\n> > const_merge_threshold = true / yes / on\n> >\n> > # group 0-9, 10-99, 100-999, 1000-9999\n> > const_merge_treshold = powers\n> >\n> > Ideally the value would be represented somehow in the query text. For\n> > example\n> >\n> > query | calls\n> > ----------------------------------------------------------+-------\n> > select * from test where i in ({... 0-9 entries ...}) | 2\n> > select * from test where i in ({... 10-99 entries ...}) | 1\n> >\n> > What do you think? 
The jumble would have to know how to reduce all\n> > values within each power-of-ten group to one specific value, but I don't\n> > think that should be particularly difficult.\n>\n> Yeah, it sounds appealing and conveniently addresses the question of\n> losing the information about how many constants originally were there.\n> Not sure if the example above would be the most natural way to represent\n> it in the query text, but otherwise I'm going to try implementing this.\n> Stay tuned.\n\nIt took me a couple of evenings, here is what I've got:\n\n* The representation is not that far away from your proposal, I've\n settled on:\n\n SELECT * FROM test_merge WHERE id IN (... [10-99 entries])\n\n* To not reinvent the wheel, I've reused the decimalLength function from\n numutils, hence one more patch to make it available to reuse.\n\n* This approach resolves my concerns about letting people tune\n the behaviour of merging, as now it's possible to distinguish between\n calls with different number of constants up to the power of 10. So\n I've decided to simplify the configuration and make the guc boolean to\n turn it off or on.\n\n* To separate queries with constants falling into different ranges\n (10-99, 100-999, etc), the order of magnitude is added into the jumble\n hash.\n\n* I've incorporated feedback from Sergei and David, as well as tried to\n make comments and documentation more clear.\n\nAny feedback is welcomed, thanks!",
"msg_date": "Fri, 17 Feb 2023 16:46:43 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Hi,\n\n>> Seems like supporting only constants is a good starting\n>> point. The only thing that is likely confusing for users is that NUMERICs\n>> (and potentially constants of other types) are unsupported. Wouldn't it be\n>> fairly simple to support them via something like the following?\n>>\n>> is_const(element) || (is_coercion(element) && is_const(element->child))\n> It definitely makes sense to implement that, although I don't think it's\n> going to be acceptable to do that via directly listing conditions an\n> element has to satisfy. It probably has to be more flexible, sice we\n> would like to extend it in the future. My plan is to address this in a\n> follow-up patch, when the main mechanism is approved. Would you agree\n> with this approach?\n\nI still think it's counterintuitive and I'm pretty sure people would \neven report this as a bug because not knowing about the difference in \ninternal representation you would expect NUMERICs to work the same way \nother constants work. If anything we would have to document it.\n\nCan't we do something pragmatic and have something like \nIsMergableInElement() which for now only supports constants and later \ncan be extended? Or what exactly do you mean by \"more flexible\"?\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Thu, 23 Feb 2023 09:48:35 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Thu, Feb 23, 2023 at 09:48:35AM +0100, David Geier wrote:\n> Hi,\n>\n> > > Seems like supporting only constants is a good starting\n> > > point. The only thing that is likely confusing for users is that NUMERICs\n> > > (and potentially constants of other types) are unsupported. Wouldn't it be\n> > > fairly simple to support them via something like the following?\n> > >\n> > >    is_const(element) || (is_coercion(element) && is_const(element->child))\n> > It definitely makes sense to implement that, although I don't think it's\n> > going to be acceptable to do that via directly listing conditions an\n> > element has to satisfy. It probably has to be more flexible, sice we\n> > would like to extend it in the future. My plan is to address this in a\n> > follow-up patch, when the main mechanism is approved. Would you agree\n> > with this approach?\n>\n> I still think it's counterintuitive and I'm pretty sure people would even\n> report this as a bug because not knowing about the difference in internal\n> representation you would expect NUMERICs to work the same way other\n> constants work. If anything we would have to document it.\n>\n> Can't we do something pragmatic and have something like\n> IsMergableInElement() which for now only supports constants and later can be\n> extended? Or what exactly do you mean by \"more flexible\"?\n\nHere is how I see it (pls correct me if I'm wrong at any point). To\nsupport numerics as presented in the tests from this patch, we have to\ndeal with FuncExpr (the function converting a value into a numeric).\nHaving in mind only numerics, we would need to filter out any other\nFuncExpr (which already sounds dubious). Then we need to validate for\nexample that the function is immutable and has constant arguments,\nwhich is already implemented in evaluate_function and is a part of\neval_const_expression. There is nothing special about numerics at this\npoint, and this approach leads us back to eval_const_expression to a\ncertain degree. Do you see any other way?\n\nI'm thinking about Michael's idea in this context, and want to see if it\nwould be possible to make the mechanism more flexible using some node\nattributes. But I see it only as a follow-up step, not a prerequisite.\n\n\n",
"msg_date": "Sun, 26 Feb 2023 11:46:19 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "So I was seeing that this patch needs a rebase according to cfbot.\n\nHowever it looks like the review feedback you're looking for is more\nof design questions. What jumbling is best to include in the feature\nset and which is best to add in later patches. It sounds like you've\ngotten conflicting feedback from initial reviews.\n\nIt does sound like the patch is pretty mature and you're actively\nresponding to feedback so if you got more authoritative feedback it\nmight even be committable now. It's already been two years of being\nrolled forward so it would be a shame to keep rolling it forward.\n\nOr is there some fatal problem that you're trying to work around and\nstill haven't found the magic combination that convinces any\ncommitters this is something we want? In which case perhaps we set\nthis patch returned? I don't get that impression myself though.\n\n\n",
"msg_date": "Tue, 14 Mar 2023 14:14:17 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Tue, Mar 14, 2023 at 02:14:17PM -0400, Gregory Stark (as CFM) wrote:\n> So I was seeing that this patch needs a rebase according to cfbot.\n\nYeah, folks are getting up to speed with pgss improvements recently.\nThanks for letting me know.\n\n> However it looks like the review feedback you're looking for is more\n> of design questions. What jumbling is best to include in the feature\n> set and which is best to add in later patches. It sounds like you've\n> gotten conflicting feedback from initial reviews.\n>\n> It does sound like the patch is pretty mature and you're actively\n> responding to feedback so if you got more authoritative feedback it\n> might even be committable now. It's already been two years of being\n> rolled forward so it would be a shame to keep rolling it forward.\n\nYou got it about right. There is a balance to strike between an\nimplementation that would cover more useful cases, but has more\ndependencies (something like possibility of having multiple query id),\nand a more minimalistic implementation that would actually be acceptable\nin the way it is now. But I haven't heard back from David about it, so I\nassume everybody is fine with the minimalistic approach.\n\n> Or is there some fatal problem that you're trying to work around and\n> still haven't found the magic combination that convinces any\n> committers this is something we want? In which case perhaps we set\n> this patch returned? I don't get that impression myself though.\n\nNothing like this on my side, although I'm not good at conjuring\ncommitting powers of the nature.\n\n\n",
"msg_date": "Tue, 14 Mar 2023 20:04:32 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Tue, Mar 14, 2023 at 08:04:32PM +0100, Dmitry Dolgov wrote:\n> > On Tue, Mar 14, 2023 at 02:14:17PM -0400, Gregory Stark (as CFM) wrote:\n> > So I was seeing that this patch needs a rebase according to cfbot.\n>\n> Yeah, folks are getting up to speed in with pgss improvements recently.\n> Thanks for letting me know.\n\nFollowing recent refactoring of pg_stat_statements tests, I've created a\nnew one for merging functionality in the patch. This should solve any\nconflicts.",
"msg_date": "Sun, 19 Mar 2023 13:27:34 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 01:27:34PM +0100, Dmitry Dolgov wrote:\n> + If this parameter is on, two queries with an array will get the same\n> + query identifier if the only difference between them is the number of\n> + constants, both numbers is of the same order of magnitude and greater or\n> + equal 10 (so the order of magnitude is greather than 1, it is not worth\n> + the efforts otherwise).\n\nIMHO this adds way too much complexity to something that most users would\nexpect to be an on/off switch. If I understand Álvaro's suggestion [0]\ncorrectly, he's saying that in addition to allowing \"on\" and \"off\", it\nmight be worth allowing something like \"powers\" to yield roughly the\nbehavior described above. I don't think he's suggesting that this \"powers\"\nbehavior should be the only available option. Also, it seems\ncounterintuitive that queries with fewer than 10 constants are not merged.\n\nIn the interest of moving this patch forward, I would suggest making it a\nsimple on/off switch in 0002 and moving the \"powers\" functionality to a new\n0003 patch. I think separating out the core part of this feature might\nhelp reviewers. As you can see, I got distracted by the complicated\nthreshold logic and ended up focusing my first round of review there.\n\n[0] https://postgr.es/m/20230209172651.cfgrebpyyr72h7fv%40alvherre.pgsql\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 3 Jul 2023 21:46:11 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Mon, Jul 03, 2023 at 09:46:11PM -0700, Nathan Bossart wrote:\n\nThanks for reviewing.\n\n> On Sun, Mar 19, 2023 at 01:27:34PM +0100, Dmitry Dolgov wrote:\n> > + If this parameter is on, two queries with an array will get the same\n> > + query identifier if the only difference between them is the number of\n> > + constants, both numbers is of the same order of magnitude and greater or\n> > + equal 10 (so the order of magnitude is greather than 1, it is not worth\n> > + the efforts otherwise).\n>\n> IMHO this adds way too much complexity to something that most users would\n> expect to be an on/off switch.\n\nThis documentation is exclusively to be precise about how it works.\nUsers don't have to worry about all this, and pretty much turn it\non/off, as you've described. I agree though, I could probably write this\ntext a bit differently.\n\n> If I understand Álvaro's suggestion [0] correctly, he's saying that in\n> addition to allowing \"on\" and \"off\", it might be worth allowing\n> something like \"powers\" to yield roughly the behavior described above.\n> I don't think he's suggesting that this \"powers\" behavior should be\n> the only available option.\n\nIndependently of what Álvaro was suggesting, I find the \"powers\"\napproach more suitable, because it answers my own concerns about the\nprevious implementation. Having \"on\"/\"off\" values means we would have to\nscratch heads coming up with a one-size-fits-all default value, or to\nintroduce another option for the actual cut-off threshold. I would like\nto avoid both of those options, that's why I went with \"powers\" only.\n\n> Also, it seems counterintuitive that queries with fewer than 10\n> constants are not merged.\n\nWhy? What would be your intuition using this feature?\n\n> In the interest of moving this patch forward, I would suggest making it a\n> simple on/off switch in 0002 and moving the \"powers\" functionality to a new\n> 0003 patch. I think separating out the core part of this feature might\n> help reviewers. As you can see, I got distracted by the complicated\n> threshold logic and ended up focusing my first round of review there.\n\nI would disagree. As I've described above, to me \"powers\" seems to be a\nbetter fit, and the complicated logic is in fact reusing one already\nexisting function. Do those arguments sound convincing to you?\n\n\n",
"msg_date": "Tue, 4 Jul 2023 21:02:56 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nI've tested the patched on 17devel/master and it is my feeling - especially given the proliferation of the ORMs - that we need such thing in pgss. Thread already took almost 3 years, so it would be pity to waste so much development time of yours. Cfbot is green, and patch works very well for me. IMVHO commitfest status should be even set to ready-for-comitter.\r\n\r\nGiven the:\r\n\tSET query_id_const_merge = on;\r\n\tSELECT pg_stat_statements_reset();\r\n\tSELECT * FROM test WHERE a IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 11);\r\n\tSELECT * FROM test WHERE a IN (1, 2, 3);\r\n\tSELECT * FROM test WHERE a = ALL('{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}');\r\n\tSELECT * FROM test WHERE a = ANY (ARRAY[11,10,9,8,7,6,5,4,3,2,1]);\r\n\r\nThe patch results in:\r\n q | calls\r\n-----------------------------------------------------+-------\r\n SELECT * FROM test WHERE a = ALL($1) | 1\r\n SELECT pg_stat_statements_reset() | 1\r\n SELECT * FROM test WHERE a IN ($1, $2, $3) | 1\r\n SELECT * FROM test WHERE a IN (... 
[10-99 entries]) | 2\r\n\r\nOf course it's pity it doesn't collapse the below ones:\r\n\r\nSELECT * FROM (VALUES (1), (2), (3), (4), (5), (6), (7), (8), (9), (10), (11)) AS t (num);\r\nINSERT INTO dummy VALUES(1, 'text 1'),(2, 'text 2'),(3, 'text 3'),(4, 'text 3'),(5, 'text 3'),(6, 'text 3'),(7, 'text 3'),(8, 'text 3'),(9, 'text 3'),(10, 'text 3') ON CONFLICT (id) DO NOTHING;\r\nPREPARE s3(int[], int[], int[], int[], int[], int[], int[], int[], int[], int[], int[]) AS SELECT * FROM test WHERE \r\n\ta = ANY ($1::int[]) OR \r\n\ta = ANY ($2::int[]) OR\r\n[..]\r\n\ta = ANY ($11::int[]) ;\r\n\r\nbut given the convoluted thread history, it's understandable and as you stated - maybe in future.\r\n\r\nThere's one additional benefit to this patch: the pg_hint_plan extension seems to borrow pgss's generate_normalized_query(). So if that's changed in next major release, the pg_hint_plan hint table (transparent plan rewrite using table) will automatically benefit from generalization of the query string here (imagine fixing plans for ORM that generate N {1,1024} number of IN() array elements; today that would be N number of entries in the \"hint_plan.hints\" table).\n\nThe new status of this patch is: Needs review\n",
"msg_date": "Thu, 21 Sep 2023 12:10:09 +0000",
"msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "I've also tried the patch and I see the same results as Jakub, which\nmake sense to me. I did have issues getting it to apply, though: `git\nam` complains about a conflict, though patch itself was able to apply\nit.\n\n\n",
"msg_date": "Mon, 2 Oct 2023 23:24:06 -0700",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Hi, this is my first email to the pgsql hackers.\n\nI came across this email thread while looking at\nhttps://github.com/rails/rails/pull/49388 for Ruby on Rails, one of the\npopular web application frameworks, which replaces every query `in` clause\nwith `any` to reduce similar entries in `pg_stat_statements`.\n\nI want this to be solved on the PostgreSQL side, mainly because I want\nto avoid replacing\nevery in clause with any to reduce similar entries in pg_stat_statements.\n\nIt would be nice to have this patch reviewed.\n\nAs I'm not familiar with C and PostgreSQL source code, I'm not\nreviewing this patch myself,\nbut I applied this patch to my local PostgreSQL and the Active Record unit\ntests ran successfully.\n--\nYasuo Honda\n\n\n",
"msg_date": "Mon, 9 Oct 2023 10:46:43 +0900",
"msg_from": "Yasuo Honda <yasuo.honda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "On Tue, Jul 04, 2023 at 09:02:56PM +0200, Dmitry Dolgov wrote:\n> On Mon, Jul 03, 2023 at 09:46:11PM -0700, Nathan Bossart wrote:\n>> IMHO this adds way too much complexity to something that most users would\n>> expect to be an on/off switch.\n> \n> This documentation is exclusively to be precise about how does it work.\n> Users don't have to worry about all this, and pretty much turn it\n> on/off, as you've described. I agree though, I could probably write this\n> text a bit differently.\n\nFWIW, I am going to side with Nathan on this one, but not completely\neither. I was looking at the patch and it brings too much complexity\nfor a monitoring feature in this code path. In my experience, I've\nseen people complain about IN/ANY never trimmed down to a single\nparameter in pg_stat_statements but I still have to hear from somebody\noutside this thread that they'd like to reduce an IN clause depending\non the number of items, or something else.\n\n>> If I understand Álvaro's suggestion [0] correctly, he's saying that in\n>> addition to allowing \"on\" and \"off\", it might be worth allowing\n>> something like \"powers\" to yield roughly the behavior described above.\n>> I don't think he's suggesting that this \"powers\" behavior should be\n>> the only available option.\n> \n> Independently of what Álvaro was suggesting, I find the \"powers\"\n> approach more suitable, because it answers my own concerns about the\n> previous implementation. Having \"on\"/\"off\" values means we would have to\n> scratch heads coming up with a one-size-fit-all default value, or to\n> introduce another option for the actual cut-off threshold. I would like\n> to avoid both of those options, that's why I went with \"powers\" only.\n\nNow, it doesn't mean that this approach with the \"powers\" will never\nhappen, but based on the set of opinions I am gathering on this thread\nI would suggest to rework the patch as follows:\n- First implement an on/off switch that reduces the lists in IN and/or\nANY to one parameter. Simply.\n- Second, refactor the powers routine.\n- Third, extend the on/off switch, or just implement a threshold with\na second switch.\n\nWhen it comes to my opinion, I am not seeing any objections to the\nfeature as a whole, and I'm OK with the first step. I'm also OK to\nkeep the door open for more improvements in controlling how these\nIN/ANY lists show up, but there could be more than just the number of\nitems as parameter (say the query size, different behaviors depending\non the number of clauses in queries, subquery context or CTEs/WITH,\netc. just to name a few things coming in mind).\n--\nMichael",
"msg_date": "Fri, 13 Oct 2023 17:07:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Fri, Oct 13, 2023 at 05:07:00PM +0900, Michael Paquier wrote:\n> Now, it doesn't mean that this approach with the \"powers\" will never\n> happen, but based on the set of opinions I am gathering on this thread\n> I would suggest to rework the patch as follows:\n> - First implement an on/off switch that reduces the lists in IN and/or\n> ANY to one parameter. Simply.\n> - Second, refactor the powers routine.\n> - Third, extend the on/off switch, or just implement a threshold with\n> a second switch.\n\nWell, if it will help move this patch forward, why not. To clarify, I'm\ngoing to split the current implementation into three patches, one for\neach point you've mentioned.\n\n> When it comes to my opinion, I am not seeing any objections to the\n> feature as a whole, and I'm OK with the first step. I'm also OK to\n> keep the door open for more improvements in controlling how these\n> IN/ANY lists show up, but there could be more than just the number of\n> items as parameter (say the query size, different behaviors depending\n> on the number of clauses in queries, subquery context or CTEs/WITH,\n> etc. just to name a few things coming in mind).\n\nInteresting point, but now it's my turn to have troubles imagining the\ncase, where list representation could be controlled depending on\nsomething else than the number of elements in it. Do you have any\nspecific example in mind?\n\n\n",
"msg_date": "Fri, 13 Oct 2023 15:35:19 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "On Tue, Jul 04, 2023 at 09:02:56PM +0200, Dmitry Dolgov wrote:\n>> On Mon, Jul 03, 2023 at 09:46:11PM -0700, Nathan Bossart wrote:\n>> Also, it seems counterintuitive that queries with fewer than 10\n>> constants are not merged.\n> \n> Why? What would be your intuition using this feature?\n\nFor the \"powers\" setting, I would've expected queries with 0-9 constants to\nbe merged. Then 10-99, 100-999, 1000-9999, etc. I suppose there might be\nan argument for separating 0 from 1-9, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 13 Oct 2023 11:37:30 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Fri, Oct 13, 2023 at 03:35:19PM +0200, Dmitry Dolgov wrote:\n> > On Fri, Oct 13, 2023 at 05:07:00PM +0900, Michael Paquier wrote:\n> > Now, it doesn't mean that this approach with the \"powers\" will never\n> > happen, but based on the set of opinions I am gathering on this thread\n> > I would suggest to rework the patch as follows:\n> > - First implement an on/off switch that reduces the lists in IN and/or\n> > ANY to one parameter. Simply.\n> > - Second, refactor the powers routine.\n> > - Third, extend the on/off switch, or just implement a threshold with\n> > a second switch.\n>\n> Well, if it will help move this patch forward, why not. To clarify, I'm\n> going to split the current implementation into three patches, one for\n> each point you've mentioned.\n\nHere is what I had in mind. The first patch implements the basic notion of\nmerging, and I guess everyone agrees on its usefulness. The second and\nthird implement merging into power-of-10 groups, which I still find\nuseful as well. The last one adds a lower threshold for merging on top\nof that. My intentions are to get the first one in, ideally I would love\nto see the second and third applied as well.\n\n> > When it comes to my opinion, I am not seeing any objections to the\n> > feature as a whole, and I'm OK with the first step. I'm also OK to\n> > keep the door open for more improvements in controlling how these\n> > IN/ANY lists show up, but there could be more than just the number of\n> > items as parameter (say the query size, different behaviors depending\n> > on the number of clauses in queries, subquery context or CTEs/WITH,\n> > etc. just to name a few things coming in mind).\n>\n> Interesting point, but now it's my turn to have troubles imagining the\n> case, where list representation could be controlled depending on\n> something else than the number of elements in it. Do you have any\n> specific example in mind?\n\nIn the current patch version I didn't add anything yet to address the\nquestion of having more parameters to tune constants merging. The main\nobstacle as I see it is that the information for that has to be\ncollected when jumbling various query nodes. Anything except information\nabout the ArrayExpr itself would have to be acquired when jumbling some\nother parts of the query, not directly related to the ArrayExpr. It\nseems to me this interdependency between otherwise unrelated nodes\noutweighs the value it brings, and would require some more elaborate (and\nmore invasive for the purpose of this patch) mechanism to implement.",
"msg_date": "Tue, 17 Oct 2023 10:15:41 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 10:15:41AM +0200, Dmitry Dolgov wrote:\n> In the current patch version I didn't add anything yet to address the\n> question of having more parameters to tune constants merging. The main\n> obstacle as I see it is that the information for that has to be\n> collected when jumbling various query nodes. Anything except information\n> about the ArrayExpr itself would have to be acquired when jumbling some\n> other parts of the query, not directly related to the ArrayExpr. It\n> seems to me this interdependency between otherwise unrelated nodes\n> outweigh the value it brings, and would require some more elaborate (and\n> more invasive for the purpose of this patch) mechanism to implement.\n\n typedef struct ArrayExpr\n {\n+\tpg_node_attr(custom_query_jumble)\n+\n\nHmm. I am not sure that this is the best approach\nimplementation-wise. Wouldn't it be better to invent a new\npg_node_attr (these can include parameters as well!), say\nquery_jumble_merge or query_jumble_agg_location that aggregates all\nthe parameters of a list to be considered as a single element. To put\nit short, we could also apply the same property to other parts of a\nparsed tree, and not only an ArrayExpr's list.\n\n /* GUC parameters */\n extern PGDLLIMPORT int compute_query_id;\n-\n+extern PGDLLIMPORT bool query_id_const_merge;\n\nNot much a fan of this addition as well for an in-core GUC. I would\nsuggest pushing the GUC layer to pg_stat_statements, maintaining the\ncomputation method to use as a field of JumbleState as I suspect that\nthis is something we should not enforce system-wide, but at\nextension-level instead.\n\n+\t/*\n+\t * Indicates the constant represents the beginning or the end of a merged\n+\t * constants interval.\n+\t */\n+\tbool\t\tmerged;\n\nNot sure that this is the best thing to do either. 
Instead of this\nextra boolean flag, could it be simpler if we switch LocationLen so as\nwe track the start position and the end position of a constant in a\nquery string, so as we'd use one LocationLen for a whole set of Const\nnodes in an ArrayExpr? Perhaps this could just be a refactoring piece\nof its own?\n\n+\t/*\n+\t * If the first expression is a constant, verify if the following elements\n+\t * are constants as well. If yes, the list is eligible for merging, and the\n+\t * order of magnitude need to be calculated.\n+\t */\n+\tif (IsA(firstExpr, Const))\n+\t{\n+\t\tforeach(temp, elements)\n+\t\t\tif (!IsA(lfirst(temp), Const))\n+\t\t\t\treturn false;\n\nThis path should be benchmarked, IMO.\n--\nMichael",
"msg_date": "Thu, 26 Oct 2023 09:08:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Thu, Oct 26, 2023 at 09:08:42AM +0900, Michael Paquier wrote:\n> typedef struct ArrayExpr\n> {\n> +\tpg_node_attr(custom_query_jumble)\n> +\n>\n> Hmm. I am not sure that this is the best approach\n> implementation-wise. Wouldn't it be better to invent a new\n> pg_node_attr (these can include parameters as well!), say\n> query_jumble_merge or query_jumble_agg_location that aggregates all\n> the parameters of a list to be considered as a single element. To put\n> it short, we could also apply the same property to other parts of a\n> parsed tree, and not only an ArrayExpr's list.\n\nSounds like an interesting idea, something like:\n\n typedef struct ArrayExpr\n {\n ...\n List\t *elements pg_node_attr(query_jumble_merge);\n\nto replace simple JUMBLE_NODE(elements) with more elaborated logic.\n\n> /* GUC parameters */\n> extern PGDLLIMPORT int compute_query_id;\n> -\n> +extern PGDLLIMPORT bool query_id_const_merge;\n>\n> Not much a fan of this addition as well for an in-core GUC. I would\n> suggest pushing the GUC layer to pg_stat_statements, maintaining the\n> computation method to use as a field of JumbleState as I suspect that\n> this is something we should not enforce system-wide, but at\n> extension-level instead.\n\nI also do not particularly like an extra GUC here, but as far as I can\ntell to make it pg_stat_statements GUC only it has to be something\nsimilar to EnableQueryId (e.g. EnableQueryConstMerging), that will be\ncalled from pgss. Does this sound better?\n\n> +\t/*\n> +\t * Indicates the constant represents the beginning or the end of a merged\n> +\t * constants interval.\n> +\t */\n> +\tbool\t\tmerged;\n>\n> Not sure that this is the best thing to do either. Instead of this\n> extra boolean flag, could it be simpler if we switch LocationLen so as\n> we track the start position and the end position of a constant in a\n> query string, so as we'd use one LocationLen for a whole set of Const\n> nodes in an ArrayExpr? 
Perhaps this could just be a refactoring piece\n> of its own?\n\nSounds interesting as well, but it seems to me there is a catch. I'll\ntry to elaborate, bear with me:\n\n* if the start and the end positions of a constant mean the first and the\nlast character representing it, we need the textual length of the\nconstant in the query to be able to construct such a LocationLen. The\nlengths are calculated in pg_stat_statements later, not in JumbleQuery,\nand it uses the parser for that. Doing all of this in JumbleQuery doesn't\nsound reasonable to me.\n\n* if instead we talk about the start and the end positions in a\nset of constants, that would mean locations of the first and the last\nconstants in the set, and everything seems fine. But for such a\nLocationLen to represent a single constant (not a set of constants), it\nmeans that only the start position would be meaningful, the end position\nwill not be used.\n\nThe second approach is somewhat simpler than the merge flag,\nbut accepts some ugliness for a single constant. What do you think about\nthis?\n\n> +\t/*\n> +\t * If the first expression is a constant, verify if the following elements\n> +\t * are constants as well. If yes, the list is eligible for merging, and the\n> +\t * order of magnitude need to be calculated.\n> +\t */\n> +\tif (IsA(firstExpr, Const))\n> +\t{\n> +\t\tforeach(temp, elements)\n> +\t\t\tif (!IsA(lfirst(temp), Const))\n> +\t\t\t\treturn false;\n>\n> This path should be benchmarked, IMO.\n\nI can do some benchmarking here, but of course it's going to be slower\nthan the baseline. The main idea behind the patch is to trade this\noverhead for future benefits while processing pgss records,\nhoping that it's going to be worth it (and in those extreme cases I'm\ntrying to address it's definitely worth it).\n\n\n",
"msg_date": "Fri, 27 Oct 2023 17:02:44 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Fri, Oct 27, 2023 at 05:02:44PM +0200, Dmitry Dolgov wrote:\n> > On Thu, Oct 26, 2023 at 09:08:42AM +0900, Michael Paquier wrote:\n> > typedef struct ArrayExpr\n> > {\n> > +\tpg_node_attr(custom_query_jumble)\n> > +\n> >\n> > Hmm. I am not sure that this is the best approach\n> > implementation-wise. Wouldn't it be better to invent a new\n> > pg_node_attr (these can include parameters as well!), say\n> > query_jumble_merge or query_jumble_agg_location that aggregates all\n> > the parameters of a list to be considered as a single element. To put\n> > it short, we could also apply the same property to other parts of a\n> > parsed tree, and not only an ArrayExpr's list.\n>\n> Sounds like an interesting idea, something like:\n>\n> typedef struct ArrayExpr\n> {\n> ...\n> List\t *elements pg_node_attr(query_jumble_merge);\n>\n> to replace simple JUMBLE_NODE(elements) with more elaborated logic.\n>\n> > /* GUC parameters */\n> > extern PGDLLIMPORT int compute_query_id;\n> > -\n> > +extern PGDLLIMPORT bool query_id_const_merge;\n> >\n> > Not much a fan of this addition as well for an in-core GUC. I would\n> > suggest pushing the GUC layer to pg_stat_statements, maintaining the\n> > computation method to use as a field of JumbleState as I suspect that\n> > this is something we should not enforce system-wide, but at\n> > extension-level instead.\n>\n> I also do not particularly like an extra GUC here, but as far as I can\n> tell to make it pg_stat_statements GUC only it has to be something\n> similar to EnableQueryId (e.g. EnableQueryConstMerging), that will be\n> called from pgss. Does this sound better?\n\nFor clarity, here is what I had in mind for those two points.",
"msg_date": "Tue, 31 Oct 2023 10:03:07 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "On Tue, 31 Oct 2023 at 14:36, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Fri, Oct 27, 2023 at 05:02:44PM +0200, Dmitry Dolgov wrote:\n> > > On Thu, Oct 26, 2023 at 09:08:42AM +0900, Michael Paquier wrote:\n> > > typedef struct ArrayExpr\n> > > {\n> > > + pg_node_attr(custom_query_jumble)\n> > > +\n> > >\n> > > Hmm. I am not sure that this is the best approach\n> > > implementation-wise. Wouldn't it be better to invent a new\n> > > pg_node_attr (these can include parameters as well!), say\n> > > query_jumble_merge or query_jumble_agg_location that aggregates all\n> > > the parameters of a list to be considered as a single element. To put\n> > > it short, we could also apply the same property to other parts of a\n> > > parsed tree, and not only an ArrayExpr's list.\n> >\n> > Sounds like an interesting idea, something like:\n> >\n> > typedef struct ArrayExpr\n> > {\n> > ...\n> > List *elements pg_node_attr(query_jumble_merge);\n> >\n> > to replace simple JUMBLE_NODE(elements) with more elaborated logic.\n> >\n> > > /* GUC parameters */\n> > > extern PGDLLIMPORT int compute_query_id;\n> > > -\n> > > +extern PGDLLIMPORT bool query_id_const_merge;\n> > >\n> > > Not much a fan of this addition as well for an in-core GUC. I would\n> > > suggest pushing the GUC layer to pg_stat_statements, maintaining the\n> > > computation method to use as a field of JumbleState as I suspect that\n> > > this is something we should not enforce system-wide, but at\n> > > extension-level instead.\n> >\n> > I also do not particularly like an extra GUC here, but as far as I can\n> > tell to make it pg_stat_statements GUC only it has to be something\n> > similar to EnableQueryId (e.g. EnableQueryConstMerging), that will be\n> > called from pgss. 
Does this sound better?\n>\n> For clarity, here is what I had in mind for those two points.\n\nCFBot shows documentation build has failed at [1] with:\n[07:44:55.531] time make -s -j${BUILD_JOBS} -C doc\n[07:44:57.987] postgres.sgml:572: element xref: validity error : IDREF\nattribute linkend references an unknown ID\n\"guc-query-id-const-merge-threshold\"\n[07:44:58.179] make[2]: *** [Makefile:70: postgres-full.xml] Error 4\n[07:44:58.179] make[2]: *** Deleting file 'postgres-full.xml'\n[07:44:58.181] make[1]: *** [Makefile:8: all] Error 2\n[07:44:58.182] make: *** [Makefile:16: all] Error 2\n\n[1] - https://cirrus-ci.com/task/6688578378399744\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 6 Jan 2024 21:04:54 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Sat, Jan 06, 2024 at 09:04:54PM +0530, vignesh C wrote:\n>\n> CFBot shows documentation build has failed at [1] with:\n> [07:44:55.531] time make -s -j${BUILD_JOBS} -C doc\n> [07:44:57.987] postgres.sgml:572: element xref: validity error : IDREF\n> attribute linkend references an unknown ID\n> \"guc-query-id-const-merge-threshold\"\n> [07:44:58.179] make[2]: *** [Makefile:70: postgres-full.xml] Error 4\n> [07:44:58.179] make[2]: *** Deleting file 'postgres-full.xml'\n> [07:44:58.181] make[1]: *** [Makefile:8: all] Error 2\n> [07:44:58.182] make: *** [Makefile:16: all] Error 2\n>\n> [1] - https://cirrus-ci.com/task/6688578378399744\n\nIndeed, after moving the configuration option to pgss I forgot to update\nits reference in the docs. Thanks for noticing, will update soon.\n\n\n",
"msg_date": "Mon, 8 Jan 2024 17:10:20 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Mon, Jan 08, 2024 at 05:10:20PM +0100, Dmitry Dolgov wrote:\n> > On Sat, Jan 06, 2024 at 09:04:54PM +0530, vignesh C wrote:\n> >\n> > CFBot shows documentation build has failed at [1] with:\n> > [07:44:55.531] time make -s -j${BUILD_JOBS} -C doc\n> > [07:44:57.987] postgres.sgml:572: element xref: validity error : IDREF\n> > attribute linkend references an unknown ID\n> > \"guc-query-id-const-merge-threshold\"\n> > [07:44:58.179] make[2]: *** [Makefile:70: postgres-full.xml] Error 4\n> > [07:44:58.179] make[2]: *** Deleting file 'postgres-full.xml'\n> > [07:44:58.181] make[1]: *** [Makefile:8: all] Error 2\n> > [07:44:58.182] make: *** [Makefile:16: all] Error 2\n> >\n> > [1] - https://cirrus-ci.com/task/6688578378399744\n>\n> Indeed, after moving the configuration option to pgss I forgot to update\n> its reference in the docs. Thanks for noticing, will update soon.\n\nHere is the fixed version.",
"msg_date": "Sat, 13 Jan 2024 15:05:38 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere was a CFbot test failure last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/2837/\n[2] https://cirrus-ci.com/task/6688578378399744\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 17:33:26 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Mon, Jan 22, 2024 at 05:33:26PM +1100, Peter Smith wrote:\n> 2024-01 Commitfest.\n>\n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> there was a CFbot test failure last time it was run [2]. Please have a\n> look and post an updated version if necessary.\n>\n> ======\n> [1] https://commitfest.postgresql.org/46/2837/\n> [2] https://cirrus-ci.com/task/6688578378399744\n\nIt's the same failing pipeline Vignesh C was talking above. I've fixed\nthe issue in the latest patch version, but looks like it wasn't picked\nup yet (from what I understand, the latest build for this CF is 8 weeks\nold).\n\n\n",
"msg_date": "Mon, 22 Jan 2024 17:11:23 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Dmitry Dolgov <9erthalion6@gmail.com> writes:\n>> On Mon, Jan 22, 2024 at 05:33:26PM +1100, Peter Smith wrote:\n>> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n>> there was a CFbot test failure last time it was run [2]. Please have a\n>> look and post an updated version if necessary.\n>> \n>> ======\n>> [1] https://commitfest.postgresql.org/46/2837/\n>> [2] https://cirrus-ci.com/task/6688578378399744\n\n> It's the same failing pipeline Vignesh C was talking above. I've fixed\n> the issue in the latest patch version, but looks like it wasn't picked\n> up yet (from what I understand, the latest build for this CF is 8 weeks\n> old).\n\nPlease notice that at the moment, it's not being tested at all because\nof a patch-apply failure -- that's what the little triangular symbol\nmeans. The rest of the display concerns the test results from the\nlast successfully-applied patch version. (Perhaps that isn't a\nparticularly great UI design.)\n\nIf you click on the triangle you find out\n\n== Applying patches on top of PostgreSQL commit ID b0f0a9432d0b6f53634a96715f2666f6d4ea25a1 ===\n=== applying patch ./v17-0001-Prevent-jumbling-of-every-element-in-ArrayExpr.patch\npatching file contrib/pg_stat_statements/Makefile\nHunk #1 FAILED at 19.\n1 out of 1 hunk FAILED -- saving rejects to file contrib/pg_stat_statements/Makefile.rej\npatching file contrib/pg_stat_statements/expected/merging.out\npatching file contrib/pg_stat_statements/meson.build\n...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jan 2024 11:35:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Mon, Jan 22, 2024 at 11:35:22AM -0500, Tom Lane wrote:\n> Dmitry Dolgov <9erthalion6@gmail.com> writes:\n> >> On Mon, Jan 22, 2024 at 05:33:26PM +1100, Peter Smith wrote:\n> >> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> >> there was a CFbot test failure last time it was run [2]. Please have a\n> >> look and post an updated version if necessary.\n> >>\n> >> ======\n> >> [1] https://commitfest.postgresql.org/46/2837/\n> >> [2] https://cirrus-ci.com/task/6688578378399744\n>\n> > It's the same failing pipeline Vignesh C was talking above. I've fixed\n> > the issue in the latest patch version, but looks like it wasn't picked\n> > up yet (from what I understand, the latest build for this CF is 8 weeks\n> > old).\n>\n> Please notice that at the moment, it's not being tested at all because\n> of a patch-apply failure -- that's what the little triangular symbol\n> means. The rest of the display concerns the test results from the\n> last successfully-applied patch version. (Perhaps that isn't a\n> particularly great UI design.)\n>\n> If you click on the triangle you find out\n>\n> == Applying patches on top of PostgreSQL commit ID b0f0a9432d0b6f53634a96715f2666f6d4ea25a1 ===\n> === applying patch ./v17-0001-Prevent-jumbling-of-every-element-in-ArrayExpr.patch\n> patching file contrib/pg_stat_statements/Makefile\n> Hunk #1 FAILED at 19.\n> 1 out of 1 hunk FAILED -- saving rejects to file contrib/pg_stat_statements/Makefile.rej\n> patching file contrib/pg_stat_statements/expected/merging.out\n> patching file contrib/pg_stat_statements/meson.build\n\nOh, I see, thanks. Give me a moment, will fix this.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 18:07:27 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Mon, Jan 22, 2024 at 06:07:27PM +0100, Dmitry Dolgov wrote:\n> > Please notice that at the moment, it's not being tested at all because\n> > of a patch-apply failure -- that's what the little triangular symbol\n> > means. The rest of the display concerns the test results from the\n> > last successfully-applied patch version. (Perhaps that isn't a\n> > particularly great UI design.)\n> >\n> > If you click on the triangle you find out\n> >\n> > == Applying patches on top of PostgreSQL commit ID b0f0a9432d0b6f53634a96715f2666f6d4ea25a1 ===\n> > === applying patch ./v17-0001-Prevent-jumbling-of-every-element-in-ArrayExpr.patch\n> > patching file contrib/pg_stat_statements/Makefile\n> > Hunk #1 FAILED at 19.\n> > 1 out of 1 hunk FAILED -- saving rejects to file contrib/pg_stat_statements/Makefile.rej\n> > patching file contrib/pg_stat_statements/expected/merging.out\n> > patching file contrib/pg_stat_statements/meson.build\n>\n> Oh, I see, thanks. Give me a moment, will fix this.\n\nHere is it.",
"msg_date": "Mon, 22 Jan 2024 22:00:36 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Hi, I'm interested in this feature. It looks like these patches have\nsome conflicts.\n\nhttp://cfbot.cputube.org/patch_47_2837.log\n\nWould you rebase these patches?\n\nThanks,\n--\nYasuo Honda\n\nOn Sat, Mar 23, 2024 at 4:11 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > Oh, I see, thanks. Give me a moment, will fix this.\n>\n> Here is it.\n\n\n",
"msg_date": "Sat, 23 Mar 2024 16:13:44 +0900",
"msg_from": "Yasuo Honda <yasuo.honda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Sat, Mar 23, 2024 at 04:13:44PM +0900, Yasuo Honda wrote:\n> Hi, I'm interested in this feature. It looks like these patches have\n> some conflicts.\n>\n> http://cfbot.cputube.org/patch_47_2837.log\n>\n> Would you rebase these patches?\n\nSure, I can rebase, give me a moment. If you don't want to wait, there\nis a base commit in the patch, against which it should be applied\nwithout issues, 0eb23285a2.\n\n\n",
"msg_date": "Sat, 23 Mar 2024 19:20:26 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Thanks for the information. I can apply these 4 patches from\n0eb23285a2 . I tested this branch from Ruby on Rails and it gets some\nunexpected behavior from my point of view.\nSetting pg_stat_statements.query_id_const_merge_threshold = 5 does not\nnormalize sql queries whose number of in clauses exceeds 5.\n\nHere are test steps.\nhttps://gist.github.com/yahonda/825ffccc4dcb58aa60e12ce33d25cd45#expected-behavior\n\nIt would be appreciated if I can get my understanding correct.\n--\nYasuo Honda\n\nOn Sun, Mar 24, 2024 at 3:20 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> Sure, I can rebase, give me a moment. If you don't want to wait, there\n> is a base commit in the patch, against which it should be applied\n> without issues, 0eb23285a2.\n\n\n",
"msg_date": "Sun, 24 Mar 2024 23:36:38 +0900",
"msg_from": "Yasuo Honda <yasuo.honda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Sun, Mar 24, 2024 at 11:36:38PM +0900, Yasuo Honda wrote:\n> Thanks for the information. I can apply these 4 patches from\n> 0eb23285a2 . I tested this branch from Ruby on Rails and it gets some\n> unexpected behavior from my point of view.\n> Setting pg_stat_statements.query_id_const_merge_threshold = 5 does not\n> normalize sql queries whose number of in clauses exceeds 5.\n>\n> Here are test steps.\n> https://gist.github.com/yahonda/825ffccc4dcb58aa60e12ce33d25cd45#expected-behavior\n>\n> It would be appreciated if I can get my understanding correct.\n\n From what I understand out of the description this ruby script uses\nprepared statements, passing values as parameters, right? Unfortunately\nthe current version of the patch doesn't handle that, it works with\nconstants only [1]. The original incarnation of this feature was able to\nhandle that, but the implementation was considered to be not suitable --\nthus, to make some progress, it was left outside.\n\nThe plan is, if everything goes fine at some point, to do a follow-up\npatch to handle Params and the rest.\n\n[1]: https://www.postgresql.org/message-id/20230211104707.grsicemegr7d3mgh%40erthalion.local\n\n\n",
"msg_date": "Mon, 25 Mar 2024 17:35:27 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Yes. The script uses prepared statements because Ruby on Rails enables\nprepared statements by default for PostgreSQL databases.\n\nThen I tested this branch\nhttps://github.com/yahonda/postgres/tree/pg_stat_statements without\nusing prepared statements as follows and all of them do not normalize\nin clause values.\n\n- Disabled prepared statements by setting `prepared_statements: false`\nhttps://gist.github.com/yahonda/2c2d6ac7a955886a305750eecfd07c5e\n\n- Use ruby-pg\nhttps://gist.github.com/yahonda/2f0efb11ae888d8f6b27a07e0b833fdf\n\n- Use psql\nhttps://gist.github.com/yahonda/c830379b33d66a743aef159aa03d7e49\n\nI do not know why even if I use psql, the query column at\npg_stat_sql_statement shows it is like a prepared statement \"IN ($1,\n$2)\".\n\nOn Tue, Mar 26, 2024 at 1:35 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> From what I understand out of the description this ruby script uses\n> prepared statements, passing values as parameters, right? Unfortunately\n> the current version of the patch doesn't handle that, it works with\n> constants only [1]. The original incarnation of this feature was able to\n> handle that, but the implementation was considered to be not suitable --\n> thus, to make some progress, it was left outside.\n\n\n",
"msg_date": "Tue, 26 Mar 2024 16:21:46 +0900",
"msg_from": "Yasuo Honda <yasuo.honda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Tue, Mar 26, 2024 at 04:21:46PM +0900, Yasuo Honda wrote:\n> Yes. The script uses prepared statements because Ruby on Rails enables\n> prepared statements by default for PostgreSQL databases.\n>\n> Then I tested this branch\n> https://github.com/yahonda/postgres/tree/pg_stat_statements without\n> using prepared statements as follows and all of them do not normalize\n> in clause values.\n>\n> - Disabled prepared statements by setting `prepared_statements: false`\n> https://gist.github.com/yahonda/2c2d6ac7a955886a305750eecfd07c5e\n>\n> - Use ruby-pg\n> https://gist.github.com/yahonda/2f0efb11ae888d8f6b27a07e0b833fdf\n>\n> - Use psql\n> https://gist.github.com/yahonda/c830379b33d66a743aef159aa03d7e49\n>\n> I do not know why even if I use psql, the query column at\n> pg_stat_sql_statement shows it is like a prepared statement \"IN ($1,\n> $2)\".\n\nIt's a similar case: the column is defined as bigint, thus PostgreSQL\nhas to wrap every constant expression in a function expression that\nconverts its type to bigint. The current patch version doesn't try to\nreduce a FuncExpr into Const (event if the wrapped value is a Const),\nthus this array is not getting merged. If you replace bigint with an\nint, no type conversion would be required and merging logic will kick\nin.\n\nAgain, the original version of the patch was able to handle this case,\nbut it was stripped away to make the patch smaller in hope of moving\nforward. Anyway, thanks for reminding about how annoying the current\nhandling of constant arrays can look like in practice!\n\n\n",
"msg_date": "Tue, 26 Mar 2024 21:59:16 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Thanks for the useful info.\n\nRuby on Rails uses bigint as a default data type for the primary key\nand prepared statements have been enabled by default for PostgreSQL.\nI'm looking forward to these current patches being merged as a first\nstep and future versions of pg_stat_statements will support\nnormalizing bigint and prepared statements.\n\nOn Wed, Mar 27, 2024 at 6:00 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> It's a similar case: the column is defined as bigint, thus PostgreSQL\n> has to wrap every constant expression in a function expression that\n> converts its type to bigint. The current patch version doesn't try to\n> reduce a FuncExpr into Const (event if the wrapped value is a Const),\n> thus this array is not getting merged. If you replace bigint with an\n> int, no type conversion would be required and merging logic will kick\n> in.\n>\n> Again, the original version of the patch was able to handle this case,\n> but it was stripped away to make the patch smaller in hope of moving\n> forward. Anyway, thanks for reminding about how annoying the current\n> handling of constant arrays can look like in practice!\n\n\n",
"msg_date": "Wed, 27 Mar 2024 08:56:12 +0900",
"msg_from": "Yasuo Honda <yasuo.honda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Wed, Mar 27, 2024 at 08:56:12AM +0900, Yasuo Honda wrote:\n> Thanks for the useful info.\n>\n> Ruby on Rails uses bigint as a default data type for the primary key\n> and prepared statements have been enabled by default for PostgreSQL.\n> I'm looking forward to these current patches being merged as a first\n> step and future versions of pg_stat_statements will support\n> normalizing bigint and prepared statements.\n\nHere is the rebased version. In the meantime I'm going to experiment\nwith how to support more use cases in a way that will be more acceptable\nfor the community.",
"msg_date": "Thu, 4 Apr 2024 16:35:14 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Hi,\n\nIn <20240404143514.a26f7ttxrbdfc73a@erthalion.local>\n \"Re: pg_stat_statements and \"IN\" conditions\" on Thu, 4 Apr 2024 16:35:14 +0200,\n Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> Here is the rebased version.\n\nThanks. I'm not familiar with this code base but I've\nreviewed these patches because I'm interested in this\nfeature too.\n\n0001:\n\n> diff --git a/src/backend/nodes/queryjumblefuncs.c b/src/backend/nodes/queryjumblefuncs.c\n> index be823a7f8fa..e9473def361 100644\n> --- a/src/backend/nodes/queryjumblefuncs.c\n> +++ b/src/backend/nodes/queryjumblefuncs.c\n> \n> @@ -212,15 +233,67 @@ RecordConstLocation(JumbleState *jstate, int location)\n> ...\n> +static bool\n> +IsMergeableConstList(List *elements, Const **firstConst, Const **lastConst)\n> +{\n> +\tListCell *temp;\n> +\tNode\t *firstExpr = NULL;\n> +\n> +\tif (elements == NULL)\n\n\"elements == NIL\" will be better for List.\n\n> +static void\n> +_jumbleElements(JumbleState *jstate, List *elements)\n> +{\n> +\tConst *first, *last;\n> +\tif(IsMergeableConstList(elements, &first, &last))\n\nA space is missing between \"if\" and \"(\".\n\n> diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h\n> index aa727e722cc..cf4f900d4ed 100644\n> --- a/src/include/nodes/primnodes.h\n> +++ b/src/include/nodes/primnodes.h\n> @@ -1333,7 +1333,7 @@ typedef struct ArrayExpr\n> \t/* common type of array elements */\n> \tOid\t\t\telement_typeid pg_node_attr(query_jumble_ignore);\n> \t/* the array elements or sub-arrays */\n> -\tList\t *elements;\n> +\tList\t *elements pg_node_attr(query_jumble_merge);\n\nShould we also update the pg_node_attr() comment for\nquery_jumble_merge in nodes.h?\n\n\n0003:\n\n> diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c\n> index d7841b51cc9..00eec30feb1 100644\n> --- a/contrib/pg_stat_statements/pg_stat_statements.c\n> +++ b/contrib/pg_stat_statements/pg_stat_statements.c\n> 
...\n> @@ -2883,12 +2886,22 @@ generate_normalized_query(JumbleState *jstate, const char *query,\n> \t\t/* The firsts merged constant */\n> \t\telse if (!skip)\n> \t\t{\n> +\t\t\tstatic const uint32 powers_of_ten[] = {\n> +\t\t\t\t1, 10, 100,\n> +\t\t\t\t1000, 10000, 100000,\n> +\t\t\t\t1000000, 10000000, 100000000,\n> +\t\t\t\t1000000000\n> +\t\t\t};\n> +\t\t\tint lower_merged = powers_of_ten[magnitude - 1];\n> +\t\t\tint upper_merged = powers_of_ten[magnitude];\n\nHow about adding a reverse function of decimalLength32() to\nnumutils.h and use it here?\n\n> -\t\t\tn_quer_loc += sprintf(norm_query + n_quer_loc, \"...\");\n> +\t\t\tn_quer_loc += sprintf(norm_query + n_quer_loc, \"... [%d-%d entries]\",\n> +\t\t\t\t\t\t\t\t lower_merged, upper_merged - 1);\n\nDo we still have enough space in norm_query for this change?\nIt seems that norm_query expects up to 10 additional bytes\nper jstate->clocations[i].\n\n\nIt seems that we can merge 0001, 0003 and 0004 to one patch.\n(Sorry. I haven't read all discussions yet. If we already\ndiscuss this, sorry for this noise.)\n\n\nThanks,\n-- \nkou\n\n\n",
"msg_date": "Mon, 15 Apr 2024 18:09:29 +0900 (JST)",
"msg_from": "Sutou Kouhei <kou@clear-code.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Mon, Apr 15, 2024 at 06:09:29PM +0900, Sutou Kouhei wrote:\n>\n> Thanks. I'm not familiar with this code base but I've\n> reviewed these patches because I'm interested in this\n> feature too.\n\nThanks for the review! The commentaries for the first patch make sense\nto me, will apply.\n\n> 0003:\n>\n> > diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c\n> > index d7841b51cc9..00eec30feb1 100644\n> > --- a/contrib/pg_stat_statements/pg_stat_statements.c\n> > +++ b/contrib/pg_stat_statements/pg_stat_statements.c\n> > ...\n> > @@ -2883,12 +2886,22 @@ generate_normalized_query(JumbleState *jstate, const char *query,\n> > \t\t/* The firsts merged constant */\n> > \t\telse if (!skip)\n> > \t\t{\n> > +\t\t\tstatic const uint32 powers_of_ten[] = {\n> > +\t\t\t\t1, 10, 100,\n> > +\t\t\t\t1000, 10000, 100000,\n> > +\t\t\t\t1000000, 10000000, 100000000,\n> > +\t\t\t\t1000000000\n> > +\t\t\t};\n> > +\t\t\tint lower_merged = powers_of_ten[magnitude - 1];\n> > +\t\t\tint upper_merged = powers_of_ten[magnitude];\n>\n> How about adding a reverse function of decimalLength32() to\n> numutils.h and use it here?\n\nI was pondering that at some point, but eventually decided to keep it\nthis way, because:\n\n* the use case is quite specific, I can't imagine it being used anywhere\n else\n\n* it would not be strictly reverse, as the transformation itself is not\n reversible in the strict sense\n\n> > -\t\t\tn_quer_loc += sprintf(norm_query + n_quer_loc, \"...\");\n> > +\t\t\tn_quer_loc += sprintf(norm_query + n_quer_loc, \"... [%d-%d entries]\",\n> > +\t\t\t\t\t\t\t\t lower_merged, upper_merged - 1);\n>\n> Do we still have enough space in norm_query for this change?\n> It seems that norm_query expects up to 10 additional bytes\n> per jstate->clocations[i].\n\nAs far as I understand there should be enough space, because we're going\nto replace at least 10 constants with one merge record. 
But it's a good\npoint; this should be called out in the commentary explaining why 10\nadditional bytes are added.\n\n> It seems that we can merge 0001, 0003 and 0004 to one patch.\n> (Sorry. I haven't read all discussions yet. If we already\n> discuss this, sorry for this noise.)\n\nThere is a certain disagreement about which portion of this feature\nmakes sense to go with first, thus I think keeping all options open is a\ngood idea. In the end a committer can squash the patches if needed.\n\n\n",
"msg_date": "Tue, 23 Apr 2024 10:18:15 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Tue, Apr 23, 2024 at 10:18:15AM +0200, Dmitry Dolgov wrote:\n> > On Mon, Apr 15, 2024 at 06:09:29PM +0900, Sutou Kouhei wrote:\n> >\n> > Thanks. I'm not familiar with this code base but I've\n> > reviewed these patches because I'm interested in this\n> > feature too.\n>\n> Thanks for the review! The commentaries for the first patch make sense\n> to me, will apply.\n\nHere is the new version. It turned out you were right about memory for\nthe normalized query: if the number of constants gets close to INT_MAX,\nthere is indeed not enough memory allocated. I've added a fix for this on top\nof the applied changes, and also improved readability of the\npg_stat_statements part.",
"msg_date": "Sun, 12 May 2024 13:38:55 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "Hello\n\nThis feature will improve my monitoring. Even in patch 0001. I think there are many other people in the thread who think this is useful. So maybe we should move it forward? Any complaints about the overall design? I see in the discussion it was mentioned that it would be good to measure performance difference.\n\nPS: patch cannot be applied at this time, it needs another rebase.\n\nregards, Sergei\n\n\n",
"msg_date": "Sun, 11 Aug 2024 19:54:05 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Sun, Aug 11, 2024 at 07:54:05PM +0300, Sergei Kornilov wrote:\n>\n> This feature will improve my monitoring. Even in patch 0001. I think there are many other people in the thread who think this is useful. So maybe we should move it forward? Any complaints about the overall design? I see in the discussion it was mentioned that it would be good to measure performance difference.\n>\n> PS: patch cannot be applied at this time, it needs another rebase.\n\nYeah, it seems like most people are fine with the first patch and the\nsimplest approach. I'm going to post a rebased version and a short\nthread summary soon.\n\n\n",
"msg_date": "Sun, 11 Aug 2024 21:34:55 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
},
{
"msg_contents": "> On Sun, Aug 11, 2024 at 09:34:55PM GMT, Dmitry Dolgov wrote:\n> > On Sun, Aug 11, 2024 at 07:54:05PM +0300, Sergei Kornilov wrote:\n> >\n> > This feature will improve my monitoring. Even in patch 0001. I think there are many other people in the thread who think this is useful. So maybe we should move it forward? Any complaints about the overall design? I see in the discussion it was mentioned that it would be good to measure performance difference.\n> >\n> > PS: patch cannot be applied at this time, it needs another rebase.\n>\n> Yeah, it seems like most people are fine with the first patch and the\n> simplest approach. I'm going to post a rebased version and a short\n> thread summary soon.\n\nOk, here is the rebased version. If anyone would like to review them, below is\na short summary of the thread. Currently the patch series contains 4 changes:\n\n* 0001-Prevent-jumbling-of-every-element-in-ArrayExpr.patch\n\n Implements the simplest way to handle constant arrays: if the array contains\n only constants it will be reduced. This is the basis; if I read it correctly,\n Nathan and Michael expressed that they're mostly fine with this one.\n\n Michael seems to be skeptical about the \"merged\" flag in the LocationLen\n struct, but from what I see the proposed alternative has problems as well.\n There was also a note that the loop over constants has to be benchmarked, but\n it's not entirely clear to me in which dimensions to benchmark (i.e. what\n are the expectations). For both I'm waiting on a reply to my questions.\n\n* 0002-Reusable-decimalLength-functions.patch\n\n A small refactoring to make the already existing \"powers\" functionality reusable\n for following patches.\n\n* 0003-Merge-constants-in-ArrayExpr-into-groups.patch\n\n Makes handling of constant arrays smarter by taking into account the number of\n elements in the array. This way records are merged into groups by powers of 10,\n i.e. 
arrays with length 55 will land in a group 10-99, with length 555 in a\n group 100-999, etc. This was proposed by Alvaro, and personally I like this\n approach, because it remediates the issue of one-size-fits-all for the static\n threshold. But there are opinions that this introduces too much complexity.\n\n* 0004-Introduce-query_id_const_merge_threshold.patch\n\n Fine tuning for the previous patch: only arrays with length over a\n certain threshold are reduced.\n\nOn top of that Yasuo Honda and Jakub Wartak have provided a couple of practical\nexamples where handling of constant arrays has to be improved. David Geier\npointed out some examples that might be confusing as well. All those are\ndefinitely worth addressing, but out of scope of this patch for now.",
"msg_date": "Tue, 13 Aug 2024 22:06:13 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_statements and \"IN\" conditions"
}
] |
[
{
"msg_contents": "I am finally trying to move from python2.7 to python 3.x\n\nfor both 3.7 and 3.8 I have (attached log):\n\n2020-08-12 18:35:47.433 CEST [10418] pg_regress/python3/ltree_plpython ERROR: incompatible library \"/pub/devel/postgresql/prova38/postgresql-12.4-1.x86_64/build/tmp_install/usr/lib/postgresql/plpython3.dll\": missing magic block\n2020-08-12 18:35:47.433 CEST [10418] pg_regress/python3/ltree_plpython HINT: Extension libraries are required to use the PG_MODULE_MAGIC macro.\n2020-08-12 18:35:47.433 CEST [10418] pg_regress/python3/ltree_plpython STATEMENT: CREATE EXTENSION ltree_plpython3u CASCADE;\n2020-08-12 18:35:47.433 CEST [10418] pg_regress/python3/ltree_plpython ERROR: language \"plpython3u\" does not exist\n2020-08-12 18:35:47.433 CEST [10418] pg_regress/python3/ltree_plpython HINT: Use CREATE EXTENSION to load the language into the database.\n2020-08-12 18:35:47.433 CEST [10418] pg_regress/python3/ltree_plpython STATEMENT: CREATE FUNCTION test1(val ltree) RETURNS int\n LANGUAGE plpython3u\n TRANSFORM FOR TYPE ltree\n AS $$\n plpy.info(repr(val))\n return len(val)\n $$;\n\n\nOnly the python tests fail\n\n $ grep FAIL postgresql-12.4-1-check.log\ntest python3/hstore_plpython ... FAILED 423 ms\ntest python3/jsonb_plpython ... FAILED 172 ms\ntest python3/ltree_plpython ... FAILED 163 ms\n\nnever had problem with python2.7\n\nSuggestion ?\n\nRegards\nMarco",
"msg_date": "Wed, 12 Aug 2020 20:31:06 +0200",
"msg_from": "Marco Atzeri <marco.atzeri@gmail.com>",
"msg_from_op": true,
"msg_subject": "ltree_plpython failure test on Cygwin for 12.4 test"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen developing patches I find it fairly painful that I cannot re-indent\npatches with pgindent without also seeing a lot of indentation changes\nin unmodified parts of files. It is easy enough ([1]) to only re-indent\nfiles that I have modified, but there's often a lot of independent\nindentation changes in the files that I did modify.\n\nI e.g. just re-indented patch 0001 of my GetSnapshotData() series and\nmost of the hunks were entirely unrelated. Despite the development\nwindow for 14 having only relatively recently opened. Based on my\nexperience it tends to get worse over time.\n\n\nIs there any reason we don't just automatically run pgindent regularly?\nLike once a week? And also update typedefs.list automatically, while\nwe're at it?\n\nCurrently the yearly pgindent runs are somewhat painful for patches that\ndidn't make it into the release, but if we were to reindent on a more\nregular basis, that should be much less the case. It'd also help newer\ndevelopers who we sometimes tell to use pgindent - which doesn't really\nwork.\n\nGreetings,\n\nAndres Freund\n\n[1] ./src/tools/pgindent/pgindent $(git diff-tree --no-commit-id --name-only -r upstream/master..HEAD|grep -v src/test|grep -v README|grep -v typedefs.list)\n\n\n",
"msg_date": "Wed, 12 Aug 2020 15:34:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2020-Aug-12, Andres Freund wrote:\n\n> Is there any reason we don't just automatically run pgindent regularly?\n> Like once a week? And also update typedefs.list automatically, while\n> we're at it?\n\nSeconded.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 12 Aug 2020 18:53:25 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi Andres,\n\nOn Wed, Aug 12, 2020 at 3:34 PM Andres Freund wrote:\n>\n> Hi,\n>\n> When developing patches I find it fairly painful that I cannot re-indent\n> patches with pgindent without also seeing a lot of indentation changes\n> in unmodified parts of files. It is easy enough ([1]) to only re-indent\n> files that I have modified, but there's often a lot of independent\n> indentation changes in the files that I did modify.\n>\n> I e.g. just re-indented patch 0001 of my GetSnapshotData() series and\n> most of the hunks were entirely unrelated. Despite the development\n> window for 14 having only relatively recently opened. Based on my\n> experience it tends to get worse over time.\n\nHow bad was it right after branching 13? I wonder if we have any\nempirical measure of badness over time -- assuming there was a point in\nthe recent past where everything was good, and the bad just crept in.\n\n>\n>\n> Is there any reason we don't just automatically run pgindent regularly?\n> Like once a week? And also update typedefs.list automatically, while\n> we're at it?\n\nYou know what's better than weekly? Every check-in. I for one would love\nit if we can just format the entire codebase, and ensure that new\ncheck-ins are also formatted. We _do_ need some form of continuous\nintegration to catch us when we have fallen short (again, once HEAD\nreaches a \"known good\" state, it's conceivably cheap to keep it in the\ngood state).\n\nCheers,\nJesse\n\n\n",
"msg_date": "Wed, 12 Aug 2020 16:08:50 -0700",
"msg_from": "Jesse Zhang <sbjesse@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-12 16:08:50 -0700, Jesse Zhang wrote:\n> On Wed, Aug 12, 2020 at 3:34 PM Andres Freund wrote:\n> >\n> > Hi,\n> >\n> > When developing patches I find it fairly painful that I cannot re-indent\n> > patches with pgindent without also seeing a lot of indentation changes\n> > in unmodified parts of files. It is easy enough ([1]) to only re-indent\n> > files that I have modified, but there's often a lot of independent\n> > indentation changes in the files that I did modify.\n> >\n> > I e.g. just re-indented patch 0001 of my GetSnapshotData() series and\n> > most of the hunks were entirely unrelated. Despite the development\n> > window for 14 having only relatively recently opened. Based on my\n> > experience it tends to get worse over time.\n> \n> How bad was it right after branching 13? I wonder if we have any\n> empirical measure of badness over time -- assuming there was a point in\n> the recent past where everything was good, and the bad just crept in.\n\nWell, just after branching it was perfect, because pgindent is\ncustomarily run just before branching. After that it incrementally\ngets worse.\n\n\n> > Is there any reason we don't just automatically run pgindent regularly?\n> > Like once a week? And also update typedefs.list automatically, while\n> > we're at it?\n> \n> You know what's better than weekly? Every check-in. I for one would love\n> it if we can just format the entire codebase, and ensure that new\n> check-ins are also formatted. We _do_ need some form of continuous\n> integration to catch us when we have fallen short (again, once HEAD\n> reaches a \"known good\" state, it's conceivably cheap to keep it in the\n> good state).\n\nUnfortunately that is, with the current tooling, not entirely trivial to\ndo completely. The way we generate the list of known typedefs\nunfortunately depends on the platform a build is run on. 
Therefore the\nbuildfarm collects a number of generated typedef lists from\ndifferent platforms, and then we use that combined list to run pgindent.\n\nWe surely can improve further, but I think having any automation around\nthis already would be a huge step.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 12 Aug 2020 16:23:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Jesse Zhang <sbjesse@gmail.com> writes:\n> On Wed, Aug 12, 2020 at 3:34 PM Andres Freund wrote:\n>> Is there any reason we don't just automatically run pgindent regularly?\n>> Like once a week? And also update typedefs.list automatically, while\n>> we're at it?\n\n> You know what's better than weekly? Every check-in.\n\nI'm not in favor of unsupervised pgindent runs, really. It can do a lot\nof damage to code that was written without thinking about it --- in\nparticular, it'll make a hash of comment blocks that were manually\nformatted and not protected with dashes.\n\nIf the workflow is commit first and re-indent later, then we can always\nrevert the pgindent commit and clean things up manually; but an auto\nre-indent during commit wouldn't provide that history.\n\nI do like the idea of more frequent, smaller pgindent runs instead of\ndoing just one giant run per year. With the giant runs it's necessary\nto invest a fair amount of time eyeballing all the changes; if we did it\nevery couple weeks then the pain would be a lot less.\n\nAnother idea would be to have a bot that runs pgindent *without*\ncommitting the results, and emails the people who have made commits\ninto files that changed, saying \"if you don't like these diffs then\nyou'd better clean up your commit before it happens for real\". With\nsome warning like that, it might be okay to do automatic reindenting\na little bit later. Plus, nagging committers who habitually commit\nimproperly-indented code might persuade them to clean up their acts ;-)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Aug 2020 19:47:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Unfortunately that is, with the current tooling, not entirely trivial to\n> do so completely. The way we generate the list of known typedefs\n> unfortunately depends on the platform a build is run on. Therefore the\n> buildfarm collects a number of the generated list of typedefs from\n> different platforms, and then we use that combined list to run pgindent.\n\nYeah, it's hard to see how to avoid that given that the set of typedefs\nprovided by system headers is not fixed. (Windows vs not Windows is the\nworst case of course, but Unixen aren't all alike either.)\n\nAnother gotcha that we have to keep our eyes on is that sometimes the\nprocess finds spurious names that we don't want to treat as typedefs\nbecause it results in messing up too much code. There's a reject list\nin pgindent that takes care of those, but it has to be maintained\nmanually. So I'm not sure how that could work in conjunction with\nunsupervised reindents --- by the time you notice a problem, the git\nhistory will already be cluttered with bogus reindentations.\n\nMaybe the secret is to not allow automated adoption of new typedefs.list\nentries, but to require somebody to add entries to that file by hand,\neven if they're basing it on the buildfarm results. (This would\nencourage the habit some people have adopted of updating typedefs.list\nalong with commits that add typedefs. I've never done that, but would\nbe willing to change if there's good motivation to.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Aug 2020 19:57:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Wed, Aug 12, 2020 at 06:53:25PM -0400, Alvaro Herrera wrote:\n> On 2020-Aug-12, Andres Freund wrote:\n>> Is there any reason we don't just automatically run pgindent regularly?\n>> Like once a week? And also update typedefs.list automatically, while\n>> we're at it?\n> \n> Seconded.\n\nThirded.\n--\nMichael",
"msg_date": "Thu, 13 Aug 2020 10:29:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Wed, Aug 12, 2020 at 07:47:01PM -0400, Tom Lane wrote:\n> Jesse Zhang <sbjesse@gmail.com> writes:\n> > On Wed, Aug 12, 2020 at 3:34 PM Andres Freund wrote:\n> >> Is there any reason we don't just automatically run pgindent regularly?\n> >> Like once a week? And also update typedefs.list automatically, while\n> >> we're at it?\n> \n> > You know what's better than weekly? Every check-in.\n> \n> I'm not in favor of unsupervised pgindent runs, really. It can do a lot\n> of damage to code that was written without thinking about it --- in\n> particular, it'll make a hash of comment blocks that were manually\n> formatted and not protected with dashes.\n> \n> If the workflow is commit first and re-indent later, then we can always\n> revert the pgindent commit and clean things up manually; but an auto\n> re-indent during commit wouldn't provide that history.\n\nThere are competing implementations of assuring pgindent-cleanliness at every\ncheck-in:\n\n1. After each push, an automated followup commit appears, restoring\n pgindent-cleanliness.\n2. \"git push\" results in a commit that melds pgindent changes into what the\n committer tried to push.\n3. \"git push\" fails, for the master branch, if the pushed commit disrupts\n pgindent-cleanliness.\n\nWere you thinking of (2)? (1) doesn't have the lack-of-history problem, but\nit does have the unexpected-damage problem, and it makes gitweb noisier. (3)\nhas neither problem, and I'd prefer it over (1), (2), or $SUBJECT.\n\nRegarding typedefs.list, I would use the in-tree one, like you discussed here:\n\nOn Wed, Aug 12, 2020 at 07:57:29PM -0400, Tom Lane wrote:\n> Maybe the secret is to not allow automated adoption of new typedefs.list\n> entries, but to require somebody to add entries to that file by hand,\n> even if they're basing it on the buildfarm results. (This would\n> encourage the habit some people have adopted of updating typedefs.list\n> along with commits that add typedefs. 
I've never done that, but would\n> be willing to change if there's good motivation to.)\n\n\n",
"msg_date": "Wed, 12 Aug 2020 20:53:08 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Wed, Aug 12, 2020 at 07:47:01PM -0400, Tom Lane wrote:\n>> If the workflow is commit first and re-indent later, then we can always\n>> revert the pgindent commit and clean things up manually; but an auto\n>> re-indent during commit wouldn't provide that history.\n\n> There are competing implementations of assuring pgindent-cleanliness at every\n> check-in:\n\n> 1. After each push, an automated followup commit appears, restoring\n> pgindent-cleanliness.\n> 2. \"git push\" results in a commit that melds pgindent changes into what the\n> committer tried to push.\n> 3. \"git push\" fails, for the master branch, if the pushed commit disrupts\n> pgindent-cleanliness.\n\n> Were you thinking of (2)?\n\nI was objecting to (2). (1) would perhaps work. (3) could be pretty\ndarn annoying, especially if it blocks a time-critical security patch.\n\n> Regarding typedefs.list, I would use the in-tree one, like you discussed here:\n\nYeah, after thinking about that more, it seems like automated\ntypedefs.list updates would be far riskier than automated reindent\nbased on the existing typedefs.list. The latter could at least be\nexpected not to change code unrelated to the immediate commit.\ntypedefs.list updates need some amount of adult supervision.\n\n(I'd still vote for nag-mail to the committer whose work got reindented,\nin case the bot made things a lot worse.)\n\nI hadn't thought about the angle of HEAD versus back-branch patches,\nbut that does seem like something to ponder. The back branches don't\nhave the same pgindent rules necessarily, plus the patch versions\nmight be different in more than just whitespace. 
My own habit when\nback-patching has been to indent the HEAD patch per-current-rules and\nthen preserve that layout as much as possible in the back branches,\nbut I doubt we could get a tool to do that with any reliability.\n\nOf course, there's also the possibility of forcibly reindenting\nall the active back branches to current rules. But I think we've\nrejected that idea already, because it would cause so much pain\nfor forks that are following a back branch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Aug 2020 00:08:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 12:08:36AM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Wed, Aug 12, 2020 at 07:47:01PM -0400, Tom Lane wrote:\n> >> If the workflow is commit first and re-indent later, then we can always\n> >> revert the pgindent commit and clean things up manually; but an auto\n> >> re-indent during commit wouldn't provide that history.\n> \n> > There are competing implementations of assuring pgindent-cleanliness at every\n> > check-in:\n> \n> > 1. After each push, an automated followup commit appears, restoring\n> > pgindent-cleanliness.\n> > 2. \"git push\" results in a commit that melds pgindent changes into what the\n> > committer tried to push.\n> > 3. \"git push\" fails, for the master branch, if the pushed commit disrupts\n> > pgindent-cleanliness.\n> \n> > Were you thinking of (2)?\n> \n> I was objecting to (2). (1) would perhaps work. (3) could be pretty\n> darn annoying,\n\nRight. I think of three use cases here:\n\na) I'm a committer who wants to push clean code. I want (3).\nb) I'm a committer who wants to ignore pgindent. I get some email complaints\n under (1), which I ignore. Under (3), I'm forced to become (a).\nc) I'm reading the history. I want (3).\n\n> I hadn't thought about the angle of HEAD versus back-branch patches,\n> but that does seem like something to ponder. The back branches don't\n> have the same pgindent rules necessarily, plus the patch versions\n> might be different in more than just whitespace. My own habit when\n> back-patching has been to indent the HEAD patch per-current-rules and\n> then preserve that layout as much as possible in the back branches,\n> but I doubt we could get a tool to do that with any reliability.\n\nSimilar habit here. Another advantage of master-only is a guarantee against\ndisrupting time-critical patches. (It would be ugly to push back branches and\nsort out the master push later, but it doesn't obstruct the mission.)\n\n\n",
"msg_date": "Wed, 12 Aug 2020 21:26:55 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> ... Another advantage of master-only is a guarantee against\n> disrupting time-critical patches. (It would be ugly to push back branches and\n> sort out the master push later, but it doesn't obstruct the mission.)\n\nHm, doesn't it? I had the idea that \"git push\" is atomic --- either all\nthe per-branch commits succeed, or they all fail. I might be wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Aug 2020 01:14:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 01:14:33AM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > ... Another advantage of master-only is a guarantee against\n> > disrupting time-critical patches. (It would be ugly to push back branches and\n> > sort out the master push later, but it doesn't obstruct the mission.)\n> \n> Hm, doesn't it? I had the idea that \"git push\" is atomic --- either all\n> the per-branch commits succeed, or they all fail. I might be wrong.\n\nAtomicity is good. I just meant that you could issue something like \"git push\norigin $(cd .git/refs/heads && ls REL*)\" to defer the complaint about master.\n\n\n",
"msg_date": "Wed, 12 Aug 2020 22:21:37 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Wed, Aug 12, 2020 at 10:14 PM Tom Lane wrote:\n>\n> Noah Misch <noah@leadboat.com> writes:\n> > ... Another advantage of master-only is a guarantee against\n> > disrupting time-critical patches. (It would be ugly to push back branches and\n> > sort out the master push later, but it doesn't obstruct the mission.)\n>\n> Hm, doesn't it? I had the idea that \"git push\" is atomic --- either all\n> the per-branch commits succeed, or they all fail. I might be wrong.\n\nAs of Git 2.28, atomic pushes are not turned on by default. That means\n\"git push my-remote foo bar\" _can_ result in partial success. I'm that\nparanoid who does \"git push --atomic my-remote foo bar fizz\".\n\nCheers,\nJesse\n\n\n",
"msg_date": "Wed, 12 Aug 2020 22:24:41 -0700",
"msg_from": "Jesse Zhang <sbjesse@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 6:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Noah Misch <noah@leadboat.com> writes:\n> > On Wed, Aug 12, 2020 at 07:47:01PM -0400, Tom Lane wrote:\n> >> If the workflow is commit first and re-indent later, then we can always\n> >> revert the pgindent commit and clean things up manually; but an auto\n> >> re-indent during commit wouldn't provide that history.\n>\n> > There are competing implementations of assuring pgindent-cleanliness at\n> every\n> > check-in:\n>\n> > 1. After each push, an automated followup commit appears, restoring\n> > pgindent-cleanliness.\n> > 2. \"git push\" results in a commit that melds pgindent changes into what\n> the\n> > committer tried to push.\n> > 3. \"git push\" fails, for the master branch, if the pushed commit disrupts\n> > pgindent-cleanliness.\n>\n\nThere's another option here as well, one that is a bit \"softer\": use a\npre-commit hook.\n\nThat is, it's a hook that runs on the committer's machine prior to the\ncommit. This hook can then yell \"hey, you need to run pgindent before\ncommitting this\", but it gives the committer the ability to do --no-verify\nand commit anyway (thus won't block things that are urgent).\n\nSince it allows a simple bypass, and very much relies on the committer to\nremember to install the hook in their local repository, this is not a\nguarantee in any way. So it might need to be done together with something\nelse in the background, like a daily job, but it might make that\nbackground work smaller, with fewer changes.\n\nThis obviously only works in the case where we can rely on the committers\nto remember to install such a hook, but given the few committers that we do\nhave, I think we can certainly get that up to an \"acceptable rate\" fairly\neasily. FWIW, this is similar to what we do in the pgweb, pgeu and a few\nother repositories, to ensure python styleguides are followed.\n\n\n> Were you thinking of (2)?\n>\n> I was objecting to (2). 
(1) would perhaps work. (3) could be pretty\n> darn annoying, especially if it blocks a time-critical security patch.\n>\n\nFWIW, I agree that (2) seems like a really bad option, in that suddenly a\ncommitter has committed something they were not aware of.\n\n\n\n>\n> > Regarding typedefs.list, I would use the in-tree one, like you discussed\n> here:\n>\n> Yeah, after thinking about that more, it seems like automated\n> typedefs.list updates would be far riskier than automated reindent\n> based on the existing typedefs.list. The latter could at least be\n> expected not to change code unrelated to the immediate commit.\n> typedefs.list updates need some amount of adult supervision.\n>\n> (I'd still vote for nag-mail to the committer whose work got reindented,\n> in case the bot made things a lot worse.)\n>\n\nYeah, I'm definitely not a big fan of automated commits, regardless of\nwhether it's just indent or both indent+typedef. It's happened at least once, and I\nthink more than once, that we've had to basically hard reset the upstream\nrepository and clean things up after automated commits have gone bonkers\n(hi, Bruce!). Having an automated system do the whole flow of\nchange->commit->push definitely invites this type of problem.\n\nThere are many solutions that do such things but that do it in the \"github\nworkflow\" way, which is they do change -> commit -> create pull request,\nand then somebody eyeballs that pull request and approves it. 
We don't have\nPRs, but we could either have a script that simply sends out a patch to a\nmailinglist, or we could have a script that maintains a separate branch\nwhich is auto-pgindented, and then have a committer squash-merge that\nbranch after having reviewed it.\n\nMaybe something like that in combination with a pre-commit hook per above.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Thu, 13 Aug 2020 09:47:48 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
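The "softer" pre-commit hook proposed above could look roughly like this. It is only an illustrative sketch: the warn-only policy, the assumption that pgindent is on PATH, and the way pgindent is invoked on a temp copy are all guesses, not an existing hook.

```shell
#!/bin/sh
# Illustrative .git/hooks/pre-commit sketch: warn (never block) when
# staged C files are not pgindent-clean.  Bypass entirely with
# "git commit --no-verify".  Assumes pgindent is on PATH; the exact
# invocation below is for demonstration only.

# Keep only the files pgindent cares about.
c_files() {
    grep -E '\.(c|h)$'
}

warn_unindented() {
    command -v git >/dev/null 2>&1 || return 0
    git rev-parse --is-inside-work-tree >/dev/null 2>&1 || return 0
    command -v pgindent >/dev/null 2>&1 || return 0

    for f in $(git diff --cached --name-only --diff-filter=ACM | c_files); do
        tmp=$(mktemp) || return 0
        # Indent a copy (keeping the .c/.h suffix) and compare.
        cp "$f" "$tmp.c" && pgindent "$tmp.c" >/dev/null 2>&1
        cmp -s "$f" "$tmp.c" ||
            echo "pre-commit: $f is not pgindent-clean" >&2
        rm -f "$tmp" "$tmp.c"
    done
    return 0    # warn only; the commit always proceeds
}

warn_unindented
```

Because the hook only warns and always exits 0, it never blocks a time-critical push, which matches the objection to option (3) above.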
{
"msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> There's another option here as well, that is a bit \"softer\", is to use a\n> pre-commit hook.\n\nYeah, +1 on a pre-commit idea to help address this. I certainly agree\nwith Andres that it's quite annoying to deal with commits coming in that\naren't indented properly but are in some file that I'm working on.\n\n> This obviously only works in the case where we can rely on the committers\n> to remember to install such a hook, but given the few committers that we do\n> have, I think we can certainly get that up to an \"acceptable rate\" fairly\n> easily. FWIW, this is similar to what we do in the pgweb, pgeu and a few\n> other repositories, to ensure python styleguides are followed.\n\nYeah, no guarantee, but definitely seems like it'd be a good\nimprovement.\n\n> > I was objecting to (2). (1) would perhaps work. (3) could be pretty\n> > darn annoying, especially if it blocks a time-critical security patch.\n> \n> FWIW, I agree that (2) seems like a really bad option. In that suddenly a\n> committer has committed something they were not aware of.\n\nYeah, I dislike (2) a lot too.\n\n> Yeah, I'm definitely not a big fan of automated commits, regardless of if\n> it's just indent or both indent+typedef. It's happened at least once, and I\n> think more than once, that we've had to basically hard reset the upstream\n> repository and clean things up after automated commits have gone bonkers\n> (hi, Bruce!). Having an automated system do the whole flow of\n> change->commit->push definitely invites this type of problem.\n\nAgreed, automated commits seems terribly risky.\n\n> There are many solutions that do such things but that do it in the \"github\n> workflow\" way, which is they do change -> commit -> create pull request,\n> and then somebody eyeballs that pullrequest and approves it. 
We don't have\n> PRs, but we could either have a script that simply sends out a patch to a\n> mailinglist, or we could have a script that maintains a separate branch\n> which is auto-pgindented, and then have a committer squash-merge that\n> branch after having reviewed it.\n> \n> Maybe something like that in combination with a pre-commit hook per above.\n\nSo, in our world, wouldn't this translate to 'make cfbot complain'?\n\nI'm definitely a fan of the idea of having cfbot flag these and then we\nmaybe get to a point where it's not the committers dealing with fixing\npatches that weren't pgindent'd properly, it's the actual patch\nsubmitters being nagged about it...\n\nThanks,\n\nStephen",
"msg_date": "Thu, 13 Aug 2020 12:30:44 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> So, in our world, wouldn't this translate to 'make cfbot complain'?\n\n> I'm definitely a fan of the idea of having cfbot flag these and then we\n> maybe get to a point where it's not the committers dealing with fixing\n> patches that weren't pgindent'd properly, it's the actual patch\n> submitters being nagged about it...\n\nMeh. Asking all submitters to install pgindent is a bit of a burden.\nMoreover, sometimes it's better to provide a patch that deliberately\nhasn't reindented existing code, for ease of review (say, when you're\nadding if() { ... } around some big hunk of code). I think getting\ncommitters to do this as part of commit is a better workflow.\n\n(Admittedly, since I've been doing that for a long time, I don't\nfind it to be a burden. I suppose some committers do.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Aug 2020 12:50:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 6:30 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n>\n> * Magnus Hagander (magnus@hagander.net) wrote:\n>\n> > There are many solutions that do such things but that do it in the\n> \"github\n> > workflow\" way, which is they do change -> commit -> create pull request,\n> > and then somebody eyeballs that pullrequest and approves it. We don't\n> have\n> > PRs, but we could either have a script that simply sends out a patch to a\n> > mailinglist, or we could have a script that maintains a separate branch\n> > which is auto-pgindented, and then have a committer squash-merge that\n> > branch after having reviewed it.\n> >\n> > Maybe something like that in combination with a pre-commit hook per\n> above.\n>\n> So, in our world, wouldn't this translate to 'make cfbot complain'?\n>\n> I'm definitely a fan of the idea of having cfbot flag these and then we\n> maybe get to a point where it's not the committers dealing with fixing\n> patches that weren't pgindent'd properly, it's the actual patch\n> submitters being nagged about it...\n>\n\nWell, that's one thing, but what I was thinking of here was more of an\nautomated branch maintained for the committers, not for the individual\npatch owners.\n\nThat is:\n1. Whenever a patch is pushed on master on the main repo a process is kicked\noff (or maybe wait 5 minutes to coalesce multiple patches if there are)\n2. This process checks out master, and runs pgindent on it\n3. When done, this gets committed to a new branch (or just overwrites an\nexisting branch of course, we don't need to maintain history here) like\n\"master-indented\". This branch can be in a different repo, but one that\nstarts out as a clone of the main one\n4. A committer (any committer) can then on a regular basis examine the\ndifferences by fetch + diff. 
If they're happy with it, cherry pick it in.\nIf not, figure out what needs to be done to adjust it.\n\nStep 4 can be done at whatever interval we prefer, and we can have\nsomething to nag us if head has been \"off-indent\" for too long.\n\nThis would be the backup for things that aren't indented during patch\ncommit, not other things.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Thu, 13 Aug 2020 18:58:50 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
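Sketched as a cron job, steps 1-3 of the workflow above might look like the following. The working directory, the "master-indented" branch, the "indented" remote, and the commit-message format are all invented for illustration; no such script exists in the tree.

```shell
#!/bin/sh
# Illustrative cron job for the "master-indented" idea: reindent the
# tip of master and force-push it to a throwaway branch a committer
# can review and cherry-pick (step 4).  All names here are made up.

WORKDIR=${WORKDIR:-/srv/pgindent-work}    # a clone of the main repo

commit_msg() {
    # One-line message recording which commit of master was reindented.
    echo "Automatic pgindent of master @ $1"
}

run_cycle() {
    [ -d "$WORKDIR/.git" ] || return 0    # clone not set up; nothing to do
    cd "$WORKDIR" || return 1
    git fetch origin || return 1
    # Step 2: start from current master and reindent the whole tree
    # (the README says to run pgindent from the top of the tree).
    git checkout -B master-indented origin/master || return 1
    src/tools/pgindent/pgindent >/dev/null 2>&1
    # Step 3: overwrite the branch; no history needs to be kept.
    git commit -am "$(commit_msg "$(git rev-parse --short origin/master)")" \
        || return 0                       # tree was already indent-clean
    git push -f indented master-indented
}

run_cycle
```

A committer would then do step 4 with something like `git fetch indented` followed by `git diff origin/master indented/master-indented` and cherry-pick if happy.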
{
"msg_contents": "On Wed, Aug 12, 2020 at 7:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm not in favor of unsupervised pgindent runs, really. It can do a lot\n> of damage to code that was written without thinking about it --- in\n> particular, it'll make a hash of comment blocks that were manually\n> formatted and not protected with dashes.\n\nNo committer should be committing code without thinking about\npgindent. If some are, they need to up their game.\n\nI am not sure whether weekly or after-every-commit pgindent runs is a\ngood idea, but I think we should try to do it once a month or so. It's\ntoo annoying otherwise. I could go either way on the question of\nautomation.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 13 Aug 2020 15:36:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 12:30 PM Stephen Frost <sfrost@snowman.net> wrote:\n> So, in our world, wouldn't this translate to 'make cfbot complain'?\n\nThis seems like it would be useful, but we'd have to figure out what\nto do about typedefs.list. If the patch is indented with the current\none (which is auto-generated by the entire build farm, remember) it's\nlikely to mess up a patch that's otherwise properly formatted. We'd\neither need to insist that people include updates to typedefs.list in\nthe patch, or else have the cfbot take a stab at doing those updates\nitself.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 13 Aug 2020 15:39:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I am not sure whether weekly or after-every-commit pgindent runs is a\n> good idea, but I think we should try to do it once a month or so. It's\n> too annoying otherwise. I could go either way on the question of\n> automation.\n\nThe traditional reason for not doing pgindent too often has been that\nit'd cause more work for people who have to rebase their patches over\npgindent's results. If we want to do it more often, then in order to\nrespond to that concern, I think we need to do it really often ---\nnot necessarily quite continuously, but often enough that pgindent\nis only changing recently-committed code. In this way, it'd be likely\nthat anyone with a patch touching that same code would only need to\nrebase once not twice. The approaches involving an automated run\ngive a guarantee of that, otherwise we don't have a guarantee; but\nas long as it's not many days delay I think it wouldn't be bad.\n\nIntervals on the order of a month seem likely to be the worst of\nboth worlds from this standpoint --- too long for people to wait\nbefore rebasing their patch, yet short enough that they'd have\nto do so repeatedly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Aug 2020 15:47:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 3:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The traditional reason for not doing pgindent too often has been that\n> it'd cause more work for people who have to rebase their patches over\n> pgindent's results. If we want to do it more often, then in order to\n> respond to that concern, I think we need to do it really often ---\n> not necessarily quite continuously, but often enough that pgindent\n> is only changing recently-committed code. In this way, it'd be likely\n> that anyone with a patch touching that same code would only need to\n> rebase once not twice. The approaches involving an automated run\n> give a guarantee of that, otherwise we don't have a guarantee; but\n> as long as it's not many days delay I think it wouldn't be bad.\n>\n> Intervals on the order of a month seem likely to be the worst of\n> both worlds from this standpoint --- too long for people to wait\n> before rebasing their patch, yet short enough that they'd have\n> to do so repeatedly.\n\nYeah, I get the point. It depends somewhat on how often you think\npeople will rebase. The main thing against more frequent pgindent runs\nis that it clutters the history. If done manually, it's also a lot of\nwork.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 13 Aug 2020 15:50:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-13 12:50:16 -0400, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > So, in our world, wouldn't this translate to 'make cfbot complain'?\n> \n> > I'm definitely a fan of the idea of having cfbot flag these and then we\n> > maybe get to a point where it's not the committers dealing with fixing\n> > patches that weren't pgindent'd properly, it's the actual patch\n> > submitters being nagged about it...\n> \n> Meh. Asking all submitters to install pgindent is a bit of a burden.\n\n+1. We could improve on that by slurping it into src/tools though. If\nthere were a 'make patchprep' target, it'd be a lot more realistic.\n\n\nBut even so, it'd probably further increase the rate of needing to\nconstantly rebase lingering patches if cfbot considered indentation\nissues failures. E.g. because of typedefs.list updates etc. So I'm\nagainst that, even if we had a patchprep target that didn't need\nexternal tools.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 13 Aug 2020 13:06:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Aug 13, 2020 at 12:30 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > So, in our world, wouldn't this translate to 'make cfbot complain'?\n> \n> This seems like it would be useful, but we'd have to figure out what\n> to do about typedefs.list. If the patch is indented with the current\n> one (which is auto-generated by the entire build farm, remember) it's\n> likely to mess up a patch that's otherwise properly formatted. We'd\n> either need to insist that people include updates to typedefs.list in\n> the patch, or else have the cfbot take a stab at doing those updates\n> itself.\n\nFor my 2c, anyway, I like the idea of having folks update the typedefs\nthemselves when they've got a patch that needs a new typedef to be\nindented correctly. Having cfbot try to do that seems unlikely to work\nwell.\n\nI also didn't mean to imply that we'd push back and ask for a rebase due\nto indentation changes, but at the same time, I question if it's really\nthat realistic a concern- either whomever posted the patch ran pgindent\non it, or they didn't, and I doubt cfbot's check of that would change\nwithout there being a conflict between the patch and something that got\ncommitted anyway.\n\nI also disagree that it's that much of a burden to ask people who are\nalready hacking on PG to install pgindent.\n\nAll that said, seems that others feel differently and while I still\nthink it's a pretty reasonable idea to have cfbot check, if no one\nagrees with me, that's fine too. Having the pre-commit hook would help\nwith the downstream issue of pgindent pain from unrelated incorrect\nindentation, so at least dealing with the patch author not properly\nindenting to start with would be just on the bits the patch is already\nmodifying, which is a lot better.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 13 Aug 2020 16:16:28 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2020-Aug-13, Stephen Frost wrote:\n\n> For my 2c, anyway, I like the idea of having folks update the typedefs\n> themselves when they've got a patch that needs a new typedef to be\n> indented correctly.\n\nWell, let's for starters encourage committers to update typedefs.\nPersonally I've stayed away from it for each commit just because we've\nhistorically not done it, but I can easily change that. Plus, it cannot\nharm.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 14 Aug 2020 16:22:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2020-Aug-13, Magnus Hagander wrote:\n\n> That is:\n> 1. Whenever a patch is pushed on master on the main repo a process kicked\n> off (or maybe wait 5 minutes to coalesce multiple patches if there are)\n> 2. This process checks out master, and runs pgindent on it\n> 3. When done, this gets committed to a new branch (or just overwrites an\n> existing branch of course, we don't need to maintain history here) like\n> \"master-indented\". This branch can be in a different repo, but one that\n> starts out as a clone of the main one\n> 4. A committer (any committer) can then on regular basis examine the\n> differences by fetch + diff. If they're happy with it, cherry pick it in.\n> If not, figure out what needs to be done to adjust it.\n\nSounds good -- for branch master.\n\nYesterday I tried to indent some patch across all branches, only to\ndiscover that I'm lacking the pg_bsd_indent necessary for the older\nones. I already have two, but apparently I'd need *four* different\nversions with current branches (1.3, 2.0, 2.1, 2.1.1)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 14 Aug 2020 16:26:51 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Aug 15, 2020 at 1:57 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Aug-13, Magnus Hagander wrote:\n>\n> > That is:\n> > 1. Whenever a patch is pushed on master on the main repo a process kicked\n> > off (or maybe wait 5 minutes to coalesce multiple patches if there are)\n> > 2. This process checks out master, and runs pgindent on it\n> > 3. When done, this gets committed to a new branch (or just overwrites an\n> > existing branch of course, we don't need to maintain history here) like\n> > \"master-indented\". This branch can be in a different repo, but one that\n> > starts out as a clone of the main one\n> > 4. A committer (any committer) can then on regular basis examine the\n> > differences by fetch + diff. If they're happy with it, cherry pick it in.\n> > If not, figure out what needs to be done to adjust it.\n>\n> Sounds good -- for branch master.\n>\n> Yesterday I tried to indent some patch across all branches, only to\n> discover that I'm lacking the pg_bsd_indent necessary for the older\n> ones. I already have two, but apparently I'd need *four* different\n> versions with current branches (1.3, 2.0, 2.1, 2.1.1)\n>\n\nFWIW, for back-branches, I just do similar to what Tom said above [1]\n(\"My own habit when back-patching has been to indent the HEAD patch\nper-current-rules and then preserve that layout as much as possible in\nthe back branches\"). If we want we can maintain all the required\nversions of pg_bsd_indent but as of now, I am not doing so and thought\nthat following some approximation rule (do it for HEAD and try my best\nto maintain the layout for back-branches) is good enough.\n\n[1] - https://www.postgresql.org/message-id/397020.1597291716%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 15 Aug 2020 09:04:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2020-08-13 00:34, Andres Freund wrote:\n> I e.g. just re-indented patch 0001 of my GetSnapshotData() series and\n> most of the hunks were entirely unrelated. Despite the development\n> window for 14 having only relatively recently opened. Based on my\n> experience it tends to get worse over time.\n\nDo we have a sense of why poorly-indented code gets committed? I think \nsome of the indentation rules are hard to follow manually. (pgperltidy \nis worse.)\n\nAlso, since pgindent gets run eventually anyway, it's not really that \nimportant to get the indentation right the first time. I suspect the \ngoal of most authors and committers is to write readable code rather \nthan to divine the exact pgindent output.\n\nI think as a start, we could just issue a guideline that all committed \ncode should follow pgindent. That has never really been a guideline, so \nit's not surprising that it's not followed.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 15 Aug 2020 13:47:41 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-15 13:47:41 +0200, Peter Eisentraut wrote:\n> On 2020-08-13 00:34, Andres Freund wrote:\n> > I e.g. just re-indented patch 0001 of my GetSnapshotData() series and\n> > most of the hunks were entirely unrelated. Despite the development\n> > window for 14 having only relatively recently opened. Based on my\n> > experience it tends to get worse over time.\n> \n> Do we have a sense of why poorly-indented code gets committed? I think some\n> of the indentation rules are hard to follow manually. (pgperltidy is\n> worse.)\n> \n> Also, since pgindent gets run eventually anyway, it's not really that\n> important to get the indentation right the first time. I suspect the goal\n> of most authors and committers is to write readable code rather than to\n> divine the exact pgindent output.\n\nOne thing is that some here are actively against manually adding entries\nto typedefs.list. Which then means that code gets oddly indented if you\nuse pgindent. I personally try to make the predictable updates to\ntypedefs.list, which then at least allows halfway sensibly indenting my\nown changes.\n\n\n> I think as a start, we could just issue a guidelines that all committed code\n> should follow pgindent. That has never really been a guideline, so it's not\n> surprising that it's not followed.\n\nWithout a properly indented baseline that's hard to do, because it'll\ncause damage all over. So I don't think we easily can start just there -\nwe'd first need to re-indent everything.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 15 Aug 2020 09:59:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> One thing is that some here are actively against manually adding entries\n> to typedefs.list.\n\nI've been of the opinion that it's pointless to do so under the current\nregime where (a) only a few people do that and (b) we only officially\nre-indent once a year anyway. When I want to manually run pgindent,\nI always pull down a fresh typedefs.list from the buildfarm, which is\nreasonably up-to-date regardless of what people added or didn't add,\nand then add any new typedefs from my current patch to that out-of-tree\ncopy.\n\nNow, if we switch to a regime where we're trying to keep the tree in\nmore nearly correctly-indented shape all the time, it would make sense\nto revisit that. I'm not saying that it's unreasonable to want to have\nthe in-tree typedefs.list track reality more closely --- only that doing\nso in a half-baked way won't be very helpful.\n\n>> I think as a start, we could just issue a guidelines that all committed code\n>> should follow pgindent. That has never really been a guideline, so it's not\n>> surprising that it's not followed.\n\n> Without a properly indented baseline that's hard to do, because it'll\n> cause damage all over. So I don't think we easily can start just there -\n> we'd first need to re-indent everything.\n\nWell, we can certainly do a tree-wide re-indent anytime we're ready.\nI doubt it would be very painful right now, with so little new work\nsince the last run.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 15 Aug 2020 13:44:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
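The out-of-tree routine described above (fetch a fresh buildfarm typedefs list, fold in the patch's new typedefs, then indent) amounts to something like the following sketch. The buildfarm URL is the one the pgindent README points at (worth verifying against src/tools/pgindent/README), and the file names and "MyPatchState" typedef are made-up examples.

```shell
#!/bin/sh
# Illustrative version of the manual routine: merge the buildfarm's
# combined typedefs list with the typedefs a patch adds, then indent
# only the files the patch touches.  Names here are hypothetical.

merge_typedefs() {
    # pgindent wants one typedef name per line; duplicates are
    # harmless, but a sorted unique list is easier to eyeball.
    sort -u "$@"
}

fetch_and_indent() {
    command -v curl >/dev/null 2>&1 || return 0
    curl -fsSL -o /tmp/bf.typedefs \
        'https://buildfarm.postgresql.org/cgi-bin/typedefs.pl' || return 0
    printf 'MyPatchState\n' > /tmp/patch.typedefs   # hypothetical typedef
    merge_typedefs /tmp/bf.typedefs /tmp/patch.typedefs > /tmp/typedefs.list
    [ -x src/tools/pgindent/pgindent ] || return 0
    src/tools/pgindent/pgindent --typedefs=/tmp/typedefs.list \
        src/backend/foo/bar.c                       # files the patch touches
}

fetch_and_indent
```

Keeping the merged list around also makes it trivial to include the new typedefs in the commit, as discussed below in the thread.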
{
"msg_contents": "On Sat, Aug 15, 2020 at 01:44:34PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Without a properly indented baseline that's hard to do, because it'll\n> > cause damage all over. So I don't think we easily can start just there -\n> > we'd first need to re-indent everything.\n> \n> Well, we can certainly do a tree-wide re-indent anytime we're ready.\n> I doubt it would be very painful right now, with so little new work\n> since the last run.\n\nUh, I thought Tom was saying we need to reindent all branches, which\nwould be a big change for those tracking forks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 17 Aug 2020 13:54:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-17 13:54:15 -0400, Bruce Momjian wrote:\n> On Sat, Aug 15, 2020 at 01:44:34PM -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > Without a properly indented baseline that's hard to do, because it'll\n> > > cause damage all over. So I don't think we easily can start just there -\n> > > we'd first need to re-indent everything.\n> > \n> > Well, we can certainly do a tree-wide re-indent anytime we're ready.\n> > I doubt it would be very painful right now, with so little new work\n> > since the last run.\n> \n> Uh, I thought Tom was saying we need to reindent all branches, which\n> would be a big change for those tracking forks.\n\nI don't think he said that? He said *if* we were to reindent all\nbranches, forks would probably have an issue. We're already reindenting\nHEAD on a regular basis (just very infrequent), so it can't be a\nfundamental issue.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Aug 2020 11:05:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Sat, Aug 15, 2020 at 01:44:34PM -0400, Tom Lane wrote:\n>> Well, we can certainly do a tree-wide re-indent anytime we're ready.\n>> I doubt it would be very painful right now, with so little new work\n>> since the last run.\n\n> Uh, I thought Tom was saying we need to reindent all branches, which\n> would be a big change for those tracking forks.\n\nNo, I'm not for reindenting the back branches in general. There was\nsome discussion about whether to try to indent back-patched patches\nto meet the conventions of the specific back branch, but I doubt\nthat that's worth troubling over. I think we really only care\nabout whether HEAD is fully consistently indented.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Aug 2020 14:05:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-15 13:44:34 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > One thing is that some here are actively against manually adding entries\n> > to typedefs.list.\n> \n> I've been of the opinion that it's pointless to do so under the current\n> regime where (a) only a few people do that and (b) we only officially\n> re-indent once a year anyway. When I want to manually run pgindent,\n> I always pull down a fresh typedefs.list from the buildfarm, which is\n> reasonably up-to-date regardless of what people added or didn't add,\n> and then add any new typedefs from my current patch to that out-of-tree\n> copy.\n\nWell, properly indenting new code still is worthwhile. And once you go\nthrough the trouble of adding the typedefs locally, I don't really see\nthe reason not to also include them in the commit. Sure it'll not help\nmuch with tree-wide re-indents, but with individual files it can still\nmake life less painful.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Aug 2020 11:11:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
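The per-patch typedefs workflow Tom describes in the quoted text above can be sketched in shell. The buildfarm URL, file names, and the `--typedefs` switch shown in the comments are assumptions for illustration; only the merge step is actually implemented:

```shell
#!/bin/sh
# Rough sketch of the quoted workflow: grab a fresh typedefs list, add the
# current patch's own typedefs, and indent with the combined list.

# merge_typedefs: combine the downloaded list with locally added typedef
# names, deduplicated and sorted in the C locale (the shape pgindent expects).
merge_typedefs() {
    LC_ALL=C sort -u "$1" "$2"
}

# Typical use (the download is not run here; URL is an assumption):
#   curl -o /tmp/typedefs.list \
#       https://buildfarm.postgresql.org/cgi-bin/typedefs.pl
#   merge_typedefs /tmp/typedefs.list my-patch.typedefs > /tmp/combined.list
#   src/tools/pgindent/pgindent --typedefs=/tmp/combined.list changed_file.c
```

The out-of-tree combined list matches the copy Tom says he keeps, so nothing in the tree's typedefs.list needs to change just to indent one patch.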
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-08-15 13:44:34 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> One thing is that some here are actively against manually adding entries\n>>> to typedefs.list.\n\n>> I've been of the opinion that it's pointless to do so under the current\n>> regime where (a) only a few people do that and (b) we only officially\n>> re-indent once a year anyway. When I want to manually run pgindent,\n>> I always pull down a fresh typedefs.list from the buildfarm, which is\n>> reasonably up-to-date regardless of what people added or didn't add,\n>> and then add any new typedefs from my current patch to that out-of-tree\n>> copy.\n\n> Well, properly indenting new code still is worthwhile. And once you go\n> through the trouble of adding the typedefs locally, I don't really see\n> the reason not to also include them in the commit.\n\nYeah, I'm quite religious about making sure my commits have been through\npgindent already (mainly to reduce subsequent \"git blame\" noise).\nHowever, relying on manual updates to the in-tree typedefs.list only\nworks if every committer is equally religious about it. They're not;\nelse we'd not be having this discussion. The workflow I describe above\nis not dependent on how careful everybody else is, which is why I\nprefer it.\n\nI think that the main new idea that's come out of this thread so far\nis that very frequent reindents would be as good, maybe better, as\nonce-a-year reindents in terms of the amount of rebasing pain they\ncause for not-yet-committed patches. If we can fix it so that any\nmis-indented commit is quickly corrected, then rebasing would only be\nneeded in places that were changed anyway. 
So it seems like that\nwould be OK as a new project policy if we can make it happen.\n\nHowever, I don't see any way to make it happen like that without\nmore-or-less automated reindents and typedefs.list updates,\nand that remains a bit scary.\n\nI did just have an idea that seems to ameliorate the scariness\na good bit: allow the reindent bot to auto-commit its results\nonly if the only lines it's changing are ones that were touched\nby commits since the last auto-reindent. Otherwise punt and ask\na human to review the results. Not sure how hard that is to\nimplement, though.\n\nAnother good safety check would be to not proceed unless the latest\ntypedefs list available from the buildfarm is newer than the last\ncommit --- then we won't mess up recent commits whose typedefs are\nnot in the list yet.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Aug 2020 15:25:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
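The first safety check Tom sketches needs, at minimum, the line ranges each diff touches. A hedged building block for that (POSIX awk assumed; the surrounding bot logic that intersects pgindent's ranges with the ranges of commits since the last auto-reindent is not shown):

```shell
#!/bin/sh
# changed_ranges: read a unified diff on stdin and print "start end" for the
# post-image line range of each hunk, one per line. A bot could compare
# pgindent's ranges against the ranges changed since the last reindent and
# punt to a human on any mismatch; only this parsing step is implemented.
changed_ranges() {
    awk '/^@@ / {
        split($3, a, ",")         # $3 looks like "+12,4" or "+55"
        start = substr(a[1], 2)   # strip the leading "+"
        count = (a[2] == "") ? 1 : a[2]
        # NB: a pure-deletion hunk (count 0) would need special-casing.
        print start, start + count - 1
    }'
}
```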
{
"msg_contents": "On Fri, Aug 14, 2020 at 10:26 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2020-Aug-13, Magnus Hagander wrote:\n>\n> > That is:\n> > 1. Whenever a patch is pushed on master on the main repo a process kicked\n> > off (or maybe wait 5 minutes to coalesce multiple patches if there are)\n> > 2. This process checks out master, and runs pgindent on it\n> > 3. When done, this gets committed to a new branch (or just overwrites an\n> > existing branch of course, we don't need to maintain history here) like\n> > \"master-indented\". This branch can be in a different repo, but one that\n> > starts out as a clone of the main one\n> > 4. A committer (any committer) can then on regular basis examine the\n> > differences by fetch + diff. If they're happy with it, cherry pick it in.\n> > If not, figure out what needs to be done to adjust it.\n>\n> Sounds good -- for branch master.\n>\n\nSo mostly for testing, I've set up a job that does this.\n\nBasically it runs every 15 minutes and if there is a new commit on master\nit will rebase onto the latest master and run pgindent on it. This then\ngets pushed up to a separate repo (postgresql-pgindent.git on\ngit.postgresql.org), and can be viewed there.\n\nTo see the current state of pgindent, view:\nhttps://git.postgresql.org/gitweb/?p=postgresql-pgindent.git;a=commitdiff;h=master-pgindent\n\n(or decorate as wanted to see for example a raw patch format)\n\nIf a committer wants to use it directly, just \"git remote add\" the\npostgresql-pgindent.git and then cherry-pick the branch tip into your own\nrepository, and push. Well, actually that will right now fail because the\ncommit is made by \"Automatic pgindent\" which is not an approved committer,\nbut if we want to do this as a more regular thing, we can certainly fix\nthat.\n\nNote that the only thing the job does is run pgindent. 
It does not attempt\nto do anything with the typedef list at this point.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Fri, 21 Aug 2020 18:50:40 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
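What the job above does can be sketched as a few git commands. Remote and branch names here are assumptions (modeled on the postgresql-pgindent.git setup described), and setting RUN=echo turns the whole thing into a dry run:

```shell
#!/bin/sh
# Rough sketch of the 15-minute job described above: rebuild the indented
# branch from the latest master, run pgindent, and force-push the result.
# Remote/branch names are assumptions; set RUN=echo to only print commands.
RUN="${RUN:-}"

reindent_master() {
    $RUN git fetch origin master
    $RUN git checkout -B master-pgindent origin/master
    $RUN src/tools/pgindent/pgindent
    $RUN git commit -am "Automatic pgindent"
    $RUN git push -f indented master-pgindent
}
```

Since history on the indented branch is explicitly not maintained, the force-push is fine; a committer cherry-picks the branch tip rather than merging it.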
{
"msg_contents": "As a not very frequent submitter, on all the patches that I submit I\nkeep running into this problem. I have two open patchsets for\nlibpq[1][2] both of which currently include the same initial \"run\npgindent\" patch in addition to the actual patch, just so I can\nactually run it on my own patch because a9e9a9f caused formatting to\ntotally be off in the libpq directory. And there's lots of other\nchanges that pgindent wants to make, which are visible on the job that\nMagnus has set up.\n\nTo me a master branch that pgindent never complains about sounds\namazing! And I personally think rejection of unindented pushes and\ncfbot complaining about unindented patches would be a very good thing,\nbecause that seems to be the only solution that could achieve that.\n\nHaving cfbot complain also doesn't sound like a crazy burden for\nsubmitters. Many open-source projects have CI complaining if code\nformatting does not pass automatic formatting tools. As long as there\nis good documentation on how to install and run pgindent I don't think\nit should be a big problem. A link to those docs could even be\nincluded in the failing CI job its error message. A pre-commit hook\nthat submitters/committers could install would be super useful too.\nSince right now I sometimes forget to run pgindent, especially since\nthere's no editor integration (that I know of) for pgindent.\n\nSide-question: What's the reason why pgindent is used instead of some\nmore \"modern\" code formatter that doesn't require keeping\ntypedefs.list up to date for good looking output? (e.g. uncrustify or\nclang-format) Because that would also allow for easy editor\nintegration.\n\n[1]: https://commitfest.postgresql.org/41/3511/\n[2]: https://commitfest.postgresql.org/41/3679/\n\n\n",
"msg_date": "Fri, 20 Jan 2023 10:43:50 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 10:43:50AM +0100, Jelte Fennema wrote:\n> Side-question: What's the reason why pgindent is used instead of some\n> more \"modern\" code formatter that doesn't require keeping\n> typedefs.list up to date for good looking output? (e.g. uncrustify or\n> clang-format) Because that would also allow for easy editor\n> integration.\n\nGood question. Our last big pgindent dicussion was in 2017, where I\nsaid:\n\n\thttps://www.postgresql.org/message-id/flat/20170612213525.GA4074%40momjian.us#a96eac96c147ebcc1de86fe2356a160d\n\t\n\tUnderstood. You would think that with the number of open-source\n\tprograms written in C that there would be more interest in C formatting\n\ttools. Is the Postgres community the only ones with specific\n\trequirements, or is it just that we settled on an older tool and can't\n\teasily change? I have reviewed the C formatting options a few times\n\tover the years and every time the other options were worse than what we\n\thad.\n\nWe also discussed it in 2011, and this email was key for me:\n\n\thttps://www.postgresql.org/message-id/flat/201106220218.p5M2InB08144%40momjian.us#096cbcf02cb58c7d6c49bc79d2c79317\n\t\n\tI am excited Andrew has done this. It has been on my TODO list for a\n\twhile --- I was hoping someday we could switch to GNU indent but gave up\n\tafter the GNU indent report from Greg Stark that exactly matched my\n\texperience years ago:\n\t\n\t\thttp://archives.postgresql.org/pgsql-hackers/2011-04/msg01436.php\n\t\n\tBasically, GNU indent has new bugs, but bugs that are harder to work\n\taround than the BSD indent bugs.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Fri, 20 Jan 2023 11:42:33 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Jelte Fennema <postgres@jeltef.nl> writes:\n> To me a master branch that pgindent never complains about sounds\n> amazing! And I personally think rejection of unindented pushes and\n> cfbot complaining about unindented patches would be a very good thing,\n> because that seems to be the only solution that could achieve that.\n\nThe core problem here is that requiring that would translate to\nrequiring every code contributor to have a working copy of pg_bsd_indent.\nMaybe that's not a huge lift, but it'd be YA obstacle to new contributors,\nand we don't need any more of those.\n\nYeah, if we switched to some other tool maybe we could reduce the size\nof that problem. But as Bruce replied, we've not found another one that\n(a) can be coaxed to make output comparable to what we're accustomed to\nand (b) seems decently well maintained.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Jan 2023 12:09:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-20 12:09:05 -0500, Tom Lane wrote:\n> Jelte Fennema <postgres@jeltef.nl> writes:\n> > To me a master branch that pgindent never complains about sounds\n> > amazing! And I personally think rejection of unindented pushes and\n> > cfbot complaining about unindented patches would be a very good thing,\n> > because that seems to be the only solution that could achieve that.\n> \n> The core problem here is that requiring that would translate to\n> requiring every code contributor to have a working copy of pg_bsd_indent.\n\nWouldn't just every committer suffice?\n\n\n> Maybe that's not a huge lift, but it'd be YA obstacle to new contributors,\n> and we don't need any more of those.\n> \n> Yeah, if we switched to some other tool maybe we could reduce the size\n> of that problem. But as Bruce replied, we've not found another one that\n> (a) can be coaxed to make output comparable to what we're accustomed to\n> and (b) seems decently well maintained.\n\nOne question around this is how much change we'd accept. clang-format for\nexample is well maintained and can get somewhat close to our style - but\nthere are things that can't easily be approximated.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Jan 2023 09:58:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-20 12:09:05 -0500, Tom Lane wrote:\n>> The core problem here is that requiring that would translate to\n>> requiring every code contributor to have a working copy of pg_bsd_indent.\n\n> Wouldn't just every committer suffice?\n\nNot if we have cfbot complaining about it.\n\n(Another problem here is that there's a sizable subset of committers\nwho clearly just don't care, and I'm not sure we can convince them to.)\n\n>> Yeah, if we switched to some other tool maybe we could reduce the size\n>> of that problem. But as Bruce replied, we've not found another one that\n>> (a) can be coaxed to make output comparable to what we're accustomed to\n>> and (b) seems decently well maintained.\n\n> One question around this is how much change we'd accept. clang-format for\n> example is well maintained and can get somewhat close to our style - but\n> there are things that can't easily be approximated.\n\nIf somebody wants to invest the effort in seeing how close we can get\nand what the remaining delta would look like, we could have a discussion\nabout whether it's an acceptable change. I don't think anyone has\ntried with clang-format.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Jan 2023 13:19:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-01-20 Fr 13:19, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2023-01-20 12:09:05 -0500, Tom Lane wrote:\n>>> The core problem here is that requiring that would translate to\n>>> requiring every code contributor to have a working copy of pg_bsd_indent.\n>> Wouldn't just every committer suffice?\n> Not if we have cfbot complaining about it.\n>\n> (Another problem here is that there's a sizable subset of committers\n> who clearly just don't care, and I'm not sure we can convince them to.)\n\n\nI think we could do better with some automation tooling for committers\nhere. One low-risk and simple change would be to provide a\nnon-destructive mode for pgindent that would show you the changes if any\nit would make. That could be worked into a git pre-commit hook that\ncommitters could deploy. I can testify to the usefulness of such hooks -\nI have one that while not perfect has saved me on at least two occasions\nfrom forgetting to bump the catalog version.\n\nI'll take a look at fleshing this out, for my own if no-one else's use.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 21 Jan 2023 08:26:05 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-01-21 Sa 08:26, Andrew Dunstan wrote:\n> On 2023-01-20 Fr 13:19, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> On 2023-01-20 12:09:05 -0500, Tom Lane wrote:\n>>>> The core problem here is that requiring that would translate to\n>>>> requiring every code contributor to have a working copy of pg_bsd_indent.\n>>> Wouldn't just every committer suffice?\n>> Not if we have cfbot complaining about it.\n>>\n>> (Another problem here is that there's a sizable subset of committers\n>> who clearly just don't care, and I'm not sure we can convince them to.)\n>\n> I think we could do better with some automation tooling for committers\n> here. One low-risk and simple change would be to provide a\n> non-destructive mode for pgindent that would show you the changes if any\n> it would make. That could be worked into a git pre-commit hook that\n> committers could deploy. I can testify to the usefulness of such hooks -\n> I have one that while not perfect has saved me on at least two occasions\n> from forgetting to bump the catalog version.\n>\n> I'll take a look at fleshing this out, for my own if no-one else's use.\n>\n>\n\nHere's a quick patch for this. I have it in mind to use like this in a\npre-commit hook:\n\n # only do this on master\n test `git rev-parse --abbrev-ref HEAD` = \"master\" || exit 0\n\n src/tools/pgindent/pg_indent --silent `git diff --cached --name-only` || \\\n\n { echo \"Need a pgindent run\" >&2 ; exit 1; }\n\n\nThe committer could then run\n\n src/tools/pgindent/pg_indent --show-diff `git diff --cached --name-only`\n\nto see what changes it thinks are needed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 21 Jan 2023 10:00:31 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-01-21 Sa 10:00, Andrew Dunstan wrote:\n> On 2023-01-21 Sa 08:26, Andrew Dunstan wrote:\n>> On 2023-01-20 Fr 13:19, Tom Lane wrote:\n>>> Andres Freund <andres@anarazel.de> writes:\n>>>> On 2023-01-20 12:09:05 -0500, Tom Lane wrote:\n>>>>> The core problem here is that requiring that would translate to\n>>>>> requiring every code contributor to have a working copy of pg_bsd_indent.\n>>>> Wouldn't just every committer suffice?\n>>> Not if we have cfbot complaining about it.\n>>>\n>>> (Another problem here is that there's a sizable subset of committers\n>>> who clearly just don't care, and I'm not sure we can convince them to.)\n>> I think we could do better with some automation tooling for committers\n>> here. One low-risk and simple change would be to provide a\n>> non-destructive mode for pgindent that would show you the changes if any\n>> it would make. That could be worked into a git pre-commit hook that\n>> committers could deploy. I can testify to the usefulness of such hooks -\n>> I have one that while not perfect has saved me on at least two occasions\n>> from forgetting to bump the catalog version.\n>>\n>> I'll take a look at fleshing this out, for my own if no-one else's use.\n>>\n>>\n> Here's a quick patch for this. I have it in mind to use like this in a\n> pre-commit hook:\n>\n> # only do this on master\n> test `git rev-parse --abbrev-ref HEAD` = \"master\" || exit 0\n>\n> src/tools/pgindent/pg_indent --silent `git diff --cached --name-only` || \\\n>\n> { echo \"Need a pgindent run\" >&2 ; exit 1; }\n>\n>\n> The committer could then run\n>\n> src/tools/pgindent/pg_indent --show-diff `git diff --cached --name-only`\n>\n> to see what changes it thinks are needed.\n>\n>\nThis time with patch.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 21 Jan 2023 10:02:15 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-01-21 Sa 10:02, Andrew Dunstan wrote:\n> On 2023-01-21 Sa 10:00, Andrew Dunstan wrote:\n>> On 2023-01-21 Sa 08:26, Andrew Dunstan wrote:\n>>> On 2023-01-20 Fr 13:19, Tom Lane wrote:\n>>>> Andres Freund <andres@anarazel.de> writes:\n>>>>> On 2023-01-20 12:09:05 -0500, Tom Lane wrote:\n>>>>>> The core problem here is that requiring that would translate to\n>>>>>> requiring every code contributor to have a working copy of pg_bsd_indent.\n>>>>> Wouldn't just every committer suffice?\n>>>> Not if we have cfbot complaining about it.\n>>>>\n>>>> (Another problem here is that there's a sizable subset of committers\n>>>> who clearly just don't care, and I'm not sure we can convince them to.)\n>>> I think we could do better with some automation tooling for committers\n>>> here. One low-risk and simple change would be to provide a\n>>> non-destructive mode for pgindent that would show you the changes if any\n>>> it would make. That could be worked into a git pre-commit hook that\n>>> committers could deploy. I can testify to the usefulness of such hooks -\n>>> I have one that while not perfect has saved me on at least two occasions\n>>> from forgetting to bump the catalog version.\n>>>\n>>> I'll take a look at fleshing this out, for my own if no-one else's use.\n>>>\n>>>\n>> Here's a quick patch for this. I have it in mind to use like this in a\n>> pre-commit hook:\n>>\n>> # only do this on master\n>> test `git rev-parse --abbrev-ref HEAD` = \"master\" || exit 0\n>>\n>> src/tools/pgindent/pg_indent --silent `git diff --cached --name-only` || \\\n>>\n>> { echo \"Need a pgindent run\" >&2 ; exit 1; }\n>>\n>>\n>> The committer could then run\n>>\n>> src/tools/pgindent/pg_indent --show-diff `git diff --cached --name-only`\n>>\n>> to see what changes it thinks are needed.\n>>\n>>\n> This time with patch.\n>\n>\n... with typo fixed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 21 Jan 2023 10:24:02 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
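Put together as an actual hook, the outline above looks roughly like the sketch below. The script name is written as pgindent, --silent and --show-diff are the options proposed in this thread, and the GIT/PGINDENT variables are an addition of this sketch so the logic can be exercised outside a real checkout:

```shell
#!/bin/sh
# Sketch of the pre-commit hook outlined above. GIT and PGINDENT are
# injectable purely so the logic can be tested without a PostgreSQL tree.
GIT="${GIT:-git}"
PGINDENT="${PGINDENT:-src/tools/pgindent/pgindent}"

check_indent() {
    # Only guard commits on master.
    [ "$($GIT rev-parse --abbrev-ref HEAD)" = "master" ] || return 0

    files=$($GIT diff --cached --name-only)
    [ -n "$files" ] || return 0

    $PGINDENT --silent $files || {
        echo "Need a pgindent run; inspect with: $PGINDENT --show-diff $files" >&2
        return 1
    }
}
```

Installed as .git/hooks/pre-commit, the script would end with a bare `check_indent` call, so a non-zero status rejects the mis-indented commit.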
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n>>> I think we could do better with some automation tooling for committers\n>>> here. One low-risk and simple change would be to provide a\n>>> non-destructive mode for pgindent that would show you the changes if any\n>>> it would make. That could be worked into a git pre-commit hook that\n>>> committers could deploy. I can testify to the usefulness of such hooks -\n>>> I have one that while not perfect has saved me on at least two occasions\n>>> from forgetting to bump the catalog version.\n\nThat sounds like a good idea from here. I do not think we want a\nmandatory commit filter, but if individual committers care to work\nthis into their process in some optional way, great! I can think\nof ways I'd use it during patch development, too.\n\n(One reason not to want a mandatory filter is that you might wish\nto apply pgindent as a separate commit, so that you can then\nput that commit into .git-blame-ignore-revs. This could be handy\nfor example when a patch needs to change the nesting level of a lot\nof pre-existing code, without making changes in it otherwise.)\n\n>> This time with patch.\n\n> ... with typo fixed.\n\nLooks reasonable, but you should also update\nsrc/tools/pgindent/pgindent.man, which AFAICT is our only\ndocumentation for pgindent switches. (Is it time for a\n--help option in pgindent?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 21 Jan 2023 11:10:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "... btw, can we get away with making the diff run be \"diff -upd\"\nnot just \"diff -u\"? I find diff output for C files noticeably\nmore useful with those options, but I'm unsure about their\nportability.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 21 Jan 2023 11:24:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 10:43:50AM +0100, Jelte Fennema wrote:\n> Side-question: What's the reason why pgindent is used instead of some\n> more \"modern\" code formatter that doesn't require keeping\n> typedefs.list up to date for good looking output? (e.g. uncrustify or\n> clang-format) Because that would also allow for easy editor\n> integration.\n\nOne reason the typedef list is required is a quirk of the C syntax. \nMost languages have a lexer/scanner, which tokenizes, and a parser,\nwhich parses. The communication is usually one-way, lexer to parser. \nFor C, typedefs require the parser to feed new typedefs back into the\nlexer:\n\n\thttp://calculist.blogspot.com/2009/02/c-typedef-parsing-problem.html\n\nBSD indent doesn't have that feedback mechanism, probably because it\ndoesn't fully parse the C file. Therefore, we have to supply typedefs\nmanually, and for Postgres we pull them from debug-enabled binaries in\nour buildfarm. The problem with that is you often import typedefs from\nsystem headers, and the typedefs apply to all C files, not just the ones\nwhere the typedefs are visible.\n\nI don't see uncrustify or clang-format supporting typedef lists so maybe\nthey implemented this feedback loop. It would be good to see if we can\nget either of these tools to match our formatting.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Sat, 21 Jan 2023 12:30:32 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Jan 21, 2023 at 9:30 AM Bruce Momjian <bruce@momjian.us> wrote:\n> I don't see uncrustify or clang-format supporting typedef lists so maybe\n> they implemented this feedback loop. It would be good to see if we can\n> get either of these tools to match our formatting.\n\nI personally use clang-format for Postgres development work, since it\nintegrates nicely with my text editor, and can be configured to\nproduce approximately the same result as pgindent (certainly good\nenough when working on a patch that's still far from a committable\nstate). I'm fairly sure that clang-format has access to a full AST\nfrom the clang compiler, which is the ideal approach - at least in\ntheory.\n\nIn practice this approach tends to run into problems when the relevant\nAST isn't available. For example, if there's code that only builds on\nWindows, maybe it won't work at all (at least on my Linux system).\nThis doesn't really bother me currently, since I only rely on\nclang-format as a first pass sort of thing. Maybe I could figure out a\nbetter way to deal with such issues, but right now I don't have much\nincentive to.\n\nAnother advantage of clang-format is that it's a known quantity. For\nexample there is direct support for it built into meson, with bells\nand whistles such as CI support:\n\nhttps://mesonbuild.com/Code-formatting.html\n\nMy guess is that moving to clang-format would require giving up some\nflexibility, but getting much better integration with text editors and\ntools like meson in return. It would probably make it practical to\nhave much stronger rules about how committed code must be indented --\nrules that are practical, and can actually be enforced. That trade-off\nseems likely to be worth it in my view, though it's not something that\nI feel too strongly about.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 21 Jan 2023 12:21:03 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> In practice this approach tends to run into problems when the relevant\n> AST isn't available. For example, if there's code that only builds on\n> Windows, maybe it won't work at all (at least on my Linux system).\n\nHmm, that could be a deal-breaker. It's not going to be acceptable\nto have to pgindent different parts of the system on different platforms\n... at least not unless we can segregate them on the file level, and\neven that would have a large PITA factor.\n\nStill, we won't know unless someone makes a serious experiment with it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 21 Jan 2023 15:59:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Jan 21, 2023 at 12:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm, that could be a deal-breaker. It's not going to be acceptable\n> to have to pgindent different parts of the system on different platforms\n> ... at least not unless we can segregate them on the file level, and\n> even that would have a large PITA factor.\n\nIt's probably something that could be worked around. My remarks are\nbased on some dim memories of dealing with the tool before I arrived\nat a configuration that works well enough for me. Importantly,\nclang-format doesn't require you to futz around with Makefiles or\nobjdump or anything like that -- that's a huge plus. It doesn't seem\nto impose any requirements on how I build Postgres at all (I generally\nuse gcc, not clang).\n\nEven if these kinds of issues proved to be a problem for the person\ntasked with running clang-format against the whole tree periodically,\nthey still likely wouldn't affect most of us. It's quite convenient to\nuse clang-format from an editor -- it can be invoked very\nincrementally, against a small range of lines at a time. It's pretty\nmuch something that I can treat like the built-in indent for my\neditor. It's vastly different to the typical pgindent workflow.\n\n> Still, we won't know unless someone makes a serious experiment with it.\n\nThere is one thing about clang-format that I find mildly infuriating:\nit can indent function declarations in the way that I want it to, and\nit can indent variable declarations in the way that I want it to. It\njust can't do both at the same time, because they're both controlled\nby AlignConsecutiveDeclarations.\n\nOf course the way that I want to do things is (almost by definition)\nthe pgindent way, at least right now -- it's not necessarily about my\nfixed preferences (though it can be hard to tell!). It's really not\nsurprising that clang-format cannot quite perfectly simulate pgindent.\nHow flexible can we be about stuff like that? 
Obviously there is no\nclear answer right now.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 21 Jan 2023 14:05:41 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Of course the way that I want to do things is (almost by definition)\n> the pgindent way, at least right now -- it's not necessarily about my\n> fixed preferences (though it can be hard to tell!). It's really not\n> surprising that clang-format cannot quite perfectly simulate pgindent.\n> How flexible can we be about stuff like that? Obviously there is no\n> clear answer right now.\n\nI don't feel wedded to every last detail of what pgindent does (and\nespecially not the bugs). But I think if the new tool is not a pretty\nclose match we'll be in for years of back-patching pain. We have made\nchanges in pgindent itself in the past, and the patching consequences\nweren't *too* awful, but the changes weren't very big either.\n\nAs I said upthread, this is really impossible to answer without a\nconcrete proposal of how to configure clang-format and a survey of\nwhat diffs we'd wind up with.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 21 Jan 2023 17:20:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-21 14:05:41 -0800, Peter Geoghegan wrote:\n> On Sat, Jan 21, 2023 at 12:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Hmm, that could be a deal-breaker. It's not going to be acceptable\n> > to have to pgindent different parts of the system on different platforms\n> > ... at least not unless we can segregate them on the file level, and\n> > even that would have a large PITA factor.\n\nUnless I miss something, I don't think clang-format actually does that level\nof C parsing - you can't pass include paths etc, so it really can't.\n\n\n> It's probably something that could be worked around. My remarks are\n> based on some dim memories of dealing with the tool before I arrived\n> at a configuration that works well enough for me.\n\nCould you share your .clang-format?\n\n\n\n> > Still, we won't know unless someone makes a serious experiment with it.\n> \n> There is one thing about clang-format that I find mildly infuriating:\n> it can indent function declarations in the way that I want it to, and\n> it can indent variable declarations in the way that I want it to. It\n> just can't do both at the same time, because they're both controlled\n> by AlignConsecutiveDeclarations.\n> \n> Of course the way that I want to do things is (almost by definition)\n> the pgindent way, at least right now -- it's not necessarily about my\n> fixed preferences (though it can be hard to tell!). It's really not\n> surprising that clang-format cannot quite perfectly simulate pgindent.\n> How flexible can we be about stuff like that? Obviously there is no\n> clear answer right now.\n\nI personally find the current indentation of variables assignment deeply\nunhelpful - but changing it would be a very noisy change.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 21 Jan 2023 14:43:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-21 17:20:35 -0500, Tom Lane wrote:\n> I don't feel wedded to every last detail of what pgindent does (and\n> especially not the bugs). But I think if the new tool is not a pretty\n> close match we'll be in for years of back-patching pain. We have made\n> changes in pgindent itself in the past, and the patching consequences\n> weren't *too* awful, but the changes weren't very big either.\n\nPerhaps we could backpatch formatting changes in a way that doesn't\ninconvenience forks too much. One way to deal with such changes is to\n\n1) revert the re-indent commits in $backbranch\n2) merge $backbranch-with-revert into $forkbranch\n3) re-indent $forkbranch\n\nAfter that future changes should be mergable again.\n\nCertainly doesn't do away with the pain entirely, but it does make it perhaps\nbearable\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 21 Jan 2023 14:47:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Jan 21, 2023 at 2:43 PM Andres Freund <andres@anarazel.de> wrote:\n> Unless I miss something, I don't think clang-format actually does that level\n> of C parsing - you can't pass include paths etc, so it really can't.\n\nIt's hard to keep track of, since I also use clangd, which is\ninfluenced by .clang-format for certain completions. It clearly does\nplenty of stuff that requires an AST, since it requires a\ncompile_commands.json. You're the LLVM committer, not me.\n\nAttached is my .clang-format, since you asked for it. It was\noriginally based on stuff that both you and Peter E posted several\nyears back, I believe. Plus the timescaledb one in one or two places.\nI worked a couple of things out through trial and error. It's\nrelatively hard to follow the documentation, and there have been\nfeatures added to newer LLVM versions.\n\n-- \nPeter Geoghegan",
"msg_date": "Sat, 21 Jan 2023 15:32:45 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Attached are a .clang-format file and an uncrustify.cfg file that I\nwhipped up to match the current style at least somewhat.\n\nI was unable to get either of them to produce the weird alignment of\ndeclarations that pgindent outputs. Maybe that's just because I don't\nunderstand what this alignment is supposed to do. Because to me the\ncurrent alignment seems completely random most of the time (and like\nAndres said also not very unhelpful). For clang-format you should use\nat least clang-format 15, otherwise it has some bugs in the alignment\nlogic.\n\nOne thing that clang-format really really wants to change in the\ncodebase, is making it consistent on what to do when\narguments/parameters don't fit on a single line: You can either choose\nto minimally pack everything on the minimum number of lines, or to put\nall arguments on their own separate line. Uncrustify is a lot less\nstrict about that and will leave most things as they currently are.\n\nOverall it seems that uncrustify is a lot more customizable, whether\nthat's a good thing is debatable. Personally I think the fewer\npossible debates I have about codestyle the better, so I usually like\nthe tools with fewer options better. 
But if the customizability allows\nfor closer matching of existing codestyle then it might be worth the\nextra debates and effort in customization in this case.\n\n> 1) revert the re-indent commits in $backbranch\n> 2) merge $backbranch-with-revert into $forkbranch\n> 3) re-indent $forkbranch\n\nThe git commands to achieve this are something like the following.\nI've used such git commands in the past to make big automatic\nrefactors much easier to get in and it has worked quite well so far.\n\ngit checkout forkbranch\ngit rebase {commit-before-re-indent}\n# now, get many merge conflicts\ngit rebase {re-indent-commit}\n# keep your own changes (its --theirs instead of --ours because rebase\nflips it around)\ngit checkout --theirs .\nrun-new-reindent\ngit add .\ngit rebase --continue\n\nOn Sat, 21 Jan 2023 at 23:47, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-01-21 17:20:35 -0500, Tom Lane wrote:\n> > I don't feel wedded to every last detail of what pgindent does (and\n> > especially not the bugs). But I think if the new tool is not a pretty\n> > close match we'll be in for years of back-patching pain. We have made\n> > changes in pgindent itself in the past, and the patching consequences\n> > weren't *too* awful, but the changes weren't very big either.\n>\n> Perhaps we could backpatch formatting changes in a way that doesn't\n> inconvenience forks too much. One way to deal with such changes is to\n>\n> 1) revert the re-indent commits in $backbranch\n> 2) merge $backbranch-with-revert into $forkbranch\n> 3) re-indent $forkbranch\n>\n> After that future changes should be mergable again.\n>\n> Certainly doesn't do away with the pain entirely, but it does make it perhaps\n> bearable\n>\n> Greetings,\n>\n> Andres Freund",
"msg_date": "Sun, 22 Jan 2023 00:39:25 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Jan 21, 2023 at 3:39 PM Jelte Fennema <postgres@jeltef.nl> wrote:\n> I was unable to get either of them to produce the weird alignment of\n> declarations that pgindent outputs. Maybe that's just because I don't\n> understand what this alignment is supposed to do. Because to me the\n> current alignment seems completely random most of the time (and like\n> Andres said also not very unhelpful). For clang-format you should use\n> at least clang-format 15, otherwise it has some bugs in the alignment\n> logic.\n\nReally? I have been using 14, which is quite recent. Did you just\nfigure this out recently? If this is true, then it's certainly\ndiscouraging.\n\nI don't have a problem with the current pgindent alignment of function\nparameters, so not sure what you mean about that. It *was* terrible\nprior to commit e3860ffa, but that was back in 2017 (pg_bsd_indent 2.0\nfixed that problem).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 21 Jan 2023 19:19:15 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Jan 21, 2023 at 2:05 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> There is one thing about clang-format that I find mildly infuriating:\n> it can indent function declarations in the way that I want it to, and\n> it can indent variable declarations in the way that I want it to. It\n> just can't do both at the same time, because they're both controlled\n> by AlignConsecutiveDeclarations.\n\nLooks like I'm not the only one that doesn't like this behavior:\n\nhttps://github.com/llvm/llvm-project/issues/55605\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 21 Jan 2023 20:08:34 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Jan 21, 2023 at 08:08:34PM -0800, Peter Geoghegan wrote:\n> On Sat, Jan 21, 2023 at 2:05 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > There is one thing about clang-format that I find mildly infuriating:\n> > it can indent function declarations in the way that I want it to, and\n> > it can indent variable declarations in the way that I want it to. It\n> > just can't do both at the same time, because they're both controlled\n> > by AlignConsecutiveDeclarations.\n> \n> Looks like I'm not the only one that doesn't like this behavior:\n> \n> https://github.com/llvm/llvm-project/issues/55605\n\nWow, that is very weird. When I work with other open source projects, I\nam regularly surprised by their low quality requirements.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Sat, 21 Jan 2023 23:21:22 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-21 23:21:22 -0500, Bruce Momjian wrote:\n> On Sat, Jan 21, 2023 at 08:08:34PM -0800, Peter Geoghegan wrote:\n> > On Sat, Jan 21, 2023 at 2:05 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > There is one thing about clang-format that I find mildly infuriating:\n> > > it can indent function declarations in the way that I want it to, and\n> > > it can indent variable declarations in the way that I want it to. It\n> > > just can't do both at the same time, because they're both controlled\n> > > by AlignConsecutiveDeclarations.\n> > \n> > Looks like I'm not the only one that doesn't like this behavior:\n> > \n> > https://github.com/llvm/llvm-project/issues/55605\n> \n> Wow, that is very weird. When I work with other open source projects, I\n> am regularly surprised by their low quality requirements.\n\nI don't think not fulfulling precisely our needs is the same as low quality\nrequirements.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 21 Jan 2023 23:05:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-21 15:32:45 -0800, Peter Geoghegan wrote:\n> Attached is my .clang-format, since you asked for it. It was\n> originally based on stuff that both you and Peter E posted several\n> years back, I believe. Plus the timescaledb one in one or two places.\n> I worked a couple of things out through trial and error. It's\n> relatively hard to follow the documentation, and there have been\n> features added to newer LLVM versions.\n\nReformatting with your clang-format end up with something like:\n\nPeter's:\n 2234 files changed, 334753 insertions(+), 289772 deletions(-)\n\nJelte's:\n 2236 files changed, 357500 insertions(+), 306815 deletions(-)\n\nMine (modified to reduce this):\n 2226 files changed, 261538 insertions(+), 256039 deletions(-)\n\n\nWhich is all at least an order of magnitude too much.\n\nJelte's uncrustify:\n 1773 files changed, 121722 insertions(+), 125369 deletions(-)\n\nbetter, but still not great. Also had to prevent a file files it choked on\nfrom getting reindented.\n\n\nI think the main issue with either is that our variable definition indentation\njust can't be emulated by the tools as-is.\n\nSome tools can indent variable definitions so that the variable name starts on\nthe same column. Some can limit that for too long type names. But so far I\nhaven't seen one that cn make that column be column +12. They all look to\nother surrounding types.\n\n\nI hate that variable name indentation with a fiery passion. But switching away\nfrom that intermixed with a lot of other changes isn't going to be fun.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 22 Jan 2023 01:49:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-01-21 Sa 11:10, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>>>> I think we could do better with some automation tooling for committers\n>>>> here. One low-risk and simple change would be to provide a\n>>>> non-destructive mode for pgindent that would show you the changes if any\n>>>> it would make. That could be worked into a git pre-commit hook that\n>>>> committers could deploy. I can testify to the usefulness of such hooks -\n>>>> I have one that while not perfect has saved me on at least two occasions\n>>>> from forgetting to bump the catalog version.\n> That sounds like a good idea from here. I do not think we want a\n> mandatory commit filter, but if individual committers care to work\n> this into their process in some optional way, great! I can think\n> of ways I'd use it during patch development, too.\n\n\nYes, it's intended for use at committers' discretion. We have no way of\nforcing use of a git hook on committers, although we could reject pushes\nthat offend against certain rules. For the reasons you give below that's\nnot a good idea. A pre-commit hook can be avoided by using `git commit\n-n` and there's are similar option/hook for `git merge`.\n\n\n>\n> (One reason not to want a mandatory filter is that you might wish\n> to apply pgindent as a separate commit, so that you can then\n> put that commit into .git-blame-ignore-revs. This could be handy\n> for example when a patch needs to change the nesting level of a lot\n> of pre-existing code, without making changes in it otherwise.)\n\n\nAgreed.\n\n\n> Looks reasonable, but you should also update\n> src/tools/pgindent/pgindent.man, which AFAICT is our only\n> documentation for pgindent switches. (Is it time for a\n> --help option in pgindent?)\n>\n> \t\t\t\n\n\nYes, see revised patch.\n\n\n> ... btw, can we get away with making the diff run be \"diff -upd\"\n> not just \"diff -u\"? 
I find diff output for C files noticeably\n> more useful with those options, but I'm unsure about their\n> portability.\n\n\nI think they are available on Linux, MacOS and FBSD, and on Windows (if\nanyone's actually using it for this) it's likely to be Gnu diff. So I\nthink that's probably enough coverage.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 22 Jan 2023 10:01:13 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n>> ... btw, can we get away with making the diff run be \"diff -upd\"\n>> not just \"diff -u\"? I find diff output for C files noticeably\n>> more useful with those options, but I'm unsure about their\n>> portability.\n\n> I think they are available on Linux, MacOS and FBSD, and on Windows (if\n> anyone's actually using it for this) it's likely to be Gnu diff. So I\n> think that's probably enough coverage.\n\nI checked NetBSD as well, and it has all three too.\n\nPatch looks good to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Jan 2023 11:18:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sat, Jan 21, 2023 at 3:39 PM Jelte Fennema <postgres@jeltef.nl> wrote:\n>> ... For clang-format you should use\n>> at least clang-format 15, otherwise it has some bugs in the alignment\n>> logic.\n\n> Really? I have been using 14, which is quite recent. Did you just\n> figure this out recently? If this is true, then it's certainly\n> discouraging.\n\nIndeed. What that points to is a future where different contributors\nget different results depending on what clang version they have\ninstalled --- and it's not going to be practical to insist that\neverybody have the same version, because AFAICT clang-format is tied\nto clang itself. So that sounds a bit unappetizing.\n\nOne of the few advantages of the current tool situation is that at any\ntime there's just one agreed-on version of pgindent and pgperltidy.\nI've not heard push-back about our policy that you should use\nperltidy version 20170521, because that's not especially connected\nto any other part of one's system. Maybe the same would hold for\nuncrustify, but it's never going to work for pieces of the clang\necosystem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Jan 2023 11:35:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "> But so far I haven't seen one that can make that\n> column be column +12.\n\nThanks for clarifying what the current variable declaration indention\nrule is. Indeed neither uncrustify or clang-format seem to support\nthat. Getting uncrustify to support it might not be too difficult, but\nthe question remains if we even want that.\n\n> But switching away from that intermixed with a lot of other changes isn't going to be fun.\n\nI don't think the amount of pain is really much lower if we reformat\n10,000 or 300,000 lines of code, without automation both would be\nquite painful. But the git commands I shared in my previous email\nshould alleviate most of that pain.\n\n> I don't have a problem with the current pgindent alignment of function\n> parameters, so not sure what you mean about that.\n\nFunction parameter alignment is fine with pgindent imho, but this +12\ncolumn variable declaration thing I personally think is quite weird.\n\n> Really? I have been using 14, which is quite recent. Did you just\n> figure this out recently? If this is true, then it's certainly\n> discouraging.\n\nIt seems this was due to my Ubuntu 22.04 install having clang-format\n14.0.0. After\nupdating it to 14.0.6 by using the official llvm provided packages, I\ndon't have this\nissue on clang-format-14 anymore. To be clear this was an issue in alignment of\nvariable declarations not function parameters.\n\nBut I agree with Tom Lane that this makes clear that whatever tool we\npick we'll need\nto pick a specific version, just like we do now with perltidy. And\nindeed I'm not sure\nhow easy that is with clang. Installing a specific uncrustify version\nis pretty easy btw,\nthe compilation from source is quite quick.\n\n\n",
"msg_date": "Sun, 22 Jan 2023 18:20:49 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "But I do think this discussion about other formatting tools\nis distracting from the main pain point I wanted to discuss:\nour current formatting tool is not run consistently enough.\nThe only thing that another tool will change in this\nregard is that there is no need to update typedefs.list.\nIt doesn't seem like that's such a significant difference\nthat it would change the solution to the first problem.\n\nWhen reading the emails in this discussion from 2 years ago\nit seems like the respondents wouldn't mind updating the\ntypedefs.list manually. And proposed approach number 3\nseemed to have support overall, i.e. fail a push to master\nwhen pgindent was not run on the new commit. Would\nit make sense to simply try that approach and see if\nthere's any big issues with it?\n\n> (Another problem here is that there's a sizable subset of committers\n> who clearly just don't care, and I'm not sure we can convince them to.)\n\nMy guess would be that the main reason is simply\nbecause committers forget it sometimes because\nthere's no automation complaining about it.\n\nOn Sun, 22 Jan 2023 at 18:20, Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n> > But so far I haven't seen one that can make that\n> > column be column +12.\n>\n> Thanks for clarifying what the current variable declaration indention\n> rule is. Indeed neither uncrustify or clang-format seem to support\n> that. Getting uncrustify to support it might not be too difficult, but\n> the question remains if we even want that.\n>\n> > But switching away from that intermixed with a lot of other changes isn't going to be fun.\n>\n> I don't think the amount of pain is really much lower if we reformat\n> 10,000 or 300,000 lines of code, without automation both would be\n> quite painful. 
But the git commands I shared in my previous email\n> should alleviate most of that pain.\n>\n> > I don't have a problem with the current pgindent alignment of function\n> > parameters, so not sure what you mean about that.\n>\n> Function parameter alignment is fine with pgindent imho, but this +12\n> column variable declaration thing I personally think is quite weird.\n>\n> > Really? I have been using 14, which is quite recent. Did you just\n> > figure this out recently? If this is true, then it's certainly\n> > discouraging.\n>\n> It seems this was due to my Ubuntu 22.04 install having clang-format\n> 14.0.0. After\n> updating it to 14.0.6 by using the official llvm provided packages, I\n> don't have this\n> issue on clang-format-14 anymore. To be clear this was an issue in alignment of\n> variable declarations not function parameters.\n>\n> But I agree with Tom Lane that this makes clear that whatever tool we\n> pick we'll need\n> to pick a specific version, just like we do now with perltidy. And\n> indeed I'm not sure\n> how easy that is with clang. Installing a specific uncrustify version\n> is pretty easy btw,\n> the compilation from source is quite quick.\n\n\n",
"msg_date": "Sun, 22 Jan 2023 19:14:24 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Jelte Fennema <postgres@jeltef.nl> writes:\n> When reading the emails in this discussion from 2 years ago\n> it seems like the respondents wouldn't mind updating the\n> typedefs.list manually. And proposed approach number 3\n> seemed to have support overall, i.e. fail a push to master\n> when pgindent was not run on the new commit. Would\n> it make sense to simply try that approach and see if\n> there's any big issues with it?\n\nI will absolutely not accept putting in something that fails pushes\non this basis. There are too many cases where pgindent purity is\nnot an overriding issue. I mentioned a counterexample just upthread:\neven if you are as anal as you could be about indentation, you might\nprefer to separate a logic-changing patch from the ensuing mechanical\nreindentation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Jan 2023 14:20:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Maybe I'm not understanding your issue correctly, but for such\na case you could push two commits at the same time. Apart\nfrom that \"git diff -w\" will hide any whitespace changes so I'm\nnot I personally wouldn't consider it important to split such\nchanges across commits.\n\n\n",
"msg_date": "Sun, 22 Jan 2023 22:19:10 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-22 22:19:10 +0100, Jelte Fennema wrote:\n> Maybe I'm not understanding your issue correctly, but for such\n> a case you could push two commits at the same time.\n\nRight.\n\n\n> Apart from that \"git diff -w\" will hide any whitespace changes so I'm not I\n> personally wouldn't consider it important to split such changes across\n> commits.\n\nI do think it's important. For one, the changes made by pgindent et al aren't\njust whitespace ones. But I think it's also important to be able to see the\nactual changes made in a patch precisely - lots of spurious whitespace changes\ncould indicate a problem.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 22 Jan 2023 14:02:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-22 18:20:49 +0100, Jelte Fennema wrote:\n> > But switching away from that intermixed with a lot of other changes isn't going to be fun.\n> \n> I don't think the amount of pain is really much lower if we reformat\n> 10,000 or 300,000 lines of code, without automation both would be\n> quite painful. But the git commands I shared in my previous email\n> should alleviate most of that pain.\n\nIt's practically not possible to review a 300k line change. And perhaps I'm\nparanoid, but I would have a problem with a commit in the history that's\npractically not reviewable.\n\n\n> > I don't have a problem with the current pgindent alignment of function\n> > parameters, so not sure what you mean about that.\n> \n> Function parameter alignment is fine with pgindent imho, but this +12\n> column variable declaration thing I personally think is quite weird.\n\nI strongly dislike it, I rarely get it right by hand - but it does have some\nbenefit over aligning variable names based on the length of the type names as\nuncrustify/clang-format: In their approach an added local variable can cause\nall the other variables to be re-indented (and their initial value possibly\nwrapped). The fixed alignment doesn't have that issue.\n\nPersonally I think the cost of trying to align local variable names is way way\nhigher than the benefit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 22 Jan 2023 14:38:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I strongly dislike it, I rarely get it right by hand - but it does have some\n> benefit over aligning variable names based on the length of the type names as\n> uncrustify/clang-format: In their approach an added local variable can cause\n> all the other variables to be re-indented (and their initial value possibly\n> wrapped). The fixed alignment doesn't have that issue.\n\nYeah. That's one of my biggest gripes about pgperltidy: if you insert\nanother assignment in a series of assignments, it is very likely to\nreformat all the adjacent assignments because it thinks it's cool to\nmake all the equal signs line up. That's just awful. You can either\nrun pgperltidy on new code before committing, and accept that the feature\npatch will touch a lot of lines it's not making real changes to (thereby\ndirtying the \"git blame\" history) or not do so and thereby commit code\nthat's not passing tidiness checks. Let's *not* adopt any style that\ncauses similar things to start happening in our C code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Jan 2023 17:47:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Jelte Fennema <postgres@jeltef.nl> writes:\n> Maybe I'm not understanding your issue correctly, but for such\n> a case you could push two commits at the same time.\n\nI don't know that much about git commit hooks, but do they really\nonly check the final state of a series of commits?\n\nIn any case, I'm still down on the idea of checking this in a\ncommit hook because of the complexity and lack of transparency\nof such a check. If you think your commit is correctly indented,\nbut the hook (running on somebody else's machine) disagrees,\nhow are you going to debug that? I don't want to get into such\na situation, especially since Murphy's law guarantees that it\nwould mainly bite people under time pressure, like when pushing\na security fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Jan 2023 18:14:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-22 18:20:49 +0100, Jelte Fennema wrote:\n>> I don't think the amount of pain is really much lower if we reformat\n>> 10,000 or 300,000 lines of code, without automation both would be\n>> quite painful. But the git commands I shared in my previous email\n>> should alleviate most of that pain.\n\n> It's practically not possible to review a 300k line change. And perhaps I'm\n> paranoid, but I would have a problem with a commit in the history that's\n> practically not reviewable.\n\nAs far as that goes, if you had concern then you could run the indentation\ntool locally and confirm you got matching results. But this does point up\nthat the processes Jelte suggested all depend critically on indentation\nresults being 100% reproducible by anybody.\n\nSo the more I think about it the less excited I am about depending on\nclang-format, because version skew in peoples' clang installations seems\ninevitable, and there's good reason to fear that that would show up\nas varying indentation results.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Jan 2023 18:28:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sun, Jan 22, 2023 at 3:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So the more I think about it the less excited I am about depending on\n> clang-format, because version skew in peoples' clang installations seems\n> inevitable, and there's good reason to fear that that would show up\n> as varying indentation results.\n\nI have to admit that the way that I was thinking about this was\ncolored by the way that I use clang-format today. I only now realize\nhow different my requirements are to the requirements that we'd have\nfor any tool that gets run against the tree in bulk. In particular, I\ndidn't realize how annoying the non-additive nature of certain\nvariable alignment rules might be until you pointed it out today\n(seems obvious now!).\n\nIn my experience clang-format really shines when you need to quickly\nindent code that is indented in some way that looks completely wrong\n-- it does quite a lot better than pgindent when that's your starting\npoint. It has a reasonable way of balancing competing goals like\nmaximum number of columns (a soft maximum) and how function parameters\nare displayed, which pgindent can't do. It also avoids allowing a\nfunction parameter from a function declaration with its type name on\nits own line, before the variable name -- also beyond the capabilities\nof pgindent IIRC.\n\nFeatures like that make it very useful as a first pass thing, where\nall the bells and whistles have little real downside. Running\nclang-format and then running pgindent tends to produce better results\nthan just running pgindent, at least when working on a new patch.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 22 Jan 2023 15:52:18 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-22 18:28:27 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-01-22 18:20:49 +0100, Jelte Fennema wrote:\n> >> I don't think the amount of pain is really much lower if we reformat\n> >> 10,000 or 300,000 lines of code, without automation both would be\n> >> quite painful. But the git commands I shared in my previous email\n> >> should alleviate most of that pain.\n> \n> > It's practically not possible to review a 300k line change. And perhaps I'm\n> > paranoid, but I would have a problem with a commit in the history that's\n> > practically not reviewable.\n> \n> As far as that goes, if you had concern then you could run the indentation\n> tool locally and confirm you got matching results.\n\nOf course, but I somehow feel a change of formatting should be reviewable to\nat least some degree. Even if it's just to make sure that the tool didn't have\na bug and cause some subtle behavioural change.\n\n\n> So the more I think about it the less excited I am about depending on\n> clang-format, because version skew in peoples' clang installations seems\n> inevitable, and there's good reason to fear that that would show up\n> as varying indentation results.\n\nOne thing that I like about clang-format is that it's possible to teach it\nabout our include order rules (which does find some \"irregularities\"). But of\ncourse that's not enough.\n\n\nIf we decide to move to another tool, I think it might be worth removing a\nfew of the pg_bsd_indent options, that other tools won't be able to emulate,\nfirst. E.g. -di12 -> -di4 would remove a *lot* of the noise from a move to\nanother tool. And be much easier to write manually, but ... :)\n\n\n\nI think I've proposed this before, but I still think that as long as we rely\non pg_bsd_indent, we should have it be part of our source tree and\nautomatically built. 
It's no wonder that barely anybody indents their\npatches, given that it requires building pg_bsd_indent in a separate repo (but\nreferencing our source tree), putting the binary in path, etc.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 22 Jan 2023 16:15:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think I've proposed this before, but I still think that as long as we rely\n> on pg_bsd_indent, we should have it be part of our source tree and\n> automatically built. It's no wonder that barely anybody indents their\n> patches, given that it requires building pg_bsd_indent in a separate repo (but\n> referencing our source tree), putting the binary in path, etc.\n\nHmm ... right offhand, the only objection I can see is that the\npg_bsd_indent files use the BSD 4-clause license, which is not ours.\nHowever, didn't UCB grant a blanket exception years ago that said\nthat people could treat that as the 3-clause license? If we could\nget past the license question, I agree that doing what you suggest\nwould be superior to the current situation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Jan 2023 19:28:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sun, Jan 22, 2023 at 07:28:42PM -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> I think I've proposed this before, but I still think that as long as we rely\n>> on pg_bsd_indent, we should have it be part of our source tree and\n>> automatically built. It's no wonder that barely anybody indents their\n>> patches, given that it requires building pg_bsd_indent in a separate repo (but\n>> referencing our source tree), putting the binary in path, etc.\n> \n> Hmm ... right offhand, the only objection I can see is that the\n> pg_bsd_indent files use the BSD 4-clause license, which is not ours.\n> However, didn't UCB grant a blanket exception years ago that said\n> that people could treat that as the 3-clause license? If we could\n> get past the license question, I agree that doing what you suggest\n> would be superior to the current situation.\n\n+1.\n--\nMichael",
"msg_date": "Mon, 23 Jan 2023 09:37:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-01-22 Su 18:14, Tom Lane wrote:\n> Jelte Fennema <postgres@jeltef.nl> writes:\n>> Maybe I'm not understanding your issue correctly, but for such\n>> a case you could push two commits at the same time.\n> I don't know that much about git commit hooks, but do they really\n> only check the final state of a series of commits?\n\n\nThe pre-commit hook is literally run every time you do `git commit`. But\nit's only run on your local instance and only if you have enabled it.\nIt's not project-wide.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 22 Jan 2023 19:50:10 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-22 19:50:10 -0500, Andrew Dunstan wrote:\n> On 2023-01-22 Su 18:14, Tom Lane wrote:\n> > Jelte Fennema <postgres@jeltef.nl> writes:\n> >> Maybe I'm not understanding your issue correctly, but for such\n> >> a case you could push two commits at the same time.\n> > I don't know that much about git commit hooks, but do they really\n> > only check the final state of a series of commits?\n> \n> \n> The pre-commit hook is literally run every time you do `git commit`. But\n> it's only run on your local instance and only if you have enabled it.\n> It's not project-wide.\n\nThere's different hooks. Locally, I think pre-push would be better suited to\nthis than pre-commit (I often save WIP work in local branches, it'd be pretty\nannoying if some indentation thing swore at me).\n\nBut there's also hooks like pre-receive, that allow doing validation on the\nserver side. Which obviously would be project wide...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 22 Jan 2023 17:03:02 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-22 19:28:42 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I think I've proposed this before, but I still think that as long as we rely\n> > on pg_bsd_indent, we should have it be part of our source tree and\n> > automatically built. It's no wonder that barely anybody indents their\n> > patches, given that it requires building pg_bsd_ident in a separate repo (but\n> > referencing our sourc etree), putting the binary in path, etc.\n> \n> Hmm ... right offhand, the only objection I can see is that the\n> pg_bsd_indent files use the BSD 4-clause license, which is not ours.\n> However, didn't UCB grant a blanket exception years ago that said\n> that people could treat that as the 3-clause license?\n\nYep:\nhttps://www.freebsd.org/copyright/license/\n\n\nNOTE: The copyright of UC Berkeley’s Berkeley Software Distribution (\"BSD\") source has been updated. The copyright addendum may be found at ftp://ftp.cs.berkeley.edu/pub/4bsd/README.Impt.License.Change and is included below.\n\n July 22, 1999\n\n To All Licensees, Distributors of Any Version of BSD:\n\n As you know, certain of the Berkeley Software Distribution (\"BSD\") source code files require that further distributions of products containing all or portions of the software, acknowledge within their advertising materials that such products contain software developed by UC Berkeley and its contributors.\n\n Specifically, the provision reads:\n\n * 3. All advertising materials mentioning features or use of this software\n * must display the following acknowledgement:\n * This product includes software developed by the University of\n * California, Berkeley and its contributors.\"\n\n Effective immediately, licensees and distributors are no longer required to include the acknowledgement within advertising materials. 
Accordingly, the foregoing paragraph of those BSD Unix files containing it is hereby deleted in its entirety.\n\n William Hoskins\n Director, Office of Technology Licensing\n University of California, Berkeley\n\n\nI did check, and the FTP bit is still downloadable. A bit awkward though, now\nthat browsers have/are removing ftp support.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 22 Jan 2023 17:10:02 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-22 19:28:42 -0500, Tom Lane wrote:\n>> Hmm ... right offhand, the only objection I can see is that the\n>> pg_bsd_indent files use the BSD 4-clause license, which is not ours.\n>> However, didn't UCB grant a blanket exception years ago that said\n>> that people could treat that as the 3-clause license?\n\n> Yep:\n> https://www.freebsd.org/copyright/license/\n\nCool. I'll take a look at doing this later (probably after the current\nCF) unless somebody beats me to it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Jan 2023 20:29:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "I whipped up a pre-commit hook which automatically runs pgindent on the\nchanged files in the commit. It won't add any changes automatically, but\ninstead it fails the commit if it made any changes. That way you can add\nthem manually if you want. Or if you don't, you can simply run git commit\nagain without adding the changes. (or you can use the --no-verify flag of\ngit commit to skip the hook completely)\n\nIt did require adding some extra flags to pgindent. While it only required\nthe --staged-only and --fail-on-changed flags, the --changed-only flag\nwas easy to add and seemed generally useful.\n\nI also attached a patch which adds the rules for formatting pgindent\nitself to the .editorconfig file.\n\n> Locally, I think pre-push would be better suited to\n> this than pre-commit (I often save WIP work in local branches, it'd be pretty\n> annoying if some indentation thing swore at me).\n\nI personally prefer pre-commit hooks, since then I don't have to\ngo back and change some commit I made some time ago. And I\nthink with the easy opt-out that this hook has it would work fine\nfor my own workflow. But it shouldn't be hard to also include a\npre-push hook too if you want that after all. Then people can\nchoose to install the hook that they prefer.\n\nOn Mon, 23 Jan 2023 at 02:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-01-22 19:28:42 -0500, Tom Lane wrote:\n> >> Hmm ... right offhand, the only objection I can see is that the\n> >> pg_bsd_indent files use the BSD 4-clause license, which is not ours.\n> >> However, didn't UCB grant a blanket exception years ago that said\n> >> that people could treat that as the 3-clause license?\n>\n> > Yep:\n> > https://www.freebsd.org/copyright/license/\n>\n> Cool. I'll take a look at doing this later (probably after the current\n> CF) unless somebody beats me to it.\n>\n> regards, tom lane",
"msg_date": "Mon, 23 Jan 2023 11:44:11 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-01-22 Su 20:03, Andres Freund wrote:\n> Hi,\n>\n> On 2023-01-22 19:50:10 -0500, Andrew Dunstan wrote:\n>> On 2023-01-22 Su 18:14, Tom Lane wrote:\n>>> Jelte Fennema <postgres@jeltef.nl> writes:\n>>>> Maybe I'm not understanding your issue correctly, but for such\n>>>> a case you could push two commits at the same time.\n>>> I don't know that much about git commit hooks, but do they really\n>>> only check the final state of a series of commits?\n>>\n>> The pre-commit hook is literally run every time you do `git commit`. But\n>> it's only run on your local instance and only if you have enabled it.\n>> It's not project-wide.\n> There's different hooks. Locally, I think pre-push would be better suited to\n> this than pre-commit (I often save WIP work in local branches, it'd be pretty\n> annoying if some indentation thing swore at me).\n\n\nYes, me too, so I currently have a filter in my hook that ignores local\nWIP branches. The problem with pre-push is that by the time you're\npushing you have already committed and you would have to go back and\nundo some stuff to fix it. Probably 99 times out of 100 I'd prefer to\ncommit indented code off the bat rather than make a separate indentation\ncommit. But this really illustrates my point: how you do it is up to you.\n\n\n>\n> But there's also hooks like pre-receive, that allow doing validation on the\n> server side. Which obviously would be project wide...\n>\n\nYes, but I think it's been demonstrated (again) that there's no\nconsensus in using those for this purpose.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 23 Jan 2023 05:56:20 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-01-22 Su 11:18, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> ... btw, can we get away with making the diff run be \"diff -upd\"\n>>> not just \"diff -u\"? I find diff output for C files noticeably\n>>> more useful with those options, but I'm unsure about their\n>>> portability.\n>> I think they are available on Linux, MacOS and FBSD, and on Windows (if\n>> anyone's actually using it for this) it's likely to be Gnu diff. So I\n>> think that's probably enough coverage.\n> I checked NetBSD as well, and it has all three too.\n>\n> Patch looks good to me.\n>\n> \t\t\t\n\n\nThanks, pushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 23 Jan 2023 07:11:53 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-01-23 Mo 05:44, Jelte Fennema wrote:\n> I whipped up a pre-commit hook which automatically runs pgindent on the\n> changed files in the commit. It won't add any changes automatically, but\n> instead it fails the commit if it made any changes. That way you can add\n> them manually if you want. Or if you don't, you can simply run git commit\n> again without adding the changes. (or you can use the --no-verify flag of\n> git commit to skip the hook completely)\n>\n> It did require adding some extra flags to pgindent. While it only required\n> the --staged-only and --fail-on-changed flags, the --changed-only flag\n> was easy to add and seemed generally useful.\n\n\nPlease see the changes to pgindent I committed about the same time I got\nyour email. I don't think we need your new flags, as it's possible (and\nalways has been) to provide pgindent with a list of files to be\nindented. Instead of having pgindent run `git diff --name-only ...` the\ngit hook can do it and pass the results to pgindent in its command line.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 23 Jan 2023 09:07:03 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Indeed the flags you added are enough. Attached is a patch\nthat adds an updated pre-commit hook with the same behaviour\nas the one before. I definitely think having a pre-commit hook\nin the repo is beneficial, since writing one that works in all\ncases definitely takes some time.\n\n> as it's possible (and\n> always has been) to provide pgindent with a list of files to be\n> indented.\n\nI guess I didn't realise this was a feature that existed, because\nnone of the documentation mentioned it.\n\nOn Mon, 23 Jan 2023 at 15:07, Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2023-01-23 Mo 05:44, Jelte Fennema wrote:\n> > I whipped up a pre-commit hook which automatically runs pgindent on the\n> > changed files in the commit. It won't add any changes automatically, but\n> > instead it fails the commit if it made any changes. That way you can add\n> > them manually if you want. Or if you don't, you can simply run git commit\n> > again without adding the changes. (or you can use the --no-verify flag of\n> > git commit to skip the hook completely)\n> >\n> > It did require adding some extra flags to pgindent. While it only required\n> > the --staged-only and --fail-on-changed flags, the --changed-only flag\n> > was easy to add and seemed generally useful.\n>\n>\n> Please see the changes to pgindent I committed about the same time I got\n> your email. I don't think we need your new flags, as it's possible (and\n> always has been) to provide pgindent with a list of files to be\n> indented. Instead of having pgindent run `git diff --name-only ...` the\n> git hook can do it and pass the results to pgindent in its command line.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>",
"msg_date": "Mon, 23 Jan 2023 15:49:15 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "I wrote:\n> Cool. I'll take a look at doing this later (probably after the current\n> CF) unless somebody beats me to it.\n\nThinking about that (importing pg_bsd_indent into our main source\ntree) a bit harder:\n\n1. I'd originally thought vaguely that we could teach pgindent\nhow to build pg_bsd_indent automatically. But with a little\nmore consideration, I doubt that would work transparently.\nIt's common (at least for me) to run pgindent in a distclean'd\ntree, where configure results wouldn't be available. It's even\nworse if you habitually use VPATH builds, so that those files\n*never* exist in your source tree. So now I think that we should\nstick to the convention that it's on the user to install\npg_bsd_indent somewhere in their PATH; all we'll be doing with\nthis change is eliminating the step of fetching pg_bsd_indent's\nsource files from somewhere else.\n\n2. Given #1, it'll be prudent to continue having pgindent\ndouble-check that pg_bsd_indent reports a specific version\nnumber. We could imagine starting to use the main Postgres\nversion number for that, but I'm inclined to continue with\nits existing numbering series. One argument for that is\nthat we generally change pg_bsd_indent less often than annually,\nso having it track the main version would end up forcing\nmake-work builds of your installed pg_bsd_indent at least\nonce a year. Also, when we do change pg_bsd_indent, it's\ntypically right before a mass reindentation commit, and those\ndo not happen at the same time as forking a new PG version.\n\n3. If we do nothing special, the first mass reindentation is\ngoing to reformat the pg_bsd_indent sources per PG style,\nwhich is ... er ... not the way they look now. Do we want\nto accept that outcome, or take steps to prevent pgindent\nfrom processing pg_bsd_indent? I have a feeling that manual\ncleanup would be necessary if we let such reindentation\nhappen, but I haven't experimented.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Jan 2023 10:09:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-23 10:09:06 -0500, Tom Lane wrote:\n> 1. I'd originally thought vaguely that we could teach pgindent\n> how to build pg_bsd_indent automatically. But with a little\n> more consideration, I doubt that would work transparently.\n> It's common (at least for me) to run pgindent in a distclean'd\n> tree, where configure results wouldn't be available. It's even\n> worse if you habitually use VPATH builds, so that those files\n> *never* exist in your source tree. So now I think that we should\n> stick to the convention that it's on the user to install\n> pg_bsd_indent somewhere in their PATH; all we'll be doing with\n> this change is eliminating the step of fetching pg_bsd_indent's\n> source files from somewhere else.\n\nI think it'd be better to build pg_bsd_indent automatically as you planned\nearlier - most others don't run pgindent from a distcleaned source tree. And\nit shouldn't be hard to teach pgindent to run from a vpath build directory.\n\nI'd like to get to the point where we can have a simple build target for\na) re-indenting the whole tree\nb) re-indenting the files touched in changes compared to master\n\nIf we add that to the list of things to do before sending a patch upstream,\nwe're a heck of a lot more likely to get decently formatted patches compared\nto today.\n\n\nAs long as we need typedefs.list, I think it'd be good for such a target to\nadd new typedefs found in the local build to typedefs.list (but *not* remove\nold ones, due to platform dependent code). But that's a separate enough\ntopic...\n\n\n> 2. Given #1, it'll be prudent to continue having pgindent\n> double-check that pg_bsd_indent reports a specific version\n> number.\n\n+1\n\n\n> 3. If we do nothing special, the first mass reindentation is\n> going to reformat the pg_bsd_indent sources per PG style,\n> which is ... er ... not the way they look now. Do we want\n> to accept that outcome, or take steps to prevent pgindent\n> from processing pg_bsd_indent? 
I have a feeling that manual\n> cleanup would be necessary if we let such reindentation\n> happen, but I haven't experimented.\n\nI think we should exempt it, initially at least. If somebody decides to invest\na substantial amount of time in pgindent, let's change it, but I'm somewhat\ndoubtful that'll happen anytime soon.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 23 Jan 2023 09:31:36 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-01-23 Mo 09:49, Jelte Fennema wrote:\n> Attached is a patch\n> that adds an updated pre-commit hook with the same behaviour\n> as the one before. I definitely think having a pre-commit hook\n> in the repo is beneficial, since writing one that works in all\n> cases definitely takes some time.\n\n\nNot sure if this should go in the git repo or in the developer wiki.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 23 Jan 2023 13:52:46 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "> Not sure if this should go in the git repo or in the developer wiki.\n\nI would say the git repo is currently the most fitting place, since it\nhas all the existing docs for pgindent. The wiki even links to the\npgindent source directory:\nhttps://wiki.postgresql.org/wiki/Developer_FAQ#What.27s_the_formatting_style_used_in_PostgreSQL_source_code.3F\n\n\n",
"msg_date": "Mon, 23 Jan 2023 21:53:00 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Mon, Jan 23, 2023 at 09:31:36AM -0800, Andres Freund wrote:\n> As long as we need typedefs.list, I think it'd be good for such a target to\n> add new typedefs found in the local build to typedefs.list (but *not* remove\n> old ones, due to platform dependent code). But that's a separate enough\n> topic...\n\nOne issue on requiring patches to have run pgindent previously is\nactually the typedef list. If someone adds a typedef in a commit, they\nwill see different pgindent output in the committed files, and perhaps\nothers, and the new typedefs might only appear after the commit, causing\nlater commits to not match.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Mon, 23 Jan 2023 16:11:50 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "> One issue on requiring patches to have run pgindent previously is\n> actually the typedef list. If someone adds a typedef in a commit, they\n> will see different pgident output in the committed files, and perhaps\n> others, and the new typedefs might only appear after the commit, causing\n> later commits to not match.\n\nI'm not sure I understand the issue you're pointing out. If someone\nchanges the typedef list, imho they want the formatting to change\nbecause of that. So requiring an addition to the typedef list to also\ncommit reindentation to all files that this typedef indirectly impacts\nseems pretty reasonable to me.\n\n\n",
"msg_date": "Tue, 24 Jan 2023 12:44:00 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Jelte Fennema <postgres@jeltef.nl> writes:\n>> One issue on requiring patches to have run pgindent previously is\n>> actually the typedef list. If someone adds a typedef in a commit, they\n>> will see different pgident output in the committed files, and perhaps\n>> others, and the new typedefs might only appear after the commit, causing\n>> later commits to not match.\n\n> I'm not sure I understand the issue you're pointing out. If someone\n> changes the typedef list, imho they want the formatting to change\n> because of that. So requiring an addition to the typedef list to also\n> commit reindentation to all files that this typedef indirectly impacts\n> seems pretty reasonable to me.\n\nI think the issue Bruce is pointing out is that this is another mechanism\nwhereby different people could get different indentation results.\nI fear any policy that is based on an assumption that indentation has\nOne True Result is going to fail.\n\nAs a concrete example, suppose Alice commits some code that uses \"foo\"\nas a variable name, and more or less concurrently, Bob commits something\nthat defines \"foo\" as a typedef. Bob's change is likely to have\nside-effects on the formatting of Alice's code. If they're working in\nwell-separated parts of the source tree, nobody is likely to notice\nthat for awhile --- but whoever next touches the files Alice touched\nwill be in for a surprise, which will be more or less painful depending\non whether we've installed brittle processes.\n\nAs another example, the mechanisms we use to create the typedefs list\nin the first place are pretty squishy/leaky: they depend on which\nbuildfarm animals are running the typedef-generation step, and on\nwhether anything's broken lately in that code --- which happens on\na fairly regular basis (eg [1]). Maybe that could be improved,\nbut I don't see an easy way to capture the set of system-defined\ntypedefs that are in use on platforms other than your own. 
I\ndefinitely do not want to go over to hand maintenance of that list.\n\nI think we need to be content with a \"soft\", it's more-or-less-right\napproach to indentation. As I explained to somebody upthread, the\nmain benefit of this for most people is avoiding the need for a massive\nonce-a-year reindent run that causes merge failures for many pending\npatches. But we don't need to completely eliminate such runs to get\n99.9% of that benefit; we only need to reduce the number of places\nthey're likely to touch.\n\n\t\t\tregards, tom lane\n\n[1] https://github.com/PGBuildFarm/client-code/commit/dcca861554e90d6395c3c153317b0b0e3841f103\n\n\n",
"msg_date": "Tue, 24 Jan 2023 09:54:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andres> Of course, but I somehow feel a change of formatting should be\nreviewable to\nAndres> at least some degree\n\nOne way of reviewing the formatting changes is to compare the compiled\nbinaries.\n\nIf the binaries before and after formatting are the same, then there's a\nhigh chance the behaviour is the same.\n\nVladimir",
"msg_date": "Tue, 24 Jan 2023 18:03:51 +0300",
"msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "> As a concrete example, suppose Alice commits some code that uses \"foo\"\n> as a variable name, and more or less concurrently, Bob commits something\n> that defines \"foo\" as a typedef. Bob's change is likely to have\n> side-effects on the formatting of Alice's code. If they're working in\n> well-separated parts of the source tree, nobody is likely to notice\n> that for awhile --- but whoever next touches the files Alice touched\n> will be in for a surprise, which will be more or less painful depending\n> on whether we've installed brittle processes.\n\nSounds like this conflict could be handled fairly easily by\nhaving a local git hook rerunning pgindent whenever\nyou rebase a commit:\n1. if you changed typedefs.list the hook would format all files\n2. if you didn't it only formats the files that you changed\n\n> As another example, the mechanisms we use to create the typedefs list\n> in the first place are pretty squishy/leaky: they depend on which\n> buildfarm animals are running the typedef-generation step, and on\n> whether anything's broken lately in that code --- which happens on\n> a fairly regular basis (eg [1]). Maybe that could be improved,\n> but I don't see an easy way to capture the set of system-defined\n> typedefs that are in use on platforms other than your own. I\n> definitely do not want to go over to hand maintenance of that list.\n\nWouldn't the automatic addition-only solution that Andres suggested\nsolve this issue? Build farms could still remove unused typedefs\non a regular basis, but commits would at least add typedefs for the\nplatform that the committer uses.\n\n> I think we need to be content with a \"soft\", it's more-or-less-right\n> approach to indentation.\n\nI think that this would already be a significant improvement over the\ncurrent situation. My experience with the current situation is that\nindentation is more-or-less-wrong.\n\n> As I explained to somebody upthread, the\n> main benefit of this for most people is avoiding the need for a massive\n> once-a-year reindent run that causes merge failures for many pending\n> patches.\n\nMerge failures are one issue. But personally the main benefit that\nI would be getting is being able to run pgindent on the files\nI'm editing and get this weird +12 columns formatting correct\nwithout having to manually type it. Without pgindent also\nchanging random parts of the files that someone else touched\na few commits before me.\n\n\n",
"msg_date": "Tue, 24 Jan 2023 17:03:25 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Jelte Fennema <postgres@jeltef.nl> writes:\n> Sounds like this conflict could be handled fairly easily by\n> having a local git hook rerunning pgindent whenever\n> you rebase a commit:\n> 1. if you changed typedefs.list the hook would format all files\n> 2. if you didn't it only formats the files that you changed\n\nI think that would be undesirable, because then reindentation noise\nin completely-unrelated files would get baked into feature commits,\ncomplicating review and messing up \"git blame\" history.\nThe approach we currently have allows reindent effects to be\nseparated into ignorable labeled commits, which is a nice property.\n\n> Merge failures are one issue. But personally the main benefit that\n> I would be getting is being able to run pgindent on the files\n> I'm editing and get this weird +12 columns formatting correct\n> without having to manually type it. Without pgindent also\n> changing random parts of the files that someone else touched\n> a few commits before me.\n\nYeah, that always annoys me too, but I've always considered that\nit's my problem not something I can externalize onto other people.\nThe real bottom line here is that AFAICT, there are fewer committers\nwho care about indent cleanliness than committers who do not, so\nI do not think that the former group get to impose strict rules\non the latter, much as I might wish otherwise.\n\nFWIW, Andrew's recent --show-diff feature for pgindent has\nalready improved my workflow for that. I can do\n\"pgindent --show-diff >fixindent.patch\", manually remove any hunks\nin fixindent.patch that don't pertain to the code I'm working on,\nand apply what remains to fix up my new code. (I had been doing\nsomething basically like this, but with more file-copying steps\nto undo pgindent's edit-in-place behavior.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Jan 2023 11:43:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-01-24 Tu 11:43, Tom Lane wrote:\n> Jelte Fennema <postgres@jeltef.nl> writes:\n>> Sounds like this conflict could be handled fairly easily by\n>> having a local git hook rerunning pgindent whenever\n>> you rebase a commit:\n>> 1. if you changed typedefs.list the hook would format all files\n>> 2. if you didn't it only formats the files that you changed\n> I think that would be undesirable, because then reindentation noise\n> in completely-unrelated files would get baked into feature commits,\n> complicating review and messing up \"git blame\" history.\n> The approach we currently have allows reindent effects to be\n> separated into ignorable labeled commits, which is a nice property.\n>\n>> Merge failures are one issue. But personally the main benefit that\n>> I would be getting is being able to run pgindent on the files\n>> I'm editing and get this weird +12 columns formatting correct\n>> without having to manually type it. Without pgindent also\n>> changing random parts of the files that someone else touched\n>> a few commits before me.\n> Yeah, that always annoys me too, but I've always considered that\n> it's my problem not something I can externalize onto other people.\n> The real bottom line here is that AFAICT, there are fewer committers\n> who care about indent cleanliness than committers who do not, so\n> I do not think that the former group get to impose strict rules\n> on the latter, much as I might wish otherwise.\n>\n> FWIW, Andrew's recent --show-diff feature for pgindent has\n> already improved my workflow for that. I can do\n> \"pgindent --show-diff >fixindent.patch\", manually remove any hunks\n> in fixindent.patch that don't pertain to the code I'm working on,\n> and apply what remains to fix up my new code. (I had been doing\n> something basically like this, but with more file-copying steps\n> to undo pgindent's edit-in-place behavior.)\n>\n> \t\t\t\n\n\nI'm glad it's helpful.\n\nHere's another improvement I think will be useful when the new gadgets\nare used in a git hook: first, look for the excludes file under the\ncurrent directory if we aren't setting $code_base (e.g if we have files\ngiven on the command line), and second apply the exclude patterns to the\ncommand line files as well as to files found using File::Find.\n\nI propose to apply this fairly soon.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 24 Jan 2023 12:00:12 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "> I think that would be undesirable, because then reindentation noise\n> in completely-unrelated files would get baked into feature commits,\n> complicating review and messing up \"git blame\" history.\n\nWith a rebase hook similar to the pre-commit hook that I shared\nupthread, your files will be changed accordingly, but you don't need\nto commit those changes in the same commit as the one that you're\nrebasing. You could append another commit after it. Another option\nwould be to move the typedefs.list change to a separate commit\ntogether with all project wide indentation changes.\n\n> The real bottom line here is that AFAICT, there are fewer committers\n> who care about indent cleanliness than committers who do not, so\n> I do not think that the former group get to impose strict rules\n> on the latter, much as I might wish otherwise.\n\nIs this actually the case? I haven't seen anyone in this thread say\nthey don't care. From my perspective it seems like the unclean\nindents simply come from forgetting to run pgindent once in\na while. And those few forgetful moments add up over a year\nof commits. That's why to me tooling seems the answer here.\nIf the tooling makes it easy not to forget then the problem\ngoes away.\n\n> FWIW, Andrew's recent --show-diff feature for pgindent has\n> already improved my workflow for that. I can do\n> \"pgindent --show-diff >fixindent.patch\", manually remove any hunks\n> in fixindent.patch that don't pertain to the code I'm working on,\n> and apply what remains to fix up my new code. (I had been doing\n> something basically like this, but with more file-copying steps\n> to undo pgindent's edit-in-place behavior.)\n\nYeah, I have a similar workflow with the pre-commit hook that\nI shared. By using \"git checkout -p\" I can remove hunks that\ndon't pertain to my code. Still it would be really nice not\nto have to go through that effort (which is significant for the\nlibpq code that I've been working on, since there's ~50\nincorrectly indented hunks).\n\n> Here's another improvement I think will be useful when the new gadgets\n> are used in a git hook: first, look for the excludes file under the\n> current directory if we aren't setting $code_base (e.g if we have files\n> given on the command line), and second apply the exclude patterns to the\n> command line files as well as to files found using File::Find.\n\nChange looks good to me.\n\n\n",
"msg_date": "Tue, 24 Jan 2023 19:42:18 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 09:54:57AM -0500, Tom Lane wrote:\n> As another example, the mechanisms we use to create the typedefs list\n> in the first place are pretty squishy/leaky: they depend on which\n> buildfarm animals are running the typedef-generation step, and on\n> whether anything's broken lately in that code --- which happens on\n> a fairly regular basis (eg [1]). Maybe that could be improved,\n> but I don't see an easy way to capture the set of system-defined\n> typedefs that are in use on platforms other than your own. I\n> definitely do not want to go over to hand maintenance of that list.\n\nAs I now understand it, we would need to standardize on a typedef list\nat the beginning of each major development cycle, and then only allow\nadditions, and the addition would have to include any pgindent effects\nof the addition.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Tue, 24 Jan 2023 14:04:02 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-01-24 Tu 13:42, Jelte Fennema wrote:\n>> Here's another improvement I think will be useful when the new gadgets\n>> are used in a git hook: first, look for the excludes file under the\n>> current directory if we aren't setting $code_base (e.g if we have files\n>> given on the command line), and second apply the exclude patterns to the\n>> command line files as well as to files found using File::Find.\n> Change looks good to me.\n\n\nThanks, pushed\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 24 Jan 2023 16:11:20 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-01-23 Mo 09:49, Jelte Fennema wrote:\n> Indeed the flags you added are enough. Attached is a patch\n> that adds an updated pre-commit hook with the same behaviour\n> as the one before. I definitely think having a pre-commit hook\n> in the repo is beneficial, since writing one that works in all\n> cases definitely takes some time.\n\n\nI didn't really like your hook, as it forces a reindent, and many people\nwon't want that (for reasons given elsewhere in this thread).\n\nHere's an extract from my pre-commit hook that does that if PGAUTOINDENT\nis set to \"yes\", and otherwise just warns you that you need to run pgindent.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 26 Jan 2023 09:40:39 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Thu, 26 Jan 2023 at 15:40, Andrew Dunstan <andrew@dunslane.net> wrote:\n> I didn't really like your hook, as it forces a reindent, and many people\n> won't want that (for reasons given elsewhere in this thread).\n\nI'm not sure what you mean by \"forces a reindent\". Like I explained\nyou can simply run \"git commit\" again to ignore the changes and\ncommit anyway. As long as the files are indented on your filesystem\nthe hook doesn't care if you actually included the indentation changes\nin the changes that you're currently committing.\n\nSo to be completely clear you can do the following with my hook:\ngit commit # runs pgindent and fails\ngit commit # commits changes anyway\ngit commit -am 'Run pgindent' # commit indentation changes separately\n\nOr what I usually do:\ngit commit # runs pgindent and fails\ngit add --patch # choose relevant changes to add to commit\ngit commit # commit the changes\ngit checkout -- . # undo irrelevant changes on filesystem\n\nHonestly PGAUTOINDENT=no seems stricter, since the only\nway to bypass the failure is now to manually run pgindent\nor git commit with the --no-verify flag.\n\n> files=$(git diff --cached --name-only --diff-filter=ACMR)\n> src/tools/pgindent/pgindent $files\n\nThat seems like it would fail if there's any files or directories with\nspaces in them. Maybe this isn't something we care about though.\n\n> # no need to filter files - pgindent ignores everything that isn't a\n> # .c or .h file\n\nIf the first argument is a non .c or .h file, then pgindent interprets\nit as the typedefs file. So it's definitely important to filter non .c\nand .h files out. Because now if you commit a single\nnon .c or .h file this hook messes up the indentation in all of\nyour files. You can reproduce by running:\nsrc/tools/pgindent/pgindent README\n\n> # only do this on master\n> test \"$branch\" = \"master\" || return 0\n\nI would definitely want a way to disable this check. As a normal\nsubmitter I never work directly on master.\n\n\n",
"msg_date": "Thu, 26 Jan 2023 17:16:52 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-01-26 Th 11:16, Jelte Fennema wrote:\n> On Thu, 26 Jan 2023 at 15:40, Andrew Dunstan <andrew@dunslane.net> wrote:\n>> I didn't really like your hook, as it forces a reindent, and many people\n>> won't want that (for reasons given elsewhere in this thread).\n> I'm not sure what you mean by \"forces a reindent\". Like I explained\n> you can simply run \"git commit\" again to ignore the changes and\n> commit anyway. As long as the files are indented on your filesystem\n> the hook doesn't care if you actually included the indentation changes\n> in the changes that you're currently committing.\n\n\nYour hook does this:\n\n\n+git diff --cached --name-only --diff-filter=ACMR | grep '\\.[ch]$' |\\\n+ xargs src/tools/pgindent/pgindent --silent-diff \\\n+ || {\n+ echo ERROR: Aborting commit because pgindent was not run\n+ git diff --cached --name-only --diff-filter=ACMR | grep\n'\\.[ch]$' | xargs src/tools/pgindent/pgindent\n+ exit 1\n+ }\n\n\nAt this stage the files are now indented, so if it failed and you run\n`git commit` again it will commit with the indention changes.\n\n\n>\n> So to be completely clear you can do the following with my hook:\n> git commit # runs pgindent and fails\n> git commit # commits changes anyway\n> git commit -am 'Run pgindent' # commit indentation changes separately\n>\n> Or what I usually do:\n> git commit # runs pgindent and fails\n> git add --patch # choose relevant changes to add to commit\n> git commit # commit the changes\n> git checkout -- . # undo irrelevant changes on filesystem\n>\n> Honestly PGAUTOINDENT=no seems stricter, since the only\n> way to bypass the failure is now to run manually run pgindent\n> or git commit with the --no-verify flag.\n>\n>> files=$(git diff --cached --name-only --diff-filter=ACMR)\n>> src/tools/pgindent/pgindent $files\n> That seems like it would fail if there's any files or directories with\n> spaces in them. Maybe this isn't something we care about though.\n\n\nWe don't have any, and the filenames git produces are relative to the\ngit root. I don't think this is an issue.\n\n\n>\n>> # no need to filter files - pgindent ignores everything that isn't a\n>> # .c or .h file\n> If the first argument is a non .c or .h file, then pgindent interprets\n> it as the typedefs file. So it's definitely important to filter non .c\n> and .h files out. Because now if you commit a single\n> non .c or .h file this hook messes up the indentation in all of\n> your files. You can reproduce by running:\n> src/tools/pgindent/pgindent README\n\n\n\nI have a patch at [1] to remove this misfeature.\n\n\n>\n>> # only do this on master\n>> test \"$branch\" = \"master\" || return 0\n> I would definitely want a way to disable this check. As a normal\n> submitter I never work directly on master.\n\n\nSure, that's your choice. My intended audience here is committers, who\nof course do work on master.\n\n\ncheers\n\n\nandrew\n\n\n[1] https://postgr.es/m/21bb8573-9e56-812b-84cf-1e4f3c4c2a7b@dunslane.net\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 26 Jan 2023 11:54:11 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "> At this stage the files are now indented, so if it failed and you run\n> `git commit` again it will commit with the indention changes.\n\nNo, because at no point a \"git add\" is happening, so the changes\nmade by pgindent are not staged. As long as you don't run the\nsecond \"git commit\" with the -a flag the commit will be exactly\nthe same as you prepared it before.\n\n> Sure, that's your choice. My intended audience here is committers, who\n> of course do work on master.\n\nYes I understand, I meant it would be nice if the script had an environment\nvariable like PG_COMMIT_HOOK_ALL_BRANCHES (bad name)\nfor this purpose.\n\nOn Thu, 26 Jan 2023 at 17:54, Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2023-01-26 Th 11:16, Jelte Fennema wrote:\n> > On Thu, 26 Jan 2023 at 15:40, Andrew Dunstan <andrew@dunslane.net> wrote:\n> >> I didn't really like your hook, as it forces a reindent, and many people\n> >> won't want that (for reasons given elsewhere in this thread).\n> > I'm not sure what you mean by \"forces a reindent\". Like I explained\n> > you can simply run \"git commit\" again to ignore the changes and\n> > commit anyway. As long as the files are indented on your filesystem\n> > the hook doesn't care if you actually included the indentation changes\n> > in the changes that you're currently committing.\n>\n>\n> Your hook does this:\n>\n>\n> +git diff --cached --name-only --diff-filter=ACMR | grep '\\.[ch]$' |\\\n> + xargs src/tools/pgindent/pgindent --silent-diff \\\n> + || {\n> + echo ERROR: Aborting commit because pgindent was not run\n> + git diff --cached --name-only --diff-filter=ACMR | grep\n> '\\.[ch]$' | xargs src/tools/pgindent/pgindent\n> + exit 1\n> + }\n>\n>\n> At this stage the files are now indented, so if it failed and you run\n> `git commit` again it will commit with the indention changes.\n>\n>\n> >\n> > So to be completely clear you can do the following with my hook:\n> > git commit # runs pgindent and fails\n> > git commit # commits changes anyway\n> > git commit -am 'Run pgindent' # commit indentation changes separately\n> >\n> > Or what I usually do:\n> > git commit # runs pgindent and fails\n> > git add --patch # choose relevant changes to add to commit\n> > git commit # commit the changes\n> > git checkout -- . # undo irrelevant changes on filesystem\n> >\n> > Honestly PGAUTOINDENT=no seems stricter, since the only\n> > way to bypass the failure is now to run manually run pgindent\n> > or git commit with the --no-verify flag.\n> >\n> >> files=$(git diff --cached --name-only --diff-filter=ACMR)\n> >> src/tools/pgindent/pgindent $files\n> > That seems like it would fail if there's any files or directories with\n> > spaces in them. Maybe this isn't something we care about though.\n>\n>\n> We don't have any, and the filenames git produces are relative to the\n> git root. I don't think this is an issue.\n>\n>\n> >\n> >> # no need to filter files - pgindent ignores everything that isn't a\n> >> # .c or .h file\n> > If the first argument is a non .c or .h file, then pgindent interprets\n> > it as the typedefs file. So it's definitely important to filter non .c\n> > and .h files out. Because now if you commit a single\n> > non .c or .h file this hook messes up the indentation in all of\n> > your files. You can reproduce by running:\n> > src/tools/pgindent/pgindent README\n>\n>\n>\n> I have a patch at [1] to remove this misfeature.\n>\n>\n> >\n> >> # only do this on master\n> >> test \"$branch\" = \"master\" || return 0\n> > I would definitely want a way to disable this check. As a normal\n> > submitter I never work directly on master.\n>\n>\n> Sure, that's your choice. My intended audience here is committers, who\n> of course do work on master.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> [1] https://postgr.es/m/21bb8573-9e56-812b-84cf-1e4f3c4c2a7b@dunslane.net\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n\n\n",
"msg_date": "Thu, 26 Jan 2023 18:05:53 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-01-26 Th 12:05, Jelte Fennema wrote:\n>> At this stage the files are now indented, so if it failed and you run\n>> `git commit` again it will commit with the indention changes.\n> No, because at no point a \"git add\" is happening, so the changes\n> made by pgindent are not staged. As long as you don't run the\n> second \"git commit\" with the -a flag the commit will be exactly\n> the same as you prepared it before.\n\n\nHmm, but I usually run with -a, I even have a git alias for it. I guess\nwhat this discussion illustrates is that there are various patterns for\nusing git, and we shouldn't assume that everyone else is using the same\npatterns we are.\n\nI'm still mildly inclined to say this material would be better placed\nin the developer wiki. After all, this isn't the only thing a postgres\ndeveloper might use a git hook for (mine has more material in it than in\nwhat I posted).\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 26 Jan 2023 16:46:46 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Thu, 26 Jan 2023 at 22:46, Andrew Dunstan <andrew@dunslane.net> wrote:\n> Hmm, but I usually run with -a, I even have a git alias for it. I guess\n> what this discussion illustrates is that there are various patters for\n> using git, and we shouldn't assume that everyone else is using the same\n> patterns we are.\n\nI definitely agree that there are lots of ways to use git. And I now\nunderstand why my hook didn't work well for your existing workflow.\n\nI've pretty much unlearned the -a flag. Because the easiest way I've\nbeen able to split up changes into different commits is using \"git add\n-p\", which adds partial pieces of files to the staging area. And that\nworkflow combines terribly with \"git commit -a\" because -a adds all\nthe things that I specifically didn't put in the staging area into the\nfinal commit anyway.\n\n> I'm still mildly inclined to say this material would be better placed\n> in the developer wiki. After all, this isn't the only thing a postgres\n> developer might use a git hook for\n\nI think it should definitely be somewhere. I have a preference for the\nrepo, since I think the docs on codestyle are already in too many\ndifferent places. But the wiki is already much better than having no\nshared hook at all. I mainly think we should try to make it as easy as\npossible for people to commit well indented code.\n\n> (mine has more material in it than in what I posted).\n\nAnything that is useful for the wider community and could be part of\nthis example/template git hook? (e.g. some perltidy automation)\n\n\n",
"msg_date": "Thu, 26 Jan 2023 23:54:41 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-01-26 Th 17:54, Jelte Fennema wrote:\n>\n>> I'm still mildly inclined to say this material would be better placed\n>> in the developer wiki. After all, this isn't the only thing a postgres\n>> developer might use a git hook for\n> I think it should definitely be somewhere. I have a preference for the\n> repo, since I think the docs on codestyle are already in too many\n> different places. But the wiki is already much better than having no\n> shared hook at all. I mainly think we should try to make it as easy as\n> possible for people to commit well indented code.\n>\n>\n\nI've added a section to the wiki at\n<https://wiki.postgresql.org/wiki/Working_with_Git#Using_git_hooks> and\nput a reference to that in the pgindent docco. You can of course add\nsome more info to the wiki if you feel it's necessary.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 27 Jan 2023 09:57:45 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 02:04:02PM -0500, Bruce Momjian wrote:\n> On Tue, Jan 24, 2023 at 09:54:57AM -0500, Tom Lane wrote:\n> > As another example, the mechanisms we use to create the typedefs list\n> > in the first place are pretty squishy/leaky: they depend on which\n> > buildfarm animals are running the typedef-generation step, and on\n> > whether anything's broken lately in that code --- which happens on\n> > a fairly regular basis (eg [1]). Maybe that could be improved,\n> > but I don't see an easy way to capture the set of system-defined\n> > typedefs that are in use on platforms other than your own. I\n> > definitely do not want to go over to hand maintenance of that list.\n> \n> As I now understand it, we would need to standardize on a typedef list\n> at the beginning of each major development cycle, and then only allow\n> additions,\n\nNot to my knowledge. There's no particular obstacle to updating the list more\nfrequently or removing entries.\n\n> and the addition would have to include any pgindent affects\n> of the addition.\n\nYes, a hook intended to enforce pgindent cleanliness should run tree-wide\npgindent when the given commit(s) change the typedef list. typedef list\nchanges essentially become another kind of refactoring that can yield merge\nconflicts. If your commit passed the pgindent check, rebasing it onto a new\ntypedefs list may require further indentation changes. New typedefs don't\ntend to change a lot of old code, so I would expect this sort of conflict to\nbe minor, compared to all the other sources of conflicts.\n\n\n",
"msg_date": "Sat, 28 Jan 2023 17:06:03 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> Yes, a hook intended to enforce pgindent cleanliness should run tree-wide\n> pgindent when the given commit(s) change the typedef list. typedef list\n> changes essentially become another kind of refactoring that can yield merge\n> conflicts. If your commit passed the pgindent check, rebasing it onto a new\n> typedefs list may require further indentation changes. New typedefs don't\n> tend to change a lot of old code, so I would expect this sort of conflict to\n> be minor, compared to all the other sources of conflicts.\n\nIn fact, if a typedef addition *does* affect a lot of old code,\nthat's a good sign that the choice of typedef name ought to be\nrethought: it's evidently conflicting with existing names.\n\nI'm not sure what that observation implies for our standard\npractices here. But it does suggest that \"let pgindent do what\nit wants without human oversight\" probably isn't a good plan.\nWe've seen that to be true for other reasons as well, notably that\nit can destroy the readability of carefully-laid-out comments.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Jan 2023 23:22:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Jan 28, 2023 at 05:06:03PM -0800, Noah Misch wrote:\n> On Tue, Jan 24, 2023 at 02:04:02PM -0500, Bruce Momjian wrote:\n> > On Tue, Jan 24, 2023 at 09:54:57AM -0500, Tom Lane wrote:\n> > > As another example, the mechanisms we use to create the typedefs list\n> > > in the first place are pretty squishy/leaky: they depend on which\n> > > buildfarm animals are running the typedef-generation step, and on\n> > > whether anything's broken lately in that code --- which happens on\n> > > a fairly regular basis (eg [1]). Maybe that could be improved,\n> > > but I don't see an easy way to capture the set of system-defined\n> > > typedefs that are in use on platforms other than your own. I\n> > > definitely do not want to go over to hand maintenance of that list.\n> > \n> > As I now understand it, we would need to standardize on a typedef list\n> > at the beginning of each major development cycle, and then only allow\n> > additions,\n> \n> Not to my knowledge. There's no particular obstacle to updating the list more\n> frequently or removing entries.\n\nWe would need to re-pgindent the tree each time, I think, which would\ncause disruption if we did it too frequently.\n\n> > and the addition would have to include any pgindent affects\n> > of the addition.\n> \n> Yes, a hook intended to enforce pgindent cleanliness should run tree-wide\n> pgindent when the given commit(s) change the typedef list. typedef list\n> changes essentially become another kind of refactoring that can yield merge\n> conflicts. If your commit passed the pgindent check, rebasing it onto a new\n> typedefs list may require further indentation changes. New typedefs don't\n> tend to change a lot of old code, so I would expect this sort of conflict to\n> be minor, compared to all the other sources of conflicts.\n\nAgreed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Mon, 30 Jan 2023 15:42:09 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 03:42:09PM -0500, Bruce Momjian wrote:\n> On Sat, Jan 28, 2023 at 05:06:03PM -0800, Noah Misch wrote:\n> > On Tue, Jan 24, 2023 at 02:04:02PM -0500, Bruce Momjian wrote:\n> > > On Tue, Jan 24, 2023 at 09:54:57AM -0500, Tom Lane wrote:\n> > > > As another example, the mechanisms we use to create the typedefs list\n> > > > in the first place are pretty squishy/leaky: they depend on which\n> > > > buildfarm animals are running the typedef-generation step, and on\n> > > > whether anything's broken lately in that code --- which happens on\n> > > > a fairly regular basis (eg [1]). Maybe that could be improved,\n> > > > but I don't see an easy way to capture the set of system-defined\n> > > > typedefs that are in use on platforms other than your own. I\n> > > > definitely do not want to go over to hand maintenance of that list.\n> > > \n> > > As I now understand it, we would need to standardize on a typedef list\n> > > at the beginning of each major development cycle, and then only allow\n> > > additions,\n> > \n> > Not to my knowledge. There's no particular obstacle to updating the list more\n> > frequently or removing entries.\n> \n> We would need to re-pgindent the tree each time, I think, which would\n> cause disruption if we did it too frequently.\n\nMore important than frequency is how much old code changes. A new typedef\ntypically is an identifier not already appearing in the tree, so no old code\nchanges. A removed typedef typically no longer appears in the tree, so again\nno old code changes. The tree can get those daily; they're harmless.\n\nThe push that adds or removes FooTypedef from the source code is in the best\nposition to react to any surprising indentation consequences of adding or\nremoving FooTypedef from typedefs.list. (Reactions could include choosing a\ndifferent typedef name or renaming incidental matches in older code.) Hence,\nchanging typedefs.list as frequently as it affects the code is less disruptive\nthan changing it once a year. The same applies to challenges like pgindent\nwrecking a non-\"/*----------\" comment. Such breakage is hard to miss when\nit's part of the push that crafts the comment; it's easier to miss in a bulk,\nend-of-cycle pgindent.\n\nRegarding the concern about a pre-receive hook blocking an emergency push, the\nhook could approve every push where a string like \"pgindent: no\" appears in a\ncommit message within the push. You'd still want to make the tree clean\nsometime the same week or so. It's cheap to provide a break-glass like that.\n\n\n",
"msg_date": "Wed, 1 Feb 2023 22:28:38 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> Regarding the concern about a pre-receive hook blocking an emergency push, the\n> hook could approve every push where a string like \"pgindent: no\" appears in a\n> commit message within the push. You'd still want to make the tree clean\n> sometime the same week or so. It's cheap to provide a break-glass like that.\n\nI think the real question here is whether we can get all (or at least\na solid majority of) committers to accept such draconian constraints.\nI'd buy into it, and evidently so would you, but I can't help noting\nthat less than a quarter of active committers have bothered to\ncomment on this thread. I suspect the other three-quarters would\nbe quite annoyed if we tried to institute such requirements. That's\nnot manpower we can afford to drive away.\n\nMaybe this should get taken up at the this-time-for-sure developer\nmeeting at PGCon?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Feb 2023 01:40:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Thu, 2 Feb 2023 at 06:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Noah Misch <noah@leadboat.com> writes:\n> > Regarding the concern about a pre-receive hook blocking an emergency push, the\n> > hook could approve every push where a string like \"pgindent: no\" appears in a\n> > commit message within the push. You'd still want to make the tree clean\n> > sometime the same week or so. It's cheap to provide a break-glass like that.\n>\n> I think the real question here is whether we can get all (or at least\n> a solid majority of) committers to accept such draconian constraints.\n> I'd buy into it, and evidently so would you, but I can't help noting\n> that less than a quarter of active committers have bothered to\n> comment on this thread. I suspect the other three-quarters would\n> be quite annoyed if we tried to institute such requirements.\n>\n\nI didn't reply until now, but I'm solidly in the camp of committers\nwho care about keeping the tree properly indented, and I wouldn't have\nany problem with such a check being imposed.\n\nI regularly run pgindent locally, and if I ever commit without\nindenting, it's either intentional, or because I forgot, so the\nreminder would be useful.\n\nAnd as someone who runs pgindent regularly, I think this will be a net\ntime saver, since I won't have to skip over other unrelated indent\nchanges all the time.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 2 Feb 2023 11:34:37 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Fri, Feb 3, 2023 at 12:35 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> And as someone who runs pgindent regularly, I think this will be a net\n> time saver, since I won't have to skip over other unrelated indent\n> changes all the time.\n\n+1\n\n\n",
"msg_date": "Fri, 3 Feb 2023 00:40:14 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Thu, Feb 2, 2023 at 5:05 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Thu, 2 Feb 2023 at 06:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Noah Misch <noah@leadboat.com> writes:\n> > > Regarding the concern about a pre-receive hook blocking an emergency push, the\n> > > hook could approve every push where a string like \"pgindent: no\" appears in a\n> > > commit message within the push. You'd still want to make the tree clean\n> > > sometime the same week or so. It's cheap to provide a break-glass like that.\n> >\n> > I think the real question here is whether we can get all (or at least\n> > a solid majority of) committers to accept such draconian constraints.\n> > I'd buy into it, and evidently so would you, but I can't help noting\n> > that less than a quarter of active committers have bothered to\n> > comment on this thread. I suspect the other three-quarters would\n> > be quite annoyed if we tried to institute such requirements.\n> >\n>\n> I didn't reply until now, but I'm solidly in the camp of committers\n> who care about keeping the tree properly indented, and I wouldn't have\n> any problem with such a check being imposed.\n>\n> I regularly run pgindent locally, and if I ever commit without\n> indenting, it's either intentional, or because I forgot, so the\n> reminder would be useful.\n>\n> And as someone who runs pgindent regularly, I think this will be a net\n> time saver, since I won't have to skip over other unrelated indent\n> changes all the time.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Feb 2023 17:29:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-02-02 Th 01:40, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n>> Regarding the concern about a pre-receive hook blocking an emergency push, the\n>> hook could approve every push where a string like \"pgindent: no\" appears in a\n>> commit message within the push. You'd still want to make the tree clean\n>> sometime the same week or so. It's cheap to provide a break-glass like that.\n> I think the real question here is whether we can get all (or at least\n> a solid majority of) committers to accept such draconian constraints.\n> I'd buy into it, and evidently so would you, but I can't help noting\n> that less than a quarter of active committers have bothered to\n> comment on this thread. I suspect the other three-quarters would\n> be quite annoyed if we tried to institute such requirements. That's\n> not manpower we can afford to drive away.\n\n\nI'd be very surprised if this caused any active committer to walk away\nfrom the project. Many will probably appreciate the nudge. But maybe I'm\noveroptimistic.\n\n\n>\n> Maybe this should get taken up at the this-time-for-sure developer\n> meeting at PGCon?\n>\n> \t\t\t\n\n\nSure. There's probably some work that could still be done in this area\ntoo, such as making pgperltidy work similarly to how we've now make\npgindent work.\n\n\nThere's also a question of timing. Possibly the best time would be when\nwe next fork the tree.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 2 Feb 2023 09:06:36 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Sure. There's probably some work that could still be done in this area\n> too, such as making pgperltidy work similarly to how we've now make\n> pgindent work.\n\nPerhaps. But before we commit to that, I'd like to see some tweaks to the\npgperltidy rules to make it less eager to revisit the formatting of lines\nclose to a change. Its current behavior will induce a lot of \"git blame\"\nnoise if we apply these same procedures to Perl code.\n\n(Should I mention reformat-dat-files?)\n\n> There's also a question of timing. Possibly the best time would be when\n> we next fork the tree.\n\nYeah. We have generally not wanted to do a mass indent except\nwhen there's a minimum amount of pending patches, ie after the last\nCF of a cycle. What I'd suggest is that we plan on doing a mass\nindent and then switch over to the new rules, right after the March\nCF closes. That gives us a couple months to nail down and test out\nthe new procedures before they go live.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Feb 2023 10:00:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-02 Th 10:00, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Sure. There's probably some work that could still be done in this area\n>> too, such as making pgperltidy work similarly to how we've now make\n>> pgindent work.\n> Perhaps. But before we commit to that, I'd like to see some tweaks to the\n> pgperltidy rules to make it less eager to revisit the formatting of lines\n> close to a change. Its current behavior will induce a lot of \"git blame\"\n> noise if we apply these same procedures to Perl code.\n\n\nI haven't done anything about that yet, but I have reworked the script\nso it's a lot more like pgindent, with --show-diff and --silent-diff\nmodes, and allowing a list of files to be indented on the command line.\nNon-perl files are filtered out from such a list.\n\n\n>\n> (Should I mention reformat-dat-files?)\n\n\nIf you want I can add those flags there too.\n\n\n>\n>> There's also a question of timing. Possibly the best time would be when\n>> we next fork the tree.\n> Yeah. We have generally not wanted to do a mass indent except\n> when there's a minimum amount of pending patches, ie after the last\n> CF of a cycle. What I'd suggest is that we plan on doing a mass\n> indent and then switch over to the new rules, right after the March\n> CF closes. That gives us a couple months to nail down and test out\n> the new procedures before they go live.\n>\n> \t\t\t\n\n\nWFM. Of course then we're not waiting for the developer meeting.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 2 Feb 2023 15:22:55 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-01-22 Su 17:47, Tom Lane wrote:\n> Andres Freund<andres@anarazel.de> writes:\n>> I strongly dislike it, I rarely get it right by hand - but it does have some\n>> benefit over aligning variable names based on the length of the type names as\n>> uncrustify/clang-format: In their approach an added local variable can cause\n>> all the other variables to be re-indented (and their initial value possibly\n>> wrapped). The fixed alignment doesn't have that issue.\n> Yeah. That's one of my biggest gripes about pgperltidy: if you insert\n> another assignment in a series of assignments, it is very likely to\n> reformat all the adjacent assignments because it thinks it's cool to\n> make all the equal signs line up. That's just awful. You can either\n> run pgperltidy on new code before committing, and accept that the feature\n> patch will touch a lot of lines it's not making real changes to (thereby\n> dirtying the \"git blame\" history) or not do so and thereby commit code\n> that's not passing tidiness checks. Let's *not* adopt any style that\n> causes similar things to start happening in our C code.\n\n\nModern versions of perltidy give you much more control over this, so \nmaybe we need to investigate the possibility of updating. See the latest \ndocco at \n<https://metacpan.org/dist/Perl-Tidy/view/bin/perltidy#Completely-turning-off-vertical-alignment-with-novalign>\n\nProbably we'd want to use something like\n\n--valign-exclusion-list='= => ,'\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 3 Feb 2023 11:44:46 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-01-22 Su 17:47, Tom Lane wrote:\n>> Yeah. That's one of my biggest gripes about pgperltidy: if you insert\n>> another assignment in a series of assignments, it is very likely to\n>> reformat all the adjacent assignments because it thinks it's cool to\n>> make all the equal signs line up. That's just awful.\n\n> Modern versions of perltidy give you much more control over this, so \n> maybe we need to investigate the possibility of updating.\n\nI have no objection to updating perltidy from time to time. I think the\nidea is just to make sure that we have an agreed-on version for everyone\nto use.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Feb 2023 12:52:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Thu, Feb 02, 2023 at 11:34:37AM +0000, Dean Rasheed wrote:\n> I didn't reply until now, but I'm solidly in the camp of committers\n> who care about keeping the tree properly indented, and I wouldn't have\n> any problem with such a check being imposed.\n\nSo do I. pgindent is part of my routine when it comes to all the\npatches I merge on HEAD, and having to clean up unrelated diffs in\nthe files touched after an indentation is always annoying.\n\nFWIW, I just use a script that does pgindent, pgperltidy, pgperlcritic\nand `make reformat-dat-files` in src/include/catalog.\n\n> I regularly run pgindent locally, and if I ever commit without\n> indenting, it's either intentional, or because I forgot, so the\n> reminder would be useful.\n> \n> And as someone who runs pgindent regularly, I think this will be a net\n> time saver, since I won't have to skip over other unrelated indent\n> changes all the time.\n\n+1.\n--\nMichael",
"msg_date": "Sat, 4 Feb 2023 11:38:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Fri, Feb 03, 2023 at 12:52:50PM -0500, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 2023-01-22 Su 17:47, Tom Lane wrote:\n> >> Yeah. That's one of my biggest gripes about pgperltidy: if you insert\n> >> another assignment in a series of assignments, it is very likely to\n> >> reformat all the adjacent assignments because it thinks it's cool to\n> >> make all the equal signs line up. That's just awful.\n> \n> > Modern versions of perltidy give you much more control over this, so \n> > maybe we need to investigate the possibility of updating.\n> \n> I have no objection to updating perltidy from time to time. I think the\n> idea is just to make sure that we have an agreed-on version for everyone\n> to use.\n\nAgreed. If we're changing the indentation of assignments, that's a\nconsiderable diff already. It would be a good time to absorb other diffs\nwe'll want eventually, including diffs from a perltidy version upgrade.\n\n\n",
"msg_date": "Fri, 3 Feb 2023 20:18:48 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-03 20:18:48 -0800, Noah Misch wrote:\n> On Fri, Feb 03, 2023 at 12:52:50PM -0500, Tom Lane wrote:\n> > Andrew Dunstan <andrew@dunslane.net> writes:\n> > > On 2023-01-22 Su 17:47, Tom Lane wrote:\n> > >> Yeah. That's one of my biggest gripes about pgperltidy: if you insert\n> > >> another assignment in a series of assignments, it is very likely to\n> > >> reformat all the adjacent assignments because it thinks it's cool to\n> > >> make all the equal signs line up. That's just awful.\n> > \n> > > Modern versions of perltidy give you much more control over this, so \n> > > maybe we need to investigate the possibility of updating.\n> > \n> > I have no objection to updating perltidy from time to time. I think the\n> > idea is just to make sure that we have an agreed-on version for everyone\n> > to use.\n> \n> Agreed. If we're changing the indentation of assignments, that's a\n> considerable diff already. It would be a good time to absorb other diffs\n> we'll want eventually, including diffs from a perltidy version upgrade.\n\nISTM that we're closer to being able to enforce pgindent than\nperltidy. At the same time, I think the issue of C code in HEAD not\nbeing indented is more pressing - IME it's much more common to have to\ntouch a lot of C code than to have to touch a lot of perl files. So\nperhaps we should just start with being more stringent with C code, and\nonce we made perltidy less noisy, add that?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 4 Feb 2023 03:34:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-04 Sa 06:34, Andres Freund wrote:\n>\n> ISTM that we're closer to being able to enforce pgindent than\n> perltidy. At the same time, I think the issue of C code in HEAD not\n> being indented is more pressing - IME it's much more common to have to\n> touch a lot of C code than to have to touch a lot fo perl files. So\n> perhaps we should just start with being more stringent with C code, and\n> once we made perltidy less noisy, add that?\n>\n\nSure, we don't have to tie them together.\n\nI'm currently experimenting with settings on the buildfarm code, trying \nto get it both stable and looking nice. Then I'll try on the Postgres \ncore code and post some results.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 4 Feb 2023 09:20:26 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> ISTM that we're closer to being able to enforce pgindent than\n> perltidy. At the same time, I think the issue of C code in HEAD not\n> being indented is more pressing - IME it's much more common to have to\n> touch a lot of C code than to have to touch a lot fo perl files. So\n> perhaps we should just start with being more stringent with C code, and\n> once we made perltidy less noisy, add that?\n\nAgreed, we should move more slowly with perltidy. Aside from the\npoints you raise, I bet fewer committers have it installed at all.\n\n(I haven't forgotten that I'm on the hook to import pg_bsd_indent\ninto our tree. Will get to that soon.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Feb 2023 11:07:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Feb 04, 2023 at 11:07:59AM -0500, Tom Lane wrote:\n> (I haven't forgotten that I'm on the hook to import pg_bsd_indent\n> into our tree. Will get to that soon.)\n\n+1 for that - it's no surprise that you have trouble convincing people\nto follow the current process:\n\n1) requires using a hacked copy of BSD indent; 2) which is stored\noutside the main repo; 3) is run via a perl script that itself mungles\nthe source code (because the only indent tool that can support the\nproject's style doesn't actually support what's needed); 4) and wants to\nretrieve a remote copy of typedefs.list (?). \n\nThe only thing that makes this scheme even remotely viable is that\napt.postgresql.org includes a package for pg-bsd-indent. I've used it\nonly a handful of times by running:\npg_bsd_indent -bad -bap -bbb -bc -bl -cli1 -cp33 -cdb -nce -d0 -di12 -nfc1 -i4 -l79 -lp -lpl -nip -npro -sac -tpg -ts4 -U .../typedefs.list\n\nThe perl wrapper is still a step too far for me (maybe it'd be tolerable\nif available as a build target).\n\nWould you want to make those the default options of the in-tree indent ?\nOr provide a shortcut like --postgresql ?\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 4 Feb 2023 11:11:01 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Would you want to make those the default options of the in-tree indent ?\n> Or provide a shortcut like --postgresql ?\n\nHmmm ... inserting all of those as the default options would likely\nmake it impossible to update pg_bsd_indent itself with anything like\nits current indent style (not that it's terribly consistent about\nthat). I could see inventing a --postgresql shortcut switch perhaps.\n\nBut it's not clear to me why you're allergic to the perl wrapper?\nIt's not like that's the only perl infrastructure in our build process.\nAlso, whether or not we could push some of what it does into pg_bsd_indent\nproper, I can't see pushing all of it (for instance, the very PG-specific\nlist of typedef exclusions).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Feb 2023 12:37:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-04 Sa 09:20, Andrew Dunstan wrote:\n>\n>\n> On 2023-02-04 Sa 06:34, Andres Freund wrote:\n>>\n>> ISTM that we're closer to being able to enforce pgindent than\n>> perltidy. At the same time, I think the issue of C code in HEAD not\n>> being indented is more pressing - IME it's much more common to have to\n>> touch a lot of C code than to have to touch a lot fo perl files. So\n>> perhaps we should just start with being more stringent with C code, and\n>> once we made perltidy less noisy, add that?\n>>\n>\n> Sure, we don't have to tie them together.\n>\n> I'm currently experimenting with settings on the buildfarm code, \n> trying to get it both stable and looking nice. Then I'll try on the \n> Postgres core code and post some results.\n>\n\nSo here's a diff made from running perltidy v20221112 with the \nadditional setting --valign-exclusion-list=\", = => || && if unless\"\n\nEssentially this abandons those bits of vertical alignment that tend to \ncatch us out when additions are made to the code.\n\nI think this will make the code much more maintainable and result in \nmuch less perltidy churn. It would also mean that it's far more likely \nthat what we would naturally code can be undisturbed by perltidy.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 5 Feb 2023 09:29:08 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Feb 4, 2023 at 12:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> But it's not clear to me why you're allergic to the perl wrapper?\n> It's not like that's the only perl infrastructure in our build process.\n> Also, whether or not we could push some of what it does into pg_bsd_indent\n> proper, I can't see pushing all of it (for instance, the very PG-specific\n> list of typedef exclusions).\n\nI don't mind that there is a script. I do mind that it's not that good\nof a script. There have been some improvements for which I am\ngrateful, like removing the thing where the first argument was taken\nas a typedefs file under some circumstances. But there are still some\nthings that I would like:\n\n1. I'd like to be able to run pgindent src/include and have it indent\neverything relevant under src/include. Right now that silently does\nnothing.\n\n2. I'd like an easy way to indent the unstaged files in the current\ndirectory (e.g. pgindent --dirty) or the files that have been queued\nup for commit (e.g. pgindent --cached).\n\n3. I'd also like an easy way to indent every file touched by a recent\ncommit, e.g. pgindent --commit HEAD, pgindent --commit HEAD~2,\npgindent --commit 62e1e28bf7.\n\nIt'd be much less annoying to include this in my workflow with these\nkinds of options. For instance:\n\npatch -p1 < ~/Downloads/some_stuff_v94.patch\n# committer adjustments as desired\ngit add -u\npgindent --cached\ngit diff # did pgindent change anything? does it look ok?\ngit commit -a\n\nFor a while I, like some others here, was trying to be religious about\npgindenting at least the bigger commits that I pushed. But I fear I've\ngrown slack. I don't mind if we tighten up the process, but the better\nwe make the tools, the less friction it will cause.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 Feb 2023 09:40:02 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I don't mind that there is a script. I do mind that it's not that good\n> of a script. There have been some improvements for which I am\n> grateful, like removing the thing where the first argument was taken\n> as a typedefs file under some circumstances. But there are still some\n> things that I would like:\n\nI have no objection to someone coding those things up ;-).\nI'll just note that adding features like those to a Perl script\nis going to be a ton easier than doing it inside pg_bsd_indent.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Feb 2023 10:16:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-06 Mo 09:40, Robert Haas wrote:\n> On Sat, Feb 4, 2023 at 12:37 PM Tom Lane<tgl@sss.pgh.pa.us> wrote:\n>> But it's not clear to me why you're allergic to the perl wrapper?\n>> It's not like that's the only perl infrastructure in our build process.\n>> Also, whether or not we could push some of what it does into pg_bsd_indent\n>> proper, I can't see pushing all of it (for instance, the very PG-specific\n>> list of typedef exclusions).\n> I don't mind that there is a script. I do mind that it's not that good\n> of a script. There have been some improvements for which I am\n> grateful, like removing the thing where the first argument was taken\n> as a typedefs file under some circumstances. But there are still some\n> things that I would like:\n>\n> 1. I'd like to be able to run pgindent src/include and have it indent\n> everything relevant under src/include. Right now that silently does\n> nothing.\n>\n> 2. I'd like an easy way to indent the unstaged files in the current\n> directory (e.g. pgindent --dirty) or the files that have been queued\n> up for commit (e.g. pgindent --cached).\n>\n> 3. I'd also like an easy way to indent every file touched by a recent\n> commit, e.g. pgindent --commit HEAD, pgindent --commit HEAD~2,\n> pgindent --commit 62e1e28bf7.\n\n\nGood suggestions. 1 and 3 seem fairly straightforward. I'll start on \nthose, and look into 2.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 6 Feb 2023 10:21:07 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Mon, Feb 6, 2023 at 10:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'll just note that adding features like those to a Perl script\n> is going to be a ton easier than doing it inside pg_bsd_indent.\n\n+1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 Feb 2023 10:35:57 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Mon, Feb 6, 2023 at 10:21 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> Good suggestions. 1 and 3 seem fairly straightforward. I'll start on those, and look into 2.\n\nThanks!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 Feb 2023 10:36:18 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-06 Mo 10:36, Robert Haas wrote:\n> On Mon, Feb 6, 2023 at 10:21 AM Andrew Dunstan<andrew@dunslane.net> wrote:\n>> Good suggestions. 1 and 3 seem fairly straightforward. I'll start on those, and look into 2.\n> Thanks!\n>\n\nHere's a quick patch for 1 and 3. Would also need to adjust the docco.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 6 Feb 2023 12:03:47 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 02.02.23 07:40, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n>> Regarding the concern about a pre-receive hook blocking an emergency push, the\n>> hook could approve every push where a string like \"pgindent: no\" appears in a\n>> commit message within the push. You'd still want to make the tree clean\n>> sometime the same week or so. It's cheap to provide a break-glass like that.\n> \n> I think the real question here is whether we can get all (or at least\n> a solid majority of) committers to accept such draconian constraints.\n> I'd buy into it, and evidently so would you, but I can't help noting\n> that less than a quarter of active committers have bothered to\n> comment on this thread. I suspect the other three-quarters would\n> be quite annoyed if we tried to institute such requirements. That's\n> not manpower we can afford to drive away.\n\nI have some concerns about this.\n\nFirst, as a matter of principle, it would introduce another level of \ngatekeeping power. Right now, the committers are as a group in charge \nof what gets into the tree. Adding commit hooks that are installed \nsomewhere(?) by someone(?) and can only be seen by some(?) would upset \nthat. If we were using something like github or gitlab (not suggesting \nthat, but for illustration), then you could put this kind of thing under \n.github/ or similar and then it would be under the same control as the \nsource code itself.\n\nAlso, pgindent takes tens of seconds to run, so hooking that into the \ngit push process would slow this down quite a bit. And maybe we want to \nadd pgperltidy and so on, where would this lead? If somehow your local \nindenting doesn't give you the \"correct\" result for some reason, you \nmight sit there for minutes and minutes trying to fix and push and fix \nand push.\n\nThen, consider the typedefs issue. 
If you add a typedef but don't add \nit to the typedefs list but otherwise pgindent your code perfectly, the \npush would be accepted. If then later someone updates the typedefs \nlist, perhaps from the build farm, it would then reject the indentation \nof your previously committed code, thus making it their problem.\n\nI think a better way to address these issues would be making this into a \ntest suite, so that you can run some command that checks \"is everything \nindented correctly\". Then you can run this locally, on the build farm, \nin the cfbot etc. in a uniform way and apply the existing \nblaming/encouragement processes like for any other test failure.\n\n\n\n",
"msg_date": "Mon, 6 Feb 2023 18:17:02 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-06 Mo 12:03, Andrew Dunstan wrote:\n>\n>\n> On 2023-02-06 Mo 10:36, Robert Haas wrote:\n>> On Mon, Feb 6, 2023 at 10:21 AM Andrew Dunstan<andrew@dunslane.net> wrote:\n>>> Good suggestions. 1 and 3 seem fairly straightforward. I'll start on those, and look into 2.\n>> Thanks!\n>>\n>\n> Here's a quick patch for 1 and 3. Would also need to adjust the docco.\n>\n>\n\nThis time with patch.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 6 Feb 2023 12:53:44 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-06 Mo 12:17, Peter Eisentraut wrote:\n> On 02.02.23 07:40, Tom Lane wrote:\n>> Noah Misch <noah@leadboat.com> writes:\n>>> Regarding the concern about a pre-receive hook blocking an emergency \n>>> push, the\n>>> hook could approve every push where a string like \"pgindent: no\" \n>>> appears in a\n>>> commit message within the push. You'd still want to make the tree \n>>> clean\n>>> sometime the same week or so. It's cheap to provide a break-glass \n>>> like that.\n>>\n>> I think the real question here is whether we can get all (or at least\n>> a solid majority of) committers to accept such draconian constraints.\n>> I'd buy into it, and evidently so would you, but I can't help noting\n>> that less than a quarter of active committers have bothered to\n>> comment on this thread. I suspect the other three-quarters would\n>> be quite annoyed if we tried to institute such requirements. That's\n>> not manpower we can afford to drive away.\n>\n> I have some concerns about this.\n>\n> First, as a matter of principle, it would introduce another level of \n> gatekeeping power. Right now, the committers are as a group in charge \n> of what gets into the tree. Adding commit hooks that are installed \n> somewhere(?) by someone(?) and can only be seen by some(?) would upset \n> that. If we were using something like github or gitlab (not \n> suggesting that, but for illustration), then you could put this kind \n> of thing under .github/ or similar and then it would be under the same \n> control as the source code itself.\n>\n> Also, pgindent takes tens of seconds to run, so hooking that into the \n> git push process would slow this down quite a bit. And maybe we want \n> to add pgperltidy and so on, where would this lead? If somehow your \n> local indenting doesn't give you the \"correct\" result for some reason, \n> you might sit there for minutes and minutes trying to fix and push and \n> fix and push.\n\n\nWell, pgindent should produce canonical results or we're surely doing it \nwrong. Regarding the time it takes, if we are only indenting the changed \nfiles that time will be vastly reduced for most cases.\n\nBut I take your point to some extent. I think we should start by making \nit easier and quicker to run pgindent locally, both by hand and in local \ngit hooks, for ordinary developers and for committers, and we should \nencourage committers to be stricter in their use of pgindent. If there \nare features we need to make this possible, speak up (c.f. Robert's \nemail earlier today). I'm committed to making this as easy as possible \nfor people.\n\nOnce we get over those hurdles we can possibly revisit automation.\n\n\n>\n> Then, consider the typedefs issue. If you add a typedef but don't add \n> it to the typedefs list but otherwise pgindent your code perfectly, \n> the push would be accepted. If then later someone updates the \n> typedefs list, perhaps from the build farm, it would then reject the \n> indentation of your previously committed code, thus making it their \n> problem.\n\n\nIt would be nice if there were a gadget that would find new typedefs and \nwarn you about them. Unfortunately our current code to find typedefs \nisn't all that fast either. Nicer still would be a way of not needing \nthe typedefs list, but I don't think anyone has come up with one yet \nthat meets our other requirements.\n\n\n>\n> I think a better way to address these issues would be making this into \n> a test suite, so that you can run some command that checks \"is \n> everything indented correctly\". Then you can run this locally, on the \n> build farm, in the cfbot etc. in a uniform way and apply the existing \n> blaming/encouragement processes like for any other test failure.\n>\n>\n\nWell arguably the new --silent-diff and --show-diff modes are such tests :-)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 6 Feb 2023 16:13:24 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-06 18:17:02 +0100, Peter Eisentraut wrote:\n> First, as a matter of principle, it would introduce another level of\n> gatekeeping power. Right now, the committers are as a group in charge of\n> what gets into the tree. Adding commit hooks that are installed\n> somewhere(?) by someone(?) and can only be seen by some(?) would upset that.\n> If we were using something like github or gitlab (not suggesting that, but\n> for illustration), then you could put this kind of thing under .github/ or\n> similar and then it would be under the same control as the source code\n> itself.\n\nWell, we did talk about adding a pre-commit hook to the repository, with\ninstructions for how to enable it. And I don't see a problem with adding the\npre-receive we're discussing here to src/tools/something.\n\n\n> Also, pgindent takes tens of seconds to run, so hooking that into the git\n> push process would slow this down quite a bit. And maybe we want to add\n> pgperltidy and so on, where would this lead?\n\nYes, I've been annoyed by this as well. This is painful, even without a push\nhook. Not just for pgindent, headerscheck/cpluspluscheck are quite painful as\nwell. I came to the conclusion that we ought to integrate pgindent etc into\nthe buildsystem properly. Instead of running such targets serially across all\nfiles, without logic to prevent re-processing files, the relevant targets\nought to be run once for each process, and create a stamp file.\n\n\n\n> If somehow your local indenting doesn't give you the \"correct\" result for\n> some reason, you might sit there for minutes and minutes trying to fix and\n> push and fix and push.\n\nI was imagining that such a pre-receive hook would spit out the target that\nyou'd need to run locally to verify that the issue is resolved.\n\n\n> Then, consider the typedefs issue. If you add a typedef but don't add it to\n> the typedefs list but otherwise pgindent your code perfectly, the push would\n> be accepted. 
If then later someone updates the typedefs list, perhaps from\n> the build farm, it would then reject the indentation of your previously\n> committed code, thus making it their problem.\n\nI'd like to address this one via the buildsystem as well. We can do the\ntrickery that the buildfarm uses to extract typedefs as part of the build, and\nupdate typedefs.list with the additional types. There's really no need to\nforce us to do this manually.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Feb 2023 13:21:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-06 18:17:02 +0100, Peter Eisentraut wrote:\n>> First, as a matter of principle, it would introduce another level of\n>> gatekeeping power. Right now, the committers are as a group in charge of\n>> what gets into the tree. Adding commit hooks that are installed\n>> somewhere(?) by someone(?) and can only be seen by some(?) would upset that.\n>> If we were using something like github or gitlab (not suggesting that, but\n>> for illustration), then you could put this kind of thing under .github/ or\n>> similar and then it would be under the same control as the source code\n>> itself.\n\n> Well, we did talk about adding a pre-commit hook to the repository, with\n> instructions for how to enable it. And I don't see a problem with adding the\n> pre-receive we're discussing here to src/tools/something.\n\nYeah. I don't think we are seriously considering putting any restrictions\nin place on gitmaster --- the idea is to offer better tools to committers\nto let them check/fix the indentation of what they are working on. If\nsomebody wants to run that as a local pre-commit hook, that's their choice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Feb 2023 16:36:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Mon, Feb 06, 2023 at 06:17:02PM +0100, Peter Eisentraut wrote:\n> Also, pgindent takes tens of seconds to run, so hooking that into the git\n> push process would slow this down quite a bit.\n\nThe pre-receive hook would do a full pgindent when you change typedefs.list.\nOtherwise, it would reindent only the files being changed. The average push\nneed not take tens of seconds.\n\n> If somehow your local\n> indenting doesn't give you the \"correct\" result for some reason, you might\n> sit there for minutes and minutes trying to fix and push and fix and push.\n\nAs Andres mentioned, the hook could print the command it used. It could even\nprint the diff it found, for you to apply.\n\nOn Mon, Feb 06, 2023 at 04:36:07PM -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-02-06 18:17:02 +0100, Peter Eisentraut wrote:\n> >> First, as a matter of principle, it would introduce another level of\n> >> gatekeeping power. Right now, the committers are as a group in charge of\n> >> what gets into the tree. Adding commit hooks that are installed\n> >> somewhere(?) by someone(?) and can only be seen by some(?) would upset that.\n> >> If we were using something like github or gitlab (not suggesting that, but\n> >> for illustration), then you could put this kind of thing under .github/ or\n> >> similar and then it would be under the same control as the source code\n> >> itself.\n> \n> > Well, we did talk about adding a pre-commit hook to the repository, with\n> > instructions for how to enable it. And I don't see a problem with adding the\n> > pre-receive we're discussing here to src/tools/something.\n> \n> Yeah. I don't think we are seriously considering putting any restrictions\n> in place on gitmaster\n\nI could have sworn that was exactly what we were discussing, a pre-receive\nhook on gitmaster.\n\n\n",
"msg_date": "Mon, 6 Feb 2023 20:43:24 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-06 Mo 23:43, Noah Misch wrote:\n>>\n>>> Well, we did talk about adding a pre-commit hook to the repository, with\n>>> instructions for how to enable it. And I don't see a problem with adding the\n>>> pre-receive we're discussing here to src/tools/something.\n>> Yeah. I don't think we are seriously considering putting any restrictions\n>> in place on gitmaster\n> I could have sworn that was exactly what we were discussing, a pre-receive\n> hook on gitmaster.\n>\n\nThat's one idea that's been put forward, but it seems clear that some \npeople are nervous about it.\n\nMaybe a better course would be to continue improving the toolset and get \nmore people comfortable with using it locally and then talk about \nintegrating it upstream.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 7 Feb 2023 06:45:45 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 5:16 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 2023-02-06 Mo 23:43, Noah Misch wrote:\n>\n>\n> Well, we did talk about adding a pre-commit hook to the repository, with\n> instructions for how to enable it. And I don't see a problem with adding the\n> pre-receive we're discussing here to src/tools/something.\n>\n> Yeah. I don't think we are seriously considering putting any restrictions\n> in place on gitmaster\n>\n> I could have sworn that was exactly what we were discussing, a pre-receive\n> hook on gitmaster.\n>\n>\n> That's one idea that's been put forward, but it seems clear that some people are nervous about it.\n>\n> Maybe a better course would be to continue improving the toolset and get more people comfortable with using it locally and then talk about integrating it upstream.\n>\n\nYeah, that sounds more reasonable to me as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 7 Feb 2023 18:26:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 1:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Feb 7, 2023 at 5:16 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >\n> > On 2023-02-06 Mo 23:43, Noah Misch wrote:\n> >\n> >\n> > Well, we did talk about adding a pre-commit hook to the repository, with\n> > instructions for how to enable it. And I don't see a problem with adding\n> the\n> > pre-receive we're discussing here to src/tools/something.\n> >\n> > Yeah. I don't think we are seriously considering putting any\n> restrictions\n> > in place on gitmaster\n> >\n> > I could have sworn that was exactly what we were discussing, a\n> pre-receive\n> > hook on gitmaster.\n> >\n> >\n> > That's one idea that's been put forward, but it seems clear that some\n> people are nervous about it.\n> >\n> > Maybe a better course would be to continue improving the toolset and get\n> more people comfortable with using it locally and then talk about\n> integrating it upstream.\n> >\n>\n> Yeah, that sounds more reasonable to me as well.\n>\n\nIf we wanted something \"in between\" we could perhaps also have a async ci\njob that runs after each commit and sends an emali to the committer if the\ncommit doesn't match up, instead of rejecting it hard but still getting\nsome relatively fast feedback.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Tue, 7 Feb 2023 13:59:53 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-06 Mo 09:40, Robert Haas wrote:\n> 2. I'd like an easy way to indent the unstaged files in the current\n> directory (e.g. pgindent --dirty) or the files that have been queued\n> up for commit (e.g. pgindent --cached).\n>\n\nMy git-fu is probably not all that it should be. I think we could \npossibly get at this list of files by running\n\n git status --porcelain --untracked-files=no --ignored=no -- .\n\nAnd then your --dirty list would be lines beginning with ' M' while your \n--cached list would be lines beginning with 'A[ M]'\n\nDoes that seem plausible?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 7 Feb 2023 08:17:49 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-07 Tu 07:59, Magnus Hagander wrote:\n>\n>\n> On Tue, Feb 7, 2023 at 1:56 PM Amit Kapila <amit.kapila16@gmail.com> \n> wrote:\n>\n> On Tue, Feb 7, 2023 at 5:16 PM Andrew Dunstan\n> <andrew@dunslane.net> wrote:\n> >\n> > On 2023-02-06 Mo 23:43, Noah Misch wrote:\n> >\n> >\n> > Well, we did talk about adding a pre-commit hook to the\n> repository, with\n> > instructions for how to enable it. And I don't see a problem\n> with adding the\n> > pre-receive we're discussing here to src/tools/something.\n> >\n> > Yeah. I don't think we are seriously considering putting any\n> restrictions\n> > in place on gitmaster\n> >\n> > I could have sworn that was exactly what we were discussing, a\n> pre-receive\n> > hook on gitmaster.\n> >\n> >\n> > That's one idea that's been put forward, but it seems clear that\n> some people are nervous about it.\n> >\n> > Maybe a better course would be to continue improving the toolset\n> and get more people comfortable with using it locally and then\n> talk about integrating it upstream.\n> >\n>\n> Yeah, that sounds more reasonable to me as well.\n>\n>\n> If we wanted something \"in between\" we could perhaps also have a async \n> ci job that runs after each commit and sends an emali to the committer \n> if the commit doesn't match up, instead of rejecting it hard but still \n> getting some relatively fast feedback.\n\n\nSure, worth trying. We can always turn it off and no harm done if it \ndoesn't suit. I'd probably start by having it email a couple of guinea \npigs like you and me before turning it loose on committers generally. \nLMK if you need help with it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 7 Feb 2023 08:33:33 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Tue, Feb 7, 2023 at 5:16 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> On 2023-02-06 Mo 23:43, Noah Misch wrote:\n>>>> Yeah. I don't think we are seriously considering putting any restrictions\n>>>> in place on gitmaster\n\n>>> I could have sworn that was exactly what we were discussing, a pre-receive\n>>> hook on gitmaster.\n\n>> That's one idea that's been put forward, but it seems clear that some people are nervous about it.\n>> Maybe a better course would be to continue improving the toolset and get more people comfortable with using it locally and then talk about integrating it upstream.\n\n> Yeah, that sounds more reasonable to me as well.\n\n+1. Even if we end up with such a hook, we need to walk before we\ncan run. The tooling of which we speak doesn't even exist today,\nso it's unlikely to be bug-free, fast, and convenient to use\njust two months from now. Let's have some people use whatever\nis proposed for awhile locally, and see what their experience is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Feb 2023 10:17:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Feb 04, 2023 at 12:37:11PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n\n> Hmmm ... inserting all of those as the default options would likely\n> make it impossible to update pg_bsd_indent itself with anything like\n> its current indent style (not that it's terribly consistent about\n> that). I could see inventing a --postgresql shortcut switch perhaps.\n\nOr you could add ./.indent.pro, or ./src/tools/indent.profile for it to\nread.\n\n> > Would you want to make those the default options of the in-tree indent ?\n> > Or provide a shortcut like --postgresql ?\n> \n> But it's not clear to me why you're allergic to the perl wrapper?\n\nMy allergy is to the totality of the process, not to the perl component.\nIt's a bit weird to enforce a coding style that no upstream indent tool\nsupports. But what's weirder is that, *having forked the indent tool*,\nit still doesn't implement the desired style, and the perl wrapper tries\nto work around that.\n\nIt would be more reasonable if the forked C program knew how to handle\nthe stuff for which the perl script currently has kludges to munge the\nsource code before indenting and then un-munging afterwards.\n\nOr if the indentation were handled by the (or a) perl script itself.\n\nOr if the perl script handled everything that an unpatched \"ident\"\ndidn't handle, rather than some things but not others, demanding use of\na patched indent as well as a perl wrapper (not just for looping around\nfiles and fancy high-level shortcuts like indenting staged files).\n\nOn the one hand, \"indent\" ought to handle all the source-munging stuff.\nOn the other hand, it'd be better to use an unpatched indent tool. The\ncurrent way is the worst of both worlds.\n\nCurrently, the perl wrapper supports the \"/* -\"-style comments that\npostgres wants to use (why?) by munging the source code. That could be\nsupported in pg-bsd-indent with a one line change. 
I think an even\nbetter option would be to change postgres' C files to use \"/*-\" without\na space, which requires neither perl munging nor patching indent.\n\nOn a less critical note, I wonder if it's a good idea to import\npgbsdindent as a git \"submodule\".\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 7 Feb 2023 09:25:42 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sat, Feb 04, 2023 at 12:37:11PM -0500, Tom Lane wrote:\n>> But it's not clear to me why you're allergic to the perl wrapper?\n\n> My allergy is to the totality of the process, not to the perl component.\n> It's a bit weird to enforce a coding style that no upstream indent tool\n> supports. But what's weirder is that, *having forked the indent tool*,\n> it still doesn't implement the desired style, and the perl wrapper tries\n> to work around that.\n\n> It would be more reasonable if the forked C program knew how to handle\n> the stuff for which the perl script currently has kludges to munge the\n> source code before indenting and then un-munging afterwards.\n\n[ shrug... ] If you want to put cycles into that, nobody is stopping\nyou. For me, it sounds like make-work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Feb 2023 10:46:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-07 Tu 10:25, Justin Pryzby wrote:\n> On Sat, Feb 04, 2023 at 12:37:11PM -0500, Tom Lane wrote:\n>> Justin Pryzby<pryzby@telsasoft.com> writes:\n>> Hmmm ... inserting all of those as the default options would likely\n>> make it impossible to update pg_bsd_indent itself with anything like\n>> its current indent style (not that it's terribly consistent about\n>> that). I could see inventing a --postgresql shortcut switch perhaps.\n> Or you could add ./.indent.pro, or ./src/tools/indent.profile for it to\n> read.\n>\n>>> Would you want to make those the default options of the in-tree indent ?\n>>> Or provide a shortcut like --postgresql ?\n>> But it's not clear to me why you're allergic to the perl wrapper?\n> My allergy is to the totality of the process, not to the perl component.\n> It's a bit weird to enforce a coding style that no upstream indent tool\n> supports. But what's weirder is that, *having forked the indent tool*,\n> it still doesn't implement the desired style, and the perl wrapper tries\n> to work around that.\n>\n> It would be more reasonable if the forked C program knew how to handle\n> the stuff for which the perl script currently has kludges to munge the\n> source code before indenting and then un-munging afterwards.\n>\n> Or if the indentation were handled by the (or a) perl script itself.\n>\n> Or if the perl script handled everything that an unpatched \"ident\"\n> didn't handle, rather than some things but not others, demanding use of\n> a patched indent as well as a perl wrapper (not just for looping around\n> files and fancy high-level shortcuts like indenting staged files).\n>\n> On the one hand, \"indent\" ought to handle all the source-munging stuff.\n> On the other hand, it'd be better to use an unpatched indent tool. The\n> current way is the worst of both worlds.\n>\n> Currently, the perl wrapper supports the \"/* -\"-style comments that\n> postgres wants to use (why?) by munging the source code. 
That could be\n> supported in pg-bsd-indent with a one line change. I think an even\n> better option would be to change postgres' C files to use \"/*-\" without\n> a space, which requires neither perl munging nor patching indent.\n\n\nHistorically we used to do a heck of a lot more in pgindent that is \ncurrently done in the pre_indent and post_indent functions. If you want \nto spend time implementing that logic in pg_bsd_indent so we can remove \nthe remaining bits of that processing then go for it.\n\n\n> On a less critical note, I wonder if it's a good idea to import\n> pgbsdindent as a git \"submodule\".\n\n\nMeh, git submodules can be a pain in the neck in my limited experience. \nI'd rather steer clear.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Tue, 7 Feb 2023 10:46:51 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 8:17 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> My git-fu is probably not all that it should be. I think we could possibly get at this list of files by running\n>\n> git status --porcelain --untracked-files=no --ignored=no -- .\n>\n> And then your --dirty list would be lines beginning with ' M' while your --cached list would be lines beginning with 'A[ M]'\n>\n> Does that seem plausible?\n\nI don't know if that works or not, but it does seem plausible, at\nleast. My idea would have been to use the --name-status option, which\nworks for both git diff and git show. You just look and see which\nlines in the output start with M or A and then take the file names\nfrom those lines.\n\nSo to indent files that are dirty, you would look at:\n\ngit diff --name-status\n\nFor what's cached:\n\ngit diff --name-status --cached\n\nFor the combination of the two:\n\ngit diff --name-status HEAD\n\nFor a prior commit:\n\ngit show --name-status $COMMITID\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 7 Feb 2023 11:10:56 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, 7 Feb 2023 at 17:11, Robert Haas <robertmhaas@gmail.com> wrote:\n> I don't know if that works or not, but it does seem plausible, at\n> least. My idea would have been to use the --name-status option, which\n> works for both git diff and git show. You just look and see which\n> lines in the output start with M or A and then take the file names\n> from those lines.\n\nIf you add `--diff-filter=ACMR`, then git diff/show will only show\nAdded, Copied, Modified, and Renamed files.\n\nThe pre-commit hook that Andrew added to the wiki uses that in\ncombination with --name-only to get the list of files that you want to\ncheck on commit:\nhttps://wiki.postgresql.org/wiki/Working_with_Git#Using_git_hooks\n\n\n",
"msg_date": "Tue, 7 Feb 2023 17:32:30 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 11:32 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n> On Tue, 7 Feb 2023 at 17:11, Robert Haas <robertmhaas@gmail.com> wrote:\n> > I don't know if that works or not, but it does seem plausible, at\n> > least. My idea would have been to use the --name-status option, which\n> > works for both git diff and git show. You just look and see which\n> > lines in the output start with M or A and then take the file names\n> > from those lines.\n>\n> If you add `--diff-filter=ACMR`, then git diff/show will only show\n> Added, Copied, Modified, and Renamed files.\n>\n> The pre-commit hook that Andrew added to the wiki uses that in\n> combination with --name-only to get the list of files that you want to\n> check on commit:\n> https://wiki.postgresql.org/wiki/Working_with_Git#Using_git_hooks\n\nThanks, that sounds nicer than what I suggested.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 7 Feb 2023 11:57:31 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "> On Mon, Feb 6, 2023 at 10:21 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> Here's a quick patch for 1 and 3. Would also need to adjust the docco.\n>\n>\n>\n> This time with patch.\n\nWhen supplying the --commit flag it still formats all files for me. I\nwas able to fix that by replacing:\n# no non-option arguments given. so do everything in the current directory\n$code_base ||= '.' unless @ARGV;\n\nwith:\n# no files, dirs or commits given. so do everything in the current directory\n$code_base ||= '.' unless @ARGV || @commits;\n\nDoes the code-base flag still make sense if you can simply pass a\ndirectory as regular args now?\n\n\n",
"msg_date": "Tue, 7 Feb 2023 18:21:19 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-07 Tu 12:21, Jelte Fennema wrote:\n>> On Mon, Feb 6, 2023 at 10:21 AM Andrew Dunstan<andrew@dunslane.net> wrote:\n>>\n>> Here's a quick patch for 1 and 3. Would also need to adjust the docco.\n>>\n>>\n>>\n>> This time with patch.\n> When supplying the --commit flag it still formats all files for me. I\n> was able to fix that by replacing:\n> # no non-option arguments given. so do everything in the current directory\n> $code_base ||= '.' unless @ARGV;\n>\n> with:\n> # no files, dirs or commits given. so do everything in the current directory\n> $code_base ||= '.' unless @ARGV || @commits;\n\n\nYeah, thanks for testing. Here's a new patch with that change and the \ncomment adjusted.\n\n\n>\n> Does the code-base flag still make sense if you can simply pass a\n> directory as regular args now?\n\n\nProbably not. I'll look into removing it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Wed, 8 Feb 2023 07:41:59 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-08 We 07:41, Andrew Dunstan wrote:\n>\n>\n> On 2023-02-07 Tu 12:21, Jelte Fennema wrote:\n>\n>\n>> Does the code-base flag still make sense if you can simply pass a\n>> directory as regular args now?\n>\n>\n> Probably not. I'll look into removing it.\n>\n>\n>\n\nWhat we should probably do is remove all the build stuff along with \n$code_base. It dates back to the time when I developed this as an out of \ntree replacement for the old pgindent, and is just basically wasted \nspace now. After I get done with the current round of enhancements I'll \nreorganize the script and get rid of things we don't need any more.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Wed, 8 Feb 2023 08:27:07 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "With the new patch --commit works as expected for me now. And sounds\ngood to up the script a bit afterwards.\n\nOn Wed, 8 Feb 2023 at 14:27, Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2023-02-08 We 07:41, Andrew Dunstan wrote:\n>\n>\n> On 2023-02-07 Tu 12:21, Jelte Fennema wrote:\n>\n>\n> Does the code-base flag still make sense if you can simply pass a\n> directory as regular args now?\n>\n>\n> Probably not. I'll look into removing it.\n>\n>\n>\n>\n> What we should probably do is remove all the build stuff along with $code_base. It dates back to the time when I developed this as an out of tree replacement for the old pgindent, and is just basically wasted space now. After I get done with the current round of enhancements I'll reorganize the script and get rid of things we don't need any more.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 Feb 2023 18:06:22 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-08 We 12:06, Jelte Fennema wrote:\n> With the new patch --commit works as expected for me now. And sounds\n> good to up the script a bit afterwards.\n>\n>\n\nThanks, I have committed this. Still looking at Robert's other request.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Wed, 8 Feb 2023 17:09:45 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\r\nI tried the committed pgindent.\r\nThe attached small patch changes spaces in the usage message to tabs.\r\nOptions other than --commit start with a tab.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\nFrom: Andrew Dunstan <andrew@dunslane.net>\r\nSent: Thursday, February 9, 2023 7:10 AM\r\nTo: Jelte Fennema <postgres@jeltef.nl>\r\nCc: Robert Haas <robertmhaas@gmail.com>; Tom Lane <tgl@sss.pgh.pa.us>; Justin Pryzby <pryzby@telsasoft.com>; Andres Freund <andres@anarazel.de>; Noah Misch <noah@leadboat.com>; Peter Geoghegan <pg@bowt.ie>; Bruce Momjian <bruce@momjian.us>; Magnus Hagander <magnus@hagander.net>; Alvaro Herrera <alvherre@2ndquadrant.com>; Stephen Frost <sfrost@snowman.net>; Jesse Zhang <sbjesse@gmail.com>; pgsql-hackers@postgresql.org\r\nSubject: Re: run pgindent on a regular basis / scripted manner\r\n\r\n\r\n\r\nOn 2023-02-08 We 12:06, Jelte Fennema wrote:\r\n\r\nWith the new patch --commit works as expected for me now. And sounds\r\n\r\ngood to up the script a bit afterwards.\r\n\r\n\r\n\r\n\r\n\r\n\r\nThanks, I have committed this. Still looking at Robert's other request.\r\n\r\n\r\n\r\ncheers\r\n\r\n\r\n\r\nandrew\r\n\r\n--\r\n\r\nAndrew Dunstan\r\n\r\nEDB: https://www.enterprisedb.com<https://www.enterprisedb.com>",
"msg_date": "Thu, 9 Feb 2023 02:29:41 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": false,
"msg_subject": "RE: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-08 We 21:29, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n>\n> Hi,\n> I tried the committed pgindent.\n> The attached small patch changes spaces in the usage message to tabs.\n> Options other than --commit start with a tab.\n>\n\nThanks, pushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Thu, 9 Feb 2023 13:34:34 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Thu, Feb 9, 2023 6:10 AM Andrew Dunstan <andrew@dunslane.net> wrote:\r\n> Thanks, I have committed this. Still looking at Robert's other request.\r\n>\r\n\r\nHi,\r\n\r\nI tried the new option --commit and found that it seems to try to indent files\r\nwhich are deleted in the specified commit and reports an error.\r\n \r\ncannot open file \"src/backend/access/brin/brin.c\": No such file or directory\r\n\r\nIt looks we should filter such files.\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Fri, 10 Feb 2023 02:37:40 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Ah yes, I had seen that when I read the initial --commit patch but\nthen forgot about it when the flag didn't work at all when I tried it.\n\nAttached is a patch that fixes the issue. And also implements the\n--dirty and --staged flags in pgindent that Robert Haas requested.\n\nOn Fri, 10 Feb 2023 at 03:37, shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Thu, Feb 9, 2023 6:10 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > Thanks, I have committed this. Still looking at Robert's other request.\n> >\n>\n> Hi,\n>\n> I tried the new option --commit and found that it seems to try to indent files\n> which are deleted in the specified commit and reports an error.\n>\n> cannot open file \"src/backend/access/brin/brin.c\": No such file or directory\n>\n> It looks we should filter such files.\n>\n> Regards,\n> Shi Yu",
"msg_date": "Fri, 10 Feb 2023 10:25:44 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-07 Tu 11:32, Jelte Fennema wrote:\n> On Tue, 7 Feb 2023 at 17:11, Robert Haas<robertmhaas@gmail.com> wrote:\n>> I don't know if that works or not, but it does seem plausible, at\n>> least. My idea would have been to use the --name-status option, which\n>> works for both git diff and git show. You just look and see which\n>> lines in the output start with M or A and then take the file names\n>> from those lines.\n> If you add `--diff-filter=ACMR`, then git diff/show will only show\n> Added, Copied, Modified, and Renamed files.\n>\n> The pre-commit hook that Andrew added to the wiki uses that in\n> combination with --name-only to get the list of files that you want to\n> check on commit:\n> https://wiki.postgresql.org/wiki/Working_with_Git#Using_git_hooks\n\n\nOK, here's a patch based on Robert's and Jelte's ideas.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Fri, 10 Feb 2023 09:26:50 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-10 Fr 04:25, Jelte Fennema wrote:\n> Ah yes, I had seen that when I read the initial --commit patch but\n> then forgot about it when the flag didn't work at all when I tried it.\n>\n> Attached is a patch that fixes the issue. And also implements the\n> --dirty and --staged flags in pgindent that Robert Haas requested.\n\n\n[please don't top-post]\n\n\nI don't think just adding a diff filter is really a sufficient fix. The \nfile might have been deleted since the commit(s) in question. Here's a \nmore general fix for missing files.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Fri, 10 Feb 2023 10:21:49 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-10 Fr 10:21, Andrew Dunstan wrote:\n>\n>\n> On 2023-02-10 Fr 04:25, Jelte Fennema wrote:\n>> Ah yes, I had seen that when I read the initial --commit patch but\n>> then forgot about it when the flag didn't work at all when I tried it.\n>>\n>> Attached is a patch that fixes the issue. And also implements the\n>> --dirty and --staged flags in pgindent that Robert Haas requested.\n>\n>\n>\n> I don't think just adding a diff filter is really a sufficient fix. \n> The file might have been deleted since the commit(s) in question. \n> Here's a more general fix for missing files.\n>\n\nOK, I've pushed this along with a check to make sure we only process \neach file once.\n\n\nI'm not sure how much more I really want to do here. Given the way \npgindent now processes command line arguments, maybe the best thing is \nfor people to use that. Use of git aliases can help. Something like \nthese for example\n\n\n[alias]\n\n dirty = diff --name-only --diff-filter=ACMU -- .\n staged = diff --name-only --cached --diff-filter=ACMU -- .\n dstaged = diff --name-only --diff-filter=ACMU HEAD -- .\n\n\nand then you could do\n\n pgindent `git dirty`\n\n\nThe only danger would be if there were no dirty files. Maybe we need a \nswitch to inhibit using the current directory if there are no command \nline files.\n\n\nThoughts?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Sun, 12 Feb 2023 09:16:25 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> ... then you could do\n> pgindent `git dirty`\n> The only danger would be if there were no dirty files. Maybe we need a \n> switch to inhibit using the current directory if there are no command \n> line files.\n\nIt seems like \"indent the whole tree\" is about to become a minority\nuse-case. Maybe instead of continuing to privilege that case, we\nshould say that it's invoked by some new switch like --all-files,\nand without that only the stuff identified by command-line arguments\ngets processed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 Feb 2023 11:24:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-12 Su 11:24, Tom Lane wrote:\n> Andrew Dunstan<andrew@dunslane.net> writes:\n>> ... then you could do\n>> pgindent `git dirty`\n>> The only danger would be if there were no dirty files. Maybe we need a\n>> switch to inhibit using the current directory if there are no command\n>> line files.\n> It seems like \"indent the whole tree\" is about to become a minority\n> use-case. Maybe instead of continuing to privilege that case, we\n> should say that it's invoked by some new switch like --all-files,\n> and without that only the stuff identified by command-line arguments\n> gets processed.\n>\n> \t\t\t\n\n\nI don't think we need --all-files. The attached gets rid of the build \nand code-base cruft, which is now in any case obsolete given we've put \npg_bsd_indent in our code base. So the way to spell this instead of \n\"pgindent --all-files\" would be \"pgindent .\"\n\nI added a warning if there are no files at all specified.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Sun, 12 Feb 2023 15:41:56 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sun, Feb 12, 2023 at 11:24:14AM -0500, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > ... then you could do\n> >     pgindent `git dirty`\n> > The only danger would be if there were no dirty files. Maybe we need a \n> > switch to inhibit using the current directory if there are no command \n> > line files.\n> \n> It seems like \"indent the whole tree\" is about to become a minority\n> use-case. Maybe instead of continuing to privilege that case, we\n> should say that it's invoked by some new switch like --all-files,\n> and without that only the stuff identified by command-line arguments\n> gets processed.\n\nIt seems like if pgindent knows about git, it ought to process only\ntracked files. Then, it wouldn't need to manually exclude generated\nfiles, and it wouldn't process vpath builds and who-knows-what else it\nfinds in CWD.\n\nAt least --commit doesn't seem to work when run outside of the root\nsource dir.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 12 Feb 2023 14:59:36 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-02-12 Su 11:24, Tom Lane wrote:\n>> It seems like \"indent the whole tree\" is about to become a minority\n>> use-case. Maybe instead of continuing to privilege that case, we\n>> should say that it's invoked by some new switch like --all-files,\n>> and without that only the stuff identified by command-line arguments\n>> gets processed.\n\n> I don't think we need --all-files. The attached gets rid of the build \n> and code-base cruft, which is now in any case obsolete given we've put \n> pg_bsd_indent in our code base. So the way to spell this instead of \n> \"pgindent --all-files\" would be \"pgindent .\"\n\nAh, of course.\n\n> I added a warning if there are no files at all specified.\n\nLGTM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 Feb 2023 16:13:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-12 Su 15:59, Justin Pryzby wrote:\n> It seems like if pgindent knows about git, it ought to process only\n> tracked files. Then, it wouldn't need to manually exclude generated\n> files, and it wouldn't process vpath builds and who-knows-what else it\n> finds in CWD.\n\n\nfor vpath builds use an exclude file that excludes the vpath you use.\n\nI don't really want restrict this to tracked files because it would mean \nyou can't pgindent files before you `git add` them. And we would still \nneed to do manual exclusion for some files that are tracked, e.g. the \nsnowball files.\n\n\n>\n> At least --commit doesn't seem to work when run outside of the root\n> source dir.\n>\n\nYeah, I'll fix that, thanks for mentioning.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Mon, 13 Feb 2023 07:57:25 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-12 Su 16:13, Tom Lane wrote:\n> Andrew Dunstan<andrew@dunslane.net> writes:\n>> On 2023-02-12 Su 11:24, Tom Lane wrote:\n>>> It seems like \"indent the whole tree\" is about to become a minority\n>>> use-case. Maybe instead of continuing to privilege that case, we\n>>> should say that it's invoked by some new switch like --all-files,\n>>> and without that only the stuff identified by command-line arguments\n>>> gets processed.\n>> I don't think we need --all-files. The attached gets rid of the build\n>> and code-base cruft, which is now in any case obsolete given we've put\n>> pg_bsd_indent in our code base. So the way to spell this instead of\n>> \"pgindent --all-files\" would be \"pgindent .\"\n> Ah, of course.\n>\n>> I added a warning if there are no files at all specified.\n> LGTM.\n\n\nThanks, pushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Mon, 13 Feb 2023 08:27:37 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sun, 12 Feb 2023 at 15:16, Andrew Dunstan <andrew@dunslane.net> wrote:\n> I'm not sure how much more I really want to do here. Given the way pgindent now processes command line arguments, maybe the best thing is for people to use that. Use of git aliases can help. Something like these for example\n>\n>\n> [alias]\n>\n> dirty = diff --name-only --diff-filter=ACMU -- .\n> staged = diff --name-only --cached --diff-filter=ACMU -- .\n> dstaged = diff --name-only --diff-filter=ACMU HEAD -- .\n>\n>\n> and then you could do\n>\n> pgindent `git dirty`\n>\n>\n> The only danger would be if there were no dirty files. Maybe we need a switch to inhibit using the current directory if there are no command line files.\n>\n>\n> Thoughts?\n\nI think indenting staged or dirty files is probably the most common\noperation that people want to do with pgindent. So I think that having\ndedicated flags makes sense. I agree that it's not strictly necessary\nand git aliases help a lot. But the git aliases require you to set\nthem up. To me making the most common operation as easy as possible to\ndo, seems worth the few extra lines to pgindent.\n\nSidenote: You mentioned untracked files in another email. I think that\nthe --dirty flag should probably also include untracked files. A\ncommand to do so is: git ls-files --others --exclude-standard\n\n\n",
"msg_date": "Mon, 13 Feb 2023 15:02:39 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-13 Mo 09:02, Jelte Fennema wrote:\n> On Sun, 12 Feb 2023 at 15:16, Andrew Dunstan<andrew@dunslane.net> wrote:\n>> I'm not sure how much more I really want to do here. Given the way pgindent now processes command line arguments, maybe the best thing is for people to use that. Use of git aliases can help. Something like these for example\n>>\n>>\n>> [alias]\n>>\n>> dirty = diff --name-only --diff-filter=ACMU -- .\n>> staged = diff --name-only --cached --diff-filter=ACMU -- .\n>> dstaged = diff --name-only --diff-filter=ACMU HEAD -- .\n>>\n>>\n>> and then you could do\n>>\n>> pgindent `git dirty`\n>>\n>>\n>> The only danger would be if there were no dirty files. Maybe we need a switch to inhibit using the current directory if there are no command line files.\n>>\n>>\n>> Thoughts?\n> I think indenting staged or dirty files is probably the most common\n> operation that people want to do with pgindent. So I think that having\n> dedicated flags makes sense. I agree that it's not strictly necessary\n> and git aliases help a lot. But the git aliases require you to set\n> them up. To me making the most common operation as easy as possible to\n> do, seems worth the few extra lines to pgindent.\n\n\nOK, but I'd like to hear from more people about what they want. \nExperience tells me that making assumptions about how people work is not \na good idea. I doubt anyone's work pattern is like mine. I don't want to \nimplement an option that three people are going to use.\n\n\n>\n> Sidenote: You mentioned untracked files in another email. I think that\n> the --dirty flag should probably also include untracked files. A\n> command to do so is: git ls-files --others --exclude-standard\n\n\nThanks for the info.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-02-13 Mo 09:02, Jelte Fennema\n wrote:\n\n\nOn Sun, 12 Feb 2023 at 15:16, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\nI'm not sure how much more I really want to do here. Given the way pgindent now processes command line arguments, maybe the best thing is for people to use that. Use of git aliases can help. Something like these for example\n\n\n[alias]\n\n dirty = diff --name-only --diff-filter=ACMU -- .\n staged = diff --name-only --cached --diff-filter=ACMU -- .\n dstaged = diff --name-only --diff-filter=ACMU HEAD -- .\n\n\nand then you could do\n\n pgindent `git dirty`\n\n\nThe only danger would be if there were no dirty files. Maybe we need a switch to inhibit using the current directory if there are no command line files.\n\n\nThoughts?\n\n\n\nI think indenting staged or dirty files is probably the most common\noperation that people want to do with pgindent. So I think that having\ndedicated flags makes sense. I agree that it's not strictly necessary\nand git aliases help a lot. But the git aliases require you to set\nthem up. To me making the most common operation as easy as possible to\ndo, seems worth the few extra lines to pgindent.\n\n\n\nOK, but I'd like to hear from more people about what they want.\n Experience tells me that making assumptions about how people work\n is not a good idea. I doubt anyone's work pattern is like mine. I\n don't want to implement an option that three people are going to\n use.\n\n\n\n\n\n\nSidenote: You mentioned untracked files in another email. I think that\nthe --dirty flag should probably also include untracked files. A\ncommand to do so is: git ls-files --others --exclude-standard\n\n\n\nThanks for the info.\n\n\ncheers\n\n\nandrew\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 13 Feb 2023 11:46:59 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Mon, 13 Feb 2023 at 17:47, Andrew Dunstan <andrew@dunslane.net> wrote:\n> OK, but I'd like to hear from more people about what they want. Experience tells me that making assumptions about how people work is not a good idea. I doubt anyone's work pattern is like mine. I don't want to implement an option that three people are going to use.\n\n\nIn the general case I agree with you. But in this specific case I\ndon't. To me the whole point of this email thread is to nudge people\ntowards indenting the changes that they are committing. Thus indenting\nthose changes (either before or after adding) is the workflow that we\nwant to make as easy as possible. Because even if it's not people\ntheir current workflow, by adding the flag it hopefully becomes their\nworkflow, because it's so easy to use. So my point is we want to\nremove as few hurdles as possible for people to indent their changes\n(and setting up git aliases or pre-commit hooks are all hurdles).\n\n\n",
"msg_date": "Mon, 13 Feb 2023 19:29:19 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-13 Mo 13:29, Jelte Fennema wrote:\n> On Mon, 13 Feb 2023 at 17:47, Andrew Dunstan<andrew@dunslane.net> wrote:\n>> OK, but I'd like to hear from more people about what they want. Experience tells me that making assumptions about how people work is not a good idea. I doubt anyone's work pattern is like mine. I don't want to implement an option that three people are going to use.\n>\n> In the general case I agree with you. But in this specific case I\n> don't. To me the whole point of this email thread is to nudge people\n> towards indenting the changes that they are committing. Thus indenting\n> those changes (either before or after adding) is the workflow that we\n> want to make as easy as possible. Because even if it's not people\n> their current workflow, by adding the flag it hopefully becomes their\n> workflow, because it's so easy to use. So my point is we want to\n> remove as few hurdles as possible for people to indent their changes\n> (and setting up git aliases or pre-commit hooks are all hurdles).\n\n\n(ITYM \"remove as many hurdles as possible\"). It remains to be seen how \nmuch easier any of this will make life for committers, at least. But I \nconcede this might make life a bit simpler for developers generally.\n\nAnyway, let's talk about the details of what is proposed.\n\nSo far, we have had the following categories suggested: dirty, staged, \ndirty+staged, untracked. Are there any others?\n\nAnother issue is whether or not to restrict these to files under the \ncurrent directory. I think we probably should, or at least provide a \n--relative option.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-02-13 Mo 13:29, Jelte Fennema\n wrote:\n\n\nOn Mon, 13 Feb 2023 at 17:47, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\nOK, but I'd like to hear from more people about what they want. Experience tells me that making assumptions about how people work is not a good idea. I doubt anyone's work pattern is like mine. I don't want to implement an option that three people are going to use.\n\n\n\n\nIn the general case I agree with you. But in this specific case I\ndon't. To me the whole point of this email thread is to nudge people\ntowards indenting the changes that they are committing. Thus indenting\nthose changes (either before or after adding) is the workflow that we\nwant to make as easy as possible. Because even if it's not people\ntheir current workflow, by adding the flag it hopefully becomes their\nworkflow, because it's so easy to use. So my point is we want to\nremove as few hurdles as possible for people to indent their changes\n(and setting up git aliases or pre-commit hooks are all hurdles).\n\n\n\n(ITYM \"remove as many hurdles as possible\"). It remains to be\n seen how much easier any of this will make life for committers, at\n least. But I concede this might make life a bit simpler for\n developers generally.\n\nAnyway, let's talk about the details of what is proposed.\nSo far, we have had the following categories suggested: dirty,\n staged, dirty+staged, untracked. Are there any others?\nAnother issue is whether or not to restrict these to files under\n the current directory. I think we probably should, or at least\n provide a --relative option.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 14 Feb 2023 11:46:24 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "> (ITYM \"remove as many hurdles as possible\").\n\nyes, I messed up rewriting that sentence from \"having as few hurdles\nas possible\" to \"removing as many hurdles as possible\"\n\n> So far, we have had the following categories suggested: dirty, staged, dirty+staged, untracked. Are there any others?\n\nThe two workflows that make most sense to me personally are:\n1. staged (indent anything that you're staging for a commit)\n2. dirty+staged+untracked (indent anything you've been working on that\nis not committed yet)\n\nThe obvious way of having --dirty, --staged, and --untracked flags\nwould require 3 flags for this second (to me seemingly) common\noperation. That seems quite unfortunate. So I would propose the\nfollowing two flags for those purposes:\n1. --staged/--cached (--cached is in line with git, but I personally\nthink --staged is clearer, git has --staged-only but that seems long\nfor no reason)\n2. --uncommitted\n\nAnd maybe for completeness we could have the following flags, so you\ncould target any combination of staged/untracked/dirty files:\n3. --untracked (untracked files only)\n4. --dirty (tracked files with changes that are not staged)\n\nBut I don't know in what workflow people would actually use them.\n\n> Another issue is whether or not to restrict these to files under the current directory. I think we probably should, or at least provide a --relative option.\n\nGood point, I think it makes sense to restrict it to the current\ndirectory by default. You can always cd to the root of the repo if you\nwant to format everything.\n\n\n",
"msg_date": "Wed, 15 Feb 2023 12:28:07 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Mon, Feb 13, 2023 at 07:57:25AM -0500, Andrew Dunstan wrote:\n> \n> On 2023-02-12 Su 15:59, Justin Pryzby wrote:\n> > It seems like if pgindent knows about git, it ought to process only\n> > tracked files. Then, it wouldn't need to manually exclude generated\n> > files, and it wouldn't process vpath builds and who-knows-what else it\n> > finds in CWD.\n> \n> I don't really want restrict this to tracked files because it would mean you\n> can't pgindent files before you `git add` them.\n\nI think you'd allow indenting files which were either tracked *or*\nspecified on the command line.\n\nAlso, it makes a more sense to \"add\" the file before indenting it, to\nallow checking the output and remove unrelated changes. So that doesn't\nseem to me like a restriction of any significance.\n\nBut I would never want to indent an untracked file unless I specified\nit.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 15 Feb 2023 12:45:52 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "> Also, it makes a more sense to \"add\" the file before indenting it, to\n> allow checking the output and remove unrelated changes. So that doesn't\n> seem to me like a restriction of any significance.\n\nFor my workflow it would be the same, but afaik there's two ways that\npeople commonly use git (mine is 1):\n1. Adding changes/files to the staging area using and then committing\nthose changes:\n git add (-p)/emacs magit/some other editor integration\n2. Just add everything that's changed and commit all of it:\n git add -A + git commit/git commit -a\n\nFor workflow 1, a --staged/--cached flag would be enough IMHO. But\nthat's not at all helpful for workflow 2. That's why I proposed\n--uncommitted too, to make indenting easier for workflow 2.\n\n> But I would never want to indent an untracked file unless I specified\n> it.\n\nWould the --uncommitted flag I proposed be enough of an explicit way\nof specifying that you want to indent untracked files?\n\n\n",
"msg_date": "Wed, 15 Feb 2023 22:00:41 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Wed, Feb 15, 2023 at 12:45:52PM -0600, Justin Pryzby wrote:\n> On Mon, Feb 13, 2023 at 07:57:25AM -0500, Andrew Dunstan wrote:\n> > On 2023-02-12 Su 15:59, Justin Pryzby wrote:\n> > > It seems like if pgindent knows about git, it ought to process only\n> > > tracked files. Then, it wouldn't need to manually exclude generated\n> > > files, and it wouldn't process vpath builds and who-knows-what else it\n> > > finds in CWD.\n> > \n> > I don't really want restrict this to tracked files because it would mean you\n> > can't pgindent files before you `git add` them.\n> \n> I think you'd allow indenting files which were either tracked *or*\n> specified on the command line.\n> \n> Also, it makes a more sense to \"add\" the file before indenting it, to\n> allow checking the output and remove unrelated changes. So that doesn't\n> seem to me like a restriction of any significance.\n> \n> But I would never want to indent an untracked file unless I specified\n> it.\n\nAgreed. I use pgindent three ways:\n\n1. Indent everything that changed between master and the current branch. Most\n common, since I develop nontrivial patches on branches.\n2. Indent all staged files. For trivial changes.\n3. Indent all tracked files. For typedefs.list changes.\n\nThat said, pre-2023 pgindent changed untracked files if called without a file\nlist. I've lived with that and could continue to do so.\n\n\n",
"msg_date": "Wed, 15 Feb 2023 20:34:04 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Thu, Feb 9, 2023 6:10 AM Andrew Dunstan <andrew@dunslane.net> wrote:\r\n> Thanks, I have committed this. Still looking at Robert's other request.\r\n>\r\n\r\nHi,\r\n\r\nCommit #068a243b7 supported directories to be non-option arguments of pgindent.\r\nBut the help text doesn't mention that. Should we update it? Attach a small\r\npatch which did that.\r\n\r\nRegards,\r\nShi Yu",
"msg_date": "Thu, 16 Feb 2023 08:26:07 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-02-16 Th 03:26, shiy.fnst@fujitsu.com wrote:\n> On Thu, Feb 9, 2023 6:10 AM Andrew Dunstan<andrew@dunslane.net> wrote:\n>> Thanks, I have committed this. Still looking at Robert's other request.\n>>\n> Hi,\n>\n> Commit #068a243b7 supported directories to be non-option arguments of pgindent.\n> But the help text doesn't mention that. Should we update it? Attach a small\n> patch which did that.\n>\n\nThanks, pushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-02-16 Th 03:26,\n shiy.fnst@fujitsu.com wrote:\n\n\nOn Thu, Feb 9, 2023 6:10 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\nThanks, I have committed this. Still looking at Robert's other request.\n\n\n\n\nHi,\n\nCommit #068a243b7 supported directories to be non-option arguments of pgindent.\nBut the help text doesn't mention that. Should we update it? Attach a small\npatch which did that.\n\n\n\n\n\nThanks, pushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 16 Feb 2023 11:44:01 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Now that the PG16 feature freeze happened I think it's time to bump\nthis thread again. As far as I remember everyone that responded (even\npreviously silent people) were themselves proponents of being more\nstrict around pgindent.\n\nI think there's two things needed to actually start doing this:\n1. We need to reindent the tree to create an indented baseline\n2. We need some automation to complain about unindented code being committed\n\nFor 2 the upstream thread listed two approaches:\na. Install a pre-receive git hook on the git server that rejects\npushes to master that are not indented\nb. Add a test suite that checks if the code is correctly indented, so\nthe build farm would complain about it. (Suggested by Peter E)\n\nI think both a and b would work to achieve 2. But as Peter E said, b\nindeed sounds like less of a divergence of the status quo. So my vote\nwould be for b. If that doesn't achieve 2 for some reason, or turns\nout to have problems we can always change to a afterwards.\n\n\n",
"msg_date": "Fri, 21 Apr 2023 09:58:17 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 09:58:17AM +0200, Jelte Fennema wrote:\n> For 2 the upstream thread listed two approaches:\n> a. Install a pre-receive git hook on the git server that rejects\n> pushes to master that are not indented\n> b. Add a test suite that checks if the code is correctly indented, so\n> the build farm would complain about it. (Suggested by Peter E)\n> \n> I think both a and b would work to achieve 2. But as Peter E said, b\n> indeed sounds like less of a divergence of the status quo. So my vote\n> would be for b.\n\nFWIW, I think that there is value for both of them. Anyway, isn't 'a'\nexactly the same as 'b' in design? Both require a build of\npg_bsd_indent, meaning that 'a' would also need to run an equivalent\nof the regression test suite, but it would be actually costly\nespecially if pg_bsd_indent itself is patched. I think that getting\nmore noisy on this matter with 'b' would be enough, but as an extra\nPG_TEST_EXTRA for committers to set.\n\nSuch a test suite would need a dependency to the 'git' command itself,\nwhich is not something that could be safely run in a release tarball,\nin any case.\n--\nMichael",
"msg_date": "Sat, 22 Apr 2023 17:50:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-04-22 Sa 04:50, Michael Paquier wrote:\n> On Fri, Apr 21, 2023 at 09:58:17AM +0200, Jelte Fennema wrote:\n>> For 2 the upstream thread listed two approaches:\n>> a. Install a pre-receive git hook on the git server that rejects\n>> pushes to master that are not indented\n>> b. Add a test suite that checks if the code is correctly indented, so\n>> the build farm would complain about it. (Suggested by Peter E)\n>>\n>> I think both a and b would work to achieve 2. But as Peter E said, b\n>> indeed sounds like less of a divergence of the status quo. So my vote\n>> would be for b.\n> FWIW, I think that there is value for both of them. Anyway, isn't 'a'\n> exactly the same as 'b' in design? Both require a build of\n> pg_bsd_indent, meaning that 'a' would also need to run an equivalent\n> of the regression test suite, but it would be actually costly\n> especially if pg_bsd_indent itself is patched. I think that getting\n> more noisy on this matter with 'b' would be enough, but as an extra\n> PG_TEST_EXTRA for committers to set.\n>\n> Such a test suite would need a dependency to the 'git' command itself,\n> which is not something that could be safely run in a release tarball,\n> in any case.\n\n\nPerhaps we should start with a buildfarm module, which would run \npg_indent --show-diff. That would only need to run on one animal, so a \nfailure wouldn't send the whole buildfarm red. This would be pretty easy \nto do.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-22 Sa 04:50, Michael Paquier\n wrote:\n\n\nOn Fri, Apr 21, 2023 at 09:58:17AM +0200, Jelte Fennema wrote:\n\n\nFor 2 the upstream thread listed two approaches:\na. Install a pre-receive git hook on the git server that rejects\npushes to master that are not indented\nb. Add a test suite that checks if the code is correctly indented, so\nthe build farm would complain about it. (Suggested by Peter E)\n\nI think both a and b would work to achieve 2. But as Peter E said, b\nindeed sounds like less of a divergence of the status quo. So my vote\nwould be for b.\n\n\n\nFWIW, I think that there is value for both of them. Anyway, isn't 'a'\nexactly the same as 'b' in design? Both require a build of\npg_bsd_indent, meaning that 'a' would also need to run an equivalent\nof the regression test suite, but it would be actually costly\nespecially if pg_bsd_indent itself is patched. I think that getting\nmore noisy on this matter with 'b' would be enough, but as an extra\nPG_TEST_EXTRA for committers to set.\n\nSuch a test suite would need a dependency to the 'git' command itself,\nwhich is not something that could be safely run in a release tarball,\nin any case.\n\n\n\n\nPerhaps we should start with a buildfarm module, which would run\n pg_indent --show-diff. That would only need to run on one animal,\n so a failure wouldn't send the whole buildfarm red. This would be\n pretty easy to do.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 22 Apr 2023 07:42:36 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Apr 22, 2023 at 07:42:36AM -0400, Andrew Dunstan wrote:\n> Perhaps we should start with a buildfarm module, which would run pg_indent\n> --show-diff.\n\nNice, I didn't know this one and it has been mentioned a bit on this\nthread. Indeed, it is possible to just rely on that.\n--\nMichael",
"msg_date": "Sat, 22 Apr 2023 21:10:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Apr 22, 2023 at 1:42 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2023-04-22 Sa 04:50, Michael Paquier wrote:\n>\n> On Fri, Apr 21, 2023 at 09:58:17AM +0200, Jelte Fennema wrote:\n>\n> For 2 the upstream thread listed two approaches:\n> a. Install a pre-receive git hook on the git server that rejects\n> pushes to master that are not indented\n> b. Add a test suite that checks if the code is correctly indented, so\n> the build farm would complain about it. (Suggested by Peter E)\n>\n> I think both a and b would work to achieve 2. But as Peter E said, b\n> indeed sounds like less of a divergence of the status quo. So my vote\n> would be for b.\n>\n> FWIW, I think that there is value for both of them. Anyway, isn't 'a'\n> exactly the same as 'b' in design? Both require a build of\n> pg_bsd_indent, meaning that 'a' would also need to run an equivalent\n> of the regression test suite, but it would be actually costly\n> especially if pg_bsd_indent itself is patched. I think that getting\n> more noisy on this matter with 'b' would be enough, but as an extra\n> PG_TEST_EXTRA for committers to set.\n>\n> Such a test suite would need a dependency to the 'git' command itself,\n> which is not something that could be safely run in a release tarball,\n> in any case.\n>\n>\n> Perhaps we should start with a buildfarm module, which would run pg_indent --show-diff. That would only need to run on one animal, so a failure wouldn't send the whole buildfarm red. This would be pretty easy to do.\n\n\nJust to be clear, you guys are aware we already have a git repo\nthat's supposed to track \"head + pg_indent\" at\nhttps://git.postgresql.org/gitweb/?p=postgresql-pgindent.git;a=shortlog;h=refs/heads/master-pgindent\nright?\n\nI see it is currently not working and this has not been noticed by\nanyone, so I guess it kind of indicates nobody is using it today. The\nreason appears to be that it uses pg_bsd_indent that's in our apt\nrepos and that's 2.1.1 and not 2.1.2 at this point. But if this is a\nservice that would actually be useful, this could certainly be ficked\npretty easy.\n\nBut bottom line is that if pgindent is as predictable as it should be,\nit might be easier to use that one central place that already does it\nrather than have to build a buildfarm module?\n\n//Magnus\n\n\n",
"msg_date": "Sat, 22 Apr 2023 14:47:12 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 5:43 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Mon, Feb 06, 2023 at 06:17:02PM +0100, Peter Eisentraut wrote:\n> > Also, pgindent takes tens of seconds to run, so hooking that into the git\n> > push process would slow this down quite a bit.\n>\n> The pre-receive hook would do a full pgindent when you change typedefs.list.\n> Otherwise, it would reindent only the files being changed. The average push\n> need not take tens of seconds.\n\nIt would probably ont be tens of seconds, but it would be slow. It\nwould need to do a clean git checkout into an isolated environment and\nspawn in there, and just that takes time. And it would have to also\nknow to rebuild pg_bsd_indent on demand, which would require a full\n./configure run (or meson equivalent). etc.\n\nSo while it might not be tens of seconds, it most definitely won't be fast.\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sat, 22 Apr 2023 15:23:59 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-04-22 Sa 08:47, Magnus Hagander wrote:\n> On Sat, Apr 22, 2023 at 1:42 PM Andrew Dunstan<andrew@dunslane.net> wrote:\n>>\n>> On 2023-04-22 Sa 04:50, Michael Paquier wrote:\n>>\n>> On Fri, Apr 21, 2023 at 09:58:17AM +0200, Jelte Fennema wrote:\n>>\n>> For 2 the upstream thread listed two approaches:\n>> a. Install a pre-receive git hook on the git server that rejects\n>> pushes to master that are not indented\n>> b. Add a test suite that checks if the code is correctly indented, so\n>> the build farm would complain about it. (Suggested by Peter E)\n>>\n>> I think both a and b would work to achieve 2. But as Peter E said, b\n>> indeed sounds like less of a divergence of the status quo. So my vote\n>> would be for b.\n>>\n>> FWIW, I think that there is value for both of them. Anyway, isn't 'a'\n>> exactly the same as 'b' in design? Both require a build of\n>> pg_bsd_indent, meaning that 'a' would also need to run an equivalent\n>> of the regression test suite, but it would be actually costly\n>> especially if pg_bsd_indent itself is patched. I think that getting\n>> more noisy on this matter with 'b' would be enough, but as an extra\n>> PG_TEST_EXTRA for committers to set.\n>>\n>> Such a test suite would need a dependency to the 'git' command itself,\n>> which is not something that could be safely run in a release tarball,\n>> in any case.\n>>\n>>\n>> Perhaps we should start with a buildfarm module, which would run pg_indent --show-diff. That would only need to run on one animal, so a failure wouldn't send the whole buildfarm red. This would be pretty easy to do.\n>\n> Just to be clear, you guys are aware we already have a git repo\n> that's supposed to track \"head + pg_indent\" at\n> https://git.postgresql.org/gitweb/?p=postgresql-pgindent.git;a=shortlog;h=refs/heads/master-pgindent\n> right?\n>\n> I see it is currently not working and this has not been noticed by\n> anyone, so I guess it kind of indicates nobody is using it today. The\n> reason appears to be that it uses pg_bsd_indent that's in our apt\n> repos and that's 2.1.1 and not 2.1.2 at this point. But if this is a\n> service that would actually be useful, this could certainly be ficked\n> pretty easy.\n>\n> But bottom line is that if pgindent is as predictable as it should be,\n> it might be easier to use that one central place that already does it\n> rather than have to build a buildfarm module?\n>\n\nNow that pg_bsd_indent is in the core code why not just use that?\n\n\nHappy if you can make something work without further effort on my part :-)\n\n\ncheers\n\n\nandew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-22 Sa 08:47, Magnus Hagander\n wrote:\n\n\nOn Sat, Apr 22, 2023 at 1:42 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\n\n\nOn 2023-04-22 Sa 04:50, Michael Paquier wrote:\n\nOn Fri, Apr 21, 2023 at 09:58:17AM +0200, Jelte Fennema wrote:\n\nFor 2 the upstream thread listed two approaches:\na. Install a pre-receive git hook on the git server that rejects\npushes to master that are not indented\nb. Add a test suite that checks if the code is correctly indented, so\nthe build farm would complain about it. (Suggested by Peter E)\n\nI think both a and b would work to achieve 2. But as Peter E said, b\nindeed sounds like less of a divergence of the status quo. So my vote\nwould be for b.\n\nFWIW, I think that there is value for both of them. Anyway, isn't 'a'\nexactly the same as 'b' in design? Both require a build of\npg_bsd_indent, meaning that 'a' would also need to run an equivalent\nof the regression test suite, but it would be actually costly\nespecially if pg_bsd_indent itself is patched. I think that getting\nmore noisy on this matter with 'b' would be enough, but as an extra\nPG_TEST_EXTRA for committers to set.\n\nSuch a test suite would need a dependency to the 'git' command itself,\nwhich is not something that could be safely run in a release tarball,\nin any case.\n\n\nPerhaps we should start with a buildfarm module, which would run pg_indent --show-diff. That would only need to run on one animal, so a failure wouldn't send the whole buildfarm red. This would be pretty easy to do.\n\n\n\n\nJust to be clear, you guys are aware we already have a git repo\nthat's supposed to track \"head + pg_indent\" at\nhttps://git.postgresql.org/gitweb/?p=postgresql-pgindent.git;a=shortlog;h=refs/heads/master-pgindent\nright?\n\nI see it is currently not working and this has not been noticed by\nanyone, so I guess it kind of indicates nobody is using it today. The\nreason appears to be that it uses pg_bsd_indent that's in our apt\nrepos and that's 2.1.1 and not 2.1.2 at this point. But if this is a\nservice that would actually be useful, this could certainly be ficked\npretty easy.\n\nBut bottom line is that if pgindent is as predictable as it should be,\nit might be easier to use that one central place that already does it\nrather than have to build a buildfarm module?\n\n\n\n\n\nNow that pg_bsd_indent is in the core code why not just use that?\n\n\nHappy if you can make something work without further effort on my\n part :-)\n\n\ncheers\n\n\nandew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 22 Apr 2023 10:12:06 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
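The buildfarm-module idea above reduces to a small check: run pgindent with --show-diff and treat any output as a failure (or, at first, just log it). A minimal sketch, assuming only that the indent tool prints its would-be changes on stdout; `indent_check` and the paths in the comments are illustrative, not actual buildfarm client code:

```python
# Sketch of the "fail if --show-diff output is non-empty" check a
# buildfarm module could run. indent_check() is a hypothetical helper:
# it runs a command and reports whether its stdout (the would-be diff)
# is empty.
import subprocess

def indent_check(cmd):
    """Run an indent command; return (clean, diff_text)."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError("indent tool failed: " + result.stderr)
    diff = result.stdout.strip()
    return diff == "", diff

# Real usage from a PostgreSQL checkout might look like:
#   clean, diff = indent_check(["src/tools/pgindent/pgindent", "--show-diff"])
# and the module would report failure (or merely log the diff) when
# clean is False.
```

Run against a stub command, the same helper shows both outcomes: empty output means the tree is indent-clean, anything else means it needs reindenting.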
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Sat, Apr 22, 2023 at 1:42 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> For 2 the upstream thread listed two approaches:\n>> a. Install a pre-receive git hook on the git server that rejects\n>> pushes to master that are not indented\n>> b. Add a test suite that checks if the code is correctly indented, so\n>> the build farm would complain about it. (Suggested by Peter E)\n>> \n>> I think both a and b would work to achieve 2. But as Peter E said, b\n>> indeed sounds like less of a divergence of the status quo. So my vote\n>> would be for b.\n\nI am absolutely against a pre-receive hook on gitmaster. A buildfarm\ncheck seems far more appropriate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 22 Apr 2023 10:24:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Jelte Fennema <postgres@jeltef.nl> writes:\n> I think there's two things needed to actually start doing this:\n> 1. We need to reindent the tree to create an indented baseline\n\nAs far as (1) goes, I've been holding off on that because there\nare some large patches that still seem in danger of getting\nreverted, notably 2489d76c4 and follow-ups. A pgindent run\nwould change any such reversions from being mechanical into\npossibly a fair bit of work. We still have a couple of weeks\nbefore it's necessary to make such decisions, so I don't want\nto do the pgindent run before that.\n\nAnother obstacle in the way of (1) is that there was some discussion\nof changing perltidy version and/or options. But I don't believe\nwe have a final proposal on that, much less committed code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 22 Apr 2023 10:39:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-04-22 Sa 10:39, Tom Lane wrote:\n>\n> Another obstacle in the way of (1) is that there was some discussion\n> of changing perltidy version and/or options. But I don't believe\n> we have a final proposal on that, much less committed code.\n\n\nWell, I posted a fairly concrete suggestion with an example patch \nupthread at\n\n<https://www.postgresql.org/message-id/47011581-ddec-1a87-6828-6edfabe6b7b6%40dunslane.net>\n\nI still think that's worth doing.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Sat, 22 Apr 2023 11:10:12 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-04-22 Sa 10:39, Tom Lane wrote:\n>> Another obstacle in the way of (1) is that there was some discussion\n>> of changing perltidy version and/or options. But I don't believe\n>> we have a final proposal on that, much less committed code.\n\n> Well, I posted a fairly concrete suggestion with an example patch \n> upthread at\n> <https://www.postgresql.org/message-id/47011581-ddec-1a87-6828-6edfabe6b7b6%40dunslane.net>\n> I still think that's worth doing.\n\nOK, so plan is (a) update perltidyrc to add --valign-exclusion-list,\n(b) adjust pgindent/README to recommend perltidy version 20221112.\n\nQuestions:\n\n* I see that there's now a 20230309 release, should we consider that\ninstead?\n\n* emacs.samples provides pgsql-perl-style that claims to match\nperltidy's rules. Does that need any adjustment? I don't see\nanything in it that looks relevant, but I'm not terribly familiar\nwith emacs' Perl formatting options.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 22 Apr 2023 11:37:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-04-22 Sa 11:37, Tom Lane wrote:\n> Andrew Dunstan<andrew@dunslane.net> writes:\n>> On 2023-04-22 Sa 10:39, Tom Lane wrote:\n>>> Another obstacle in the way of (1) is that there was some discussion\n>>> of changing perltidy version and/or options. But I don't believe\n>>> we have a final proposal on that, much less committed code.\n>> Well, I posted a fairly concrete suggestion with an example patch\n>> upthread at\n>> <https://www.postgresql.org/message-id/47011581-ddec-1a87-6828-6edfabe6b7b6%40dunslane.net>\n>> I still think that's worth doing.\n> OK, so plan is (a) update perltidyrc to add --valign-exclusion-list,\n> (b) adjust pgindent/README to recommend perltidy version 20221112.\n>\n> Questions:\n>\n> * I see that there's now a 20230309 release, should we consider that\n> instead?\n\n\nA test I just ran gave identical results to those from 20221112\n\n\n>\n> * emacs.samples provides pgsql-perl-style that claims to match\n> perltidy's rules. Does that need any adjustment? I don't see\n> anything in it that looks relevant, but I'm not terribly familiar\n> with emacs' Perl formatting options.\n\n\nAt least w.r.t. the vertical alignment issue, AFAICT the emacs style \ndoes not attempt to align anything vertically except the first non-space \nthing on the line. So if anything, by abandoning a lot of vertical \nalignment it would actually be closer to what the sample emacs style does.\n\nThe great advantage of not doing this alignment is that there is far \nless danger of perltidy trying to realign lines that have not in fact \nchanged, because some nearby line has changed. 
So we'd have a good deal \nless pointless churn.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Sat, 22 Apr 2023 15:52:24 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-04-22 Sa 11:37, Tom Lane wrote:\n>> * I see that there's now a 20230309 release, should we consider that\n>> instead?\n\n> A test I just ran gave identical results to those from 20221112\n\nCool, let's use perltidy 20230309 then.\n\n> The great advantage of not doing this alignment is that there is far \n> less danger of perltidy trying to realign lines that have not in fact \n> changed, because some nearby line has changed. So we'd have a good deal \n> less pointless churn.\n\nYes, exactly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 22 Apr 2023 15:58:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "(Given that another commentator is \"absolutely against\" a hook, this message\nis mostly for readers considering this for other projects.)\n\nOn Sat, Apr 22, 2023 at 03:23:59PM +0200, Magnus Hagander wrote:\n> On Tue, Feb 7, 2023 at 5:43 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Mon, Feb 06, 2023 at 06:17:02PM +0100, Peter Eisentraut wrote:\n> > > Also, pgindent takes tens of seconds to run, so hooking that into the git\n> > > push process would slow this down quite a bit.\n> >\n> > The pre-receive hook would do a full pgindent when you change typedefs.list.\n> > Otherwise, it would reindent only the files being changed. The average push\n> > need not take tens of seconds.\n> \n> It would probably not be tens of seconds, but it would be slow. It\n> would need to do a clean git checkout into an isolated environment and\n> spawn in there, and just that takes time.\n\nThat would be slow, but I wouldn't do it that way. I'd make \"pg_bsd_indent\n--pre-receive --show-diff\" that, instead of reading from the filesystem, gets\nthe bytes to check from the equivalent of this Perl-like pseudocode:\n\nwhile (<>) {\n my($old_hash, $new_hash, $ref) = split;\n foreach my $filename (split /\\n/, `git diff --name-only $old_hash..$new_hash`) {\n $file_content = `git show $new_hash $filename`;\n }\n}\n\nI just ran pgindent on the file name lists of the last 1000 commits, and\nruntime was less than 0.5s for each of 998/1000 commits. There's more a real\nimplementation might handle:\n\n- pg_bsd_indent changes\n- typedefs.list changes\n- skip if the break-glass \"pgindent: no\" appears in a commit message\n- commits changing so many files that a clean \"git checkout\" would be faster\n\n> And it would have to also\n> know to rebuild pg_bsd_indent on demand, which would require a full\n> ./configure run (or meson equivalent). 
etc.\n> \n> So while it might not be tens of seconds, it most definitely won't be fast.\n\nA project more concerned about elapsed time than detecting all defects might\neven choose to take no synchronous action for pg_bsd_indent and typedefs.list\nchanges. When a commit changes either of those, the probability that the\ncommitter already ran pgindent rises substantially.\n\n\n",
"msg_date": "Sat, 22 Apr 2023 12:59:06 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> - skip if the break-glass \"pgindent: no\" appears in a commit message\n\nThere are two things that bother me about putting this functionality\ninto a server hook, beyond the possible speed issue:\n\n* The risk of failure. I don't have a terribly difficult time imagining\nsituations where things get sufficiently wedged that the server accepts\n*no* commits, not even ones fixing the problem. An override such as\nyou suggest here could assuage that fear, perhaps.\n\n* The lack of user-friendliness. AFAIK, if a pre-receive hook fails\nyou learn little except that it failed. This could be extremely\nfrustrating to debug, especially in a situation where your local\npgindent is giving you different results than the server gets.\n\nThe idea of a buildfarm animal failing if --show-diff isn't empty\nis attractive to me mainly because it is far nicer from the\ndebuggability standpoint.\n\nMaybe, after we get some amount of experience with trying to keep\nthings always indent-clean, we will decide that it's reliable enough\nto enforce in a server hook. I think going for that right away is\nsheer folly though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 22 Apr 2023 16:15:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Apr 22, 2023 at 04:15:23PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > - skip if the break-glass \"pgindent: no\" appears in a commit message\n> \n> There are two things that bother me about putting this functionality\n> into a server hook, beyond the possible speed issue:\n> \n> * The risk of failure. I don't have a terribly difficult time imagining\n> situations where things get sufficiently wedged that the server accepts\n> *no* commits, not even ones fixing the problem. An override such as\n> you suggest here could assuage that fear, perhaps.\n\nI agree that deserves some worry.\n\n> * The lack of user-friendliness. AFAIK, if a pre-receive hook fails\n> you learn little except that it failed.\n\nThat is incorrect. The client gets whatever the hook prints. I'd probably\nmake it print the first 10000 lines of the diff.\n\n\nI'm okay with a buildfarm animal. It's going to result in a more-cluttered\ngit history as people push things, break that animal, and push followup fixes.\nWhile that's sad, I expect the level of clutter will go down pretty quickly\nand will soon be no worse than we already get from typo-fix pushes.\n\n\n",
"msg_date": "Sat, 22 Apr 2023 13:33:25 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-04-22 Sa 15:58, Tom Lane wrote:\n> Andrew Dunstan<andrew@dunslane.net> writes:\n>> On 2023-04-22 Sa 11:37, Tom Lane wrote:\n>>> * I see that there's now a 20230309 release, should we consider that\n>>> instead?\n>> A test I just ran gave identical results to those from 20221112\n> Cool, let's use perltidy 20230309 then.\n>\n>\n\nOK, so when would we do this? The change to 20230309 + valign changes is \nfairly large:\n\n\n188 files changed, 3657 insertions(+), 3395 deletions(-)\n\n\nMaybe right before we fork the tree?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Sun, 23 Apr 2023 11:04:33 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-04-22 Sa 16:15, Tom Lane wrote:\n> Noah Misch<noah@leadboat.com> writes:\n>> - skip if the break-glass \"pgindent: no\" appears in a commit message\n> There are two things that bother me about putting this functionality\n> into a server hook, beyond the possible speed issue:\n>\n> * The risk of failure. I don't have a terribly difficult time imagining\n> situations where things get sufficiently wedged that the server accepts\n> *no* commits, not even ones fixing the problem. An override such as\n> you suggest here could assuage that fear, perhaps.\n>\n> * The lack of user-friendliness. AFAIK, if a pre-receive hook fails\n> you learn little except that it failed. This could be extremely\n> frustrating to debug, especially in a situation where your local\n> pgindent is giving you different results than the server gets.\n>\n> The idea of a buildfarm animal failing if --show-diff isn't empty\n> is attractive to me mainly because it is far nicer from the\n> debuggability standpoint.\n>\n> Maybe, after we get some amount of experience with trying to keep\n> things always indent-clean, we will decide that it's reliable enough\n> to enforce in a server hook. I think going for that right away is\n> sheer folly though.\n>\n> \t\t\t\n\n\nWhat I'll do for now is set up a buildfarm module that will log the \ndifferences but won't error out if there are any. At least that way \nwe'll know better what we're dealing with.\n\nI don't really like Noah's idea of having a pre-receive hook possibly \nspew thousands of diff lines to the terminal.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Sun, 23 Apr 2023 11:08:42 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-04-22 Sa 15:58, Tom Lane wrote:\n>> Cool, let's use perltidy 20230309 then.\n\n> OK, so when would we do this? The change to 20230309 + valign changes is \n> fairly large:\n\nI think we could go ahead and commit the perltidyrc and README changes\nnow. But the ensuing reformatting should happen as part of the mass\npgindent run, probably next month sometime.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 23 Apr 2023 11:16:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sun, 23 Apr 2023 at 17:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think we could go ahead and commit the perltidyrc and README changes\n> now. But the ensuing reformatting should happen as part of the mass\n> pgindent run, probably next month sometime.\n\nI think it's better to make the changes close together, not with a\nmonth in between. Otherwise no-one will be able to run perltidy on\ntheir patches, because the config and the files are even more out of\nsync than they are now. So I'd propose to commit the perltidyrc\nchanges right before the pgindent run.\n\n\n",
"msg_date": "Sun, 23 Apr 2023 17:29:15 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Apr 22, 2023 at 4:12 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2023-04-22 Sa 08:47, Magnus Hagander wrote:\n>\n> On Sat, Apr 22, 2023 at 1:42 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 2023-04-22 Sa 04:50, Michael Paquier wrote:\n>\n> On Fri, Apr 21, 2023 at 09:58:17AM +0200, Jelte Fennema wrote:\n>\n> For 2 the upstream thread listed two approaches:\n> a. Install a pre-receive git hook on the git server that rejects\n> pushes to master that are not indented\n> b. Add a test suite that checks if the code is correctly indented, so\n> the build farm would complain about it. (Suggested by Peter E)\n>\n> I think both a and b would work to achieve 2. But as Peter E said, b\n> indeed sounds like less of a divergence of the status quo. So my vote\n> would be for b.\n>\n> FWIW, I think that there is value for both of them. Anyway, isn't 'a'\n> exactly the same as 'b' in design? Both require a build of\n> pg_bsd_indent, meaning that 'a' would also need to run an equivalent\n> of the regression test suite, but it would be actually costly\n> especially if pg_bsd_indent itself is patched. I think that getting\n> more noisy on this matter with 'b' would be enough, but as an extra\n> PG_TEST_EXTRA for committers to set.\n>\n> Such a test suite would need a dependency to the 'git' command itself,\n> which is not something that could be safely run in a release tarball,\n> in any case.\n>\n>\n> Perhaps we should start with a buildfarm module, which would run pg_indent --show-diff. That would only need to run on one animal, so a failure wouldn't send the whole buildfarm red. 
This would be pretty easy to do.\n>\n> Just to be clear, you guys are aware we already have a git repo\n> that's supposed to track \"head + pg_indent\" at\n> https://git.postgresql.org/gitweb/?p=postgresql-pgindent.git;a=shortlog;h=refs/heads/master-pgindent\n> right?\n>\n> I see it is currently not working and this has not been noticed by\n> anyone, so I guess it kind of indicates nobody is using it today. The\n> reason appears to be that it uses pg_bsd_indent that's in our apt\n> repos and that's 2.1.1 and not 2.1.2 at this point. But if this is a\n> service that would actually be useful, this could certainly be fixed\n> pretty easy.\n>\n> But bottom line is that if pgindent is as predictable as it should be,\n> it might be easier to use that one central place that already does it\n> rather than have to build a buildfarm module?\n>\n>\n> Now that pg_bsd_indent is in the core code why not just use that?\n\nyeah, it just required building. And the lazy approach was to use the DEB :)\n\nFor a quick fix I've built the current HEAD and have it just using\nthat one -- right now it'll fail again when a change is made to it,\nbut I'll get that cleaned up.\n\nIt's back up and running, and results are at\nhttps://git.postgresql.org/gitweb/?p=postgresql-pgindent.git;a=shortlog;h=refs/heads/master-pgindent\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sun, 23 Apr 2023 23:52:24 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Apr 22, 2023 at 9:59 PM Noah Misch <noah@leadboat.com> wrote:\n>\n> (Given that another commentator is \"absolutely against\" a hook, this message\n> is mostly for readers considering this for other projects.)\n>\n> On Sat, Apr 22, 2023 at 03:23:59PM +0200, Magnus Hagander wrote:\n> > On Tue, Feb 7, 2023 at 5:43 AM Noah Misch <noah@leadboat.com> wrote:\n> > > On Mon, Feb 06, 2023 at 06:17:02PM +0100, Peter Eisentraut wrote:\n> > > > Also, pgindent takes tens of seconds to run, so hooking that into the git\n> > > > push process would slow this down quite a bit.\n> > >\n> > > The pre-receive hook would do a full pgindent when you change typedefs.list.\n> > > Otherwise, it would reindent only the files being changed. The average push\n> > > need not take tens of seconds.\n> >\n> > It would probably not be tens of seconds, but it would be slow. It\n> > would need to do a clean git checkout into an isolated environment and\n> > spawn in there, and just that takes time.\n>\n> That would be slow, but I wouldn't do it that way. I'd make \"pg_bsd_indent\n> --pre-receive --show-diff\" that, instead of reading from the filesystem, gets\n> the bytes to check from the equivalent of this Perl-like pseudocode:\n>\n> while (<>) {\n> my($old_hash, $new_hash, $ref) = split;\n> foreach my $filename (split /\\n/, `git diff --name-only $old_hash..$new_hash`) {\n> $file_content = `git show $new_hash $filename`;\n> }\n> }\n>\n> I just ran pgindent on the file name lists of the last 1000 commits, and\n> runtime was less than 0.5s for each of 998/1000 commits. 
There's more a real\n> implementation might handle:\n>\n> - pg_bsd_indent changes\n> - typedefs.list changes\n> - skip if the break-glass \"pgindent: no\" appears in a commit message\n> - commits changing so many files that a clean \"git checkout\" would be faster\n\nWouldn't there also be the case of a header file change that could\npotentially invalidate a whole lot of C files?\n\nThere's also the whole potential problem of isolations. We need to run\nthe whole thing in an isolated environment (because any way in at this\nstage could lead to an exploit if a committer key is compromised at\nany point). And at least in the second case, it might not have access\nto view that data yet because it's not in... Could probably be worked\naround, but not trivially so.\n\n(But as mentioned above, I think the conclusion is we don't want this\nenforced in a receive hook anyway)\n\n\n> > And it would have to also\n> > know to rebuild pg_bsd_indent on demand, which would require a full\n> > ./configure run (or meson equivalent). etc.\n> >\n> > So while it might not be tens of seconds, it most definitely won't be fast.\n>\n> A project more concerned about elapsed time than detecting all defects might\n> even choose to take no synchronous action for pg_bsd_indent and typedefs.list\n> changes. When a commit changes either of those, the probability that the\n> committer already ran pgindent rises substantially.\n\nTrue, but it's far from 100% -- and if you got something in that\ndidn't work, then the *next* committer would have to clean it up....\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 24 Apr 2023 00:01:24 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 23.04.23 17:29, Jelte Fennema wrote:\n> On Sun, 23 Apr 2023 at 17:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think we could go ahead and commit the perltidyrc and README changes\n>> now. But the ensuing reformatting should happen as part of the mass\n>> pgindent run, probably next month sometime.\n> \n> I think it's better to make the changes close together, not with a\n> month in between. Otherwise no-one will be able to run perltidy on\n> their patches, because the config and the files are even more out of\n> sync than they are now. So I'd propose to commit the perltidyrc\n> changes right before the pgindent run.\n\nDoes anyone find perltidy useful? To me, it functions more like a \nJavaScript compiler in that once you process the source code, it is no \nlonger useful for manual editing. If we are going to have the buildfarm \ncheck indentation and that is going to be extended to Perl code, I have \nsome concerns about that.\n\n\n\n",
"msg_date": "Mon, 24 Apr 2023 16:09:22 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Does anyone find perltidy useful? To me, it functions more like a \n> JavaScript compiler in that once you process the source code, it is no \n> longer useful for manual editing. If we are going to have the buildfarm \n> check indentation and that is going to be extended to Perl code, I have \n> some concerns about that.\n\nI certainly don't like its current behavior where adding/changing one\nline can have side-effects on nearby lines. But we have a proposal\nto clean that up, and I'm cautiously optimistic that it'll be better\nin future. Did you have other specific concerns?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Apr 2023 10:14:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 24.04.23 16:14, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> Does anyone find perltidy useful? To me, it functions more like a\n>> JavaScript compiler in that once you process the source code, it is no\n>> longer useful for manual editing. If we are going to have the buildfarm\n>> check indentation and that is going to be extended to Perl code, I have\n>> some concerns about that.\n> \n> I certainly don't like its current behavior where adding/changing one\n> line can have side-effects on nearby lines. But we have a proposal\n> to clean that up, and I'm cautiously optimistic that it'll be better\n> in future. Did you have other specific concerns?\n\nI think the worst is how it handles multi-line data structures like\n\n $newnode->command_ok(\n [\n 'psql', '-X',\n '-v', 'ON_ERROR_STOP=1',\n '-c', $upcmds,\n '-d', $oldnode->connstr($updb),\n ],\n \"ran version adaptation commands for database $updb\");\n\nor\n\n $node->command_fails_like(\n [\n 'pg_basebackup', '-D',\n \"$tempdir/backup\", '--compress',\n $cft->[0]\n ],\n qr/$cfail/,\n 'client ' . $cft->[2]);\n\nPerhaps that is included in the upcoming changes you are referring to?\n\n\n",
"msg_date": "Wed, 26 Apr 2023 09:38:40 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 24.04.23 16:14, Tom Lane wrote:\n>> I certainly don't like its current behavior where adding/changing one\n>> line can have side-effects on nearby lines. But we have a proposal\n>> to clean that up, and I'm cautiously optimistic that it'll be better\n>> in future. Did you have other specific concerns?\n\n> I think the worst is how it handles multi-line data structures like\n\n> $newnode->command_ok(\n> [\n> 'psql', '-X',\n> '-v', 'ON_ERROR_STOP=1',\n> '-c', $upcmds,\n> '-d', $oldnode->connstr($updb),\n> ],\n> \"ran version adaptation commands for database $updb\");\n\nYeah, I agree, there is no case where that doesn't suck. I don't\nmind it imposing specific placements of brackets and so on ---\nthat's very analogous to what pgindent will do. But it likes to\nre-flow comma-separated lists, and generally manages to make a\ncomplete logical hash of them when it does, as in your other\nexample:\n\n> $node->command_fails_like(\n> [\n> 'pg_basebackup', '-D',\n> \"$tempdir/backup\", '--compress',\n> $cft->[0]\n> ],\n> qr/$cfail/,\n> 'client ' . $cft->[2]);\n\nCan we fix it to preserve the programmer's choices of line breaks\nin comma-separated lists?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Apr 2023 09:27:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-04-26 We 09:27, Tom Lane wrote:\n> Peter Eisentraut<peter.eisentraut@enterprisedb.com> writes:\n>> On 24.04.23 16:14, Tom Lane wrote:\n>>> I certainly don't like its current behavior where adding/changing one\n>>> line can have side-effects on nearby lines. But we have a proposal\n>>> to clean that up, and I'm cautiously optimistic that it'll be better\n>>> in future. Did you have other specific concerns?\n>> I think the worst is how it handles multi-line data structures like\n>> $newnode->command_ok(\n>> [\n>> 'psql', '-X',\n>> '-v', 'ON_ERROR_STOP=1',\n>> '-c', $upcmds,\n>> '-d', $oldnode->connstr($updb),\n>> ],\n>> \"ran version adaptation commands for database $updb\");\n> Yeah, I agree, there is no case where that doesn't suck. I don't\n> mind it imposing specific placements of brackets and so on ---\n> that's very analogous to what pgindent will do. But it likes to\n> re-flow comma-separated lists, and generally manages to make a\n> complete logical hash of them when it does, as in your other\n> example:\n>\n>> $node->command_fails_like(\n>> [\n>> 'pg_basebackup', '-D',\n>> \"$tempdir/backup\", '--compress',\n>> $cft->[0]\n>> ],\n>> qr/$cfail/,\n>> 'client ' . $cft->[2]);\n> Can we fix it to preserve the programmer's choices of line breaks\n> in comma-separated lists?\n\n\n\nI doubt there's something like that. You can freeze arbitrary blocks of \ncode like this (from the manual)\n\n\n#<<< format skipping: do not let perltidy change my nice formatting\n my @list = (1,\n 1, 1,\n 1, 2, 1,\n 1, 3, 3, 1,\n 1, 4, 6, 4, 1,);\n#>>>\n\n\nBut that gets old and ugly pretty quickly.\n\nThere is a --freeze-newlines option, but it's global. 
I don't think we \nwant that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-26 We 09:27, Tom Lane wrote:\n\n\nPeter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n\nOn 24.04.23 16:14, Tom Lane wrote:\n\n\nI certainly don't like its current behavior where adding/changing one\nline can have side-effects on nearby lines. But we have a proposal\nto clean that up, and I'm cautiously optimistic that it'll be better\nin future. Did you have other specific concerns?\n\n\n\n\n\n\nI think the worst is how it handles multi-line data structures like\n\n\n\n\n\n $newnode->command_ok(\n [\n 'psql', '-X',\n '-v', 'ON_ERROR_STOP=1',\n '-c', $upcmds,\n '-d', $oldnode->connstr($updb),\n ],\n \"ran version adaptation commands for database $updb\");\n\n\n\nYeah, I agree, there is no case where that doesn't suck. I don't\nmind it imposing specific placements of brackets and so on ---\nthat's very analogous to what pgindent will do. But it likes to\nre-flow comma-separated lists, and generally manages to make a\ncomplete logical hash of them when it does, as in your other\nexample:\n\n\n\n $node->command_fails_like(\n [\n 'pg_basebackup', '-D',\n \"$tempdir/backup\", '--compress',\n $cft->[0]\n ],\n qr/$cfail/,\n 'client ' . $cft->[2]);\n\n\n\nCan we fix it to preserve the programmer's choices of line breaks\nin comma-separated lists?\n\n\n\n\n\nI doubt there's something like that. You can freeze arbitrary\n blocks of code like this (from the manual)\n\n\n#<<< format skipping: do not let perltidy change my nice formatting\n my @list = (1,\n 1, 1,\n 1, 2, 1,\n 1, 3, 3, 1,\n 1, 4, 6, 4, 1,);\n#>>> \n\n\n\nBut that gets old and ugly pretty quickly.\nThere is a --freeze-newlines option, but it's global. I don't\n think we want that.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 26 Apr 2023 15:44:47 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-04-26 We 09:27, Tom Lane wrote:\n>> Yeah, I agree, there is no case where that doesn't suck. I don't\n>> mind it imposing specific placements of brackets and so on ---\n>> that's very analogous to what pgindent will do. But it likes to\n>> re-flow comma-separated lists, and generally manages to make a\n>> complete logical hash of them when it does, as in your other\n>> example:\n\n> I doubt there's something like that.\n\nI had a read-through of the latest version's man page, and found\nthis promising-looking entry:\n\n-boc, --break-at-old-comma-breakpoints\n\n The -boc flag is another way to prevent comma-separated lists from\n being reformatted. Using -boc on the above example, plus additional\n flags to retain the original style, yields\n\n # perltidy -boc -lp -pt=2 -vt=1 -vtc=1\n my @list = (1,\n 1, 1,\n 1, 2, 1,\n 1, 3, 3, 1,\n 1, 4, 6, 4, 1,);\n\n A disadvantage of this flag compared to the methods discussed above is\n that all tables in the file must already be nicely formatted.\n\nI've not tested this, but it looks like it would do what we need,\nmodulo needing to fix all the existing damage by hand ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Apr 2023 16:05:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-Feb-05, Andrew Dunstan wrote:\n\n> So here's a diff made from running perltidy v20221112 with the additional\n> setting --valign-exclusion-list=\", = => || && if unless\"\n\nI ran this experimentally with perltidy 20230309, and compared that with\nthe --novalign behavior (not to propose the latter -- just to be aware\nof what else is vertical alignment doing.)\n\nBased on the differences between both, I think we'll definitely want to\ninclude =~ and |= in this list, and I think we should discuss whether to\nalso include \"or\" (for \"do_stuff or die()\" type of constructs) and \"qw\"\n(mainly used in 'use Foo qw(one two)' import lists). All these have\neffects (albeit smaller than the list you gave) on our existing code.\n\n\nIf you change from an exclusion list to --novalign then you lose\nalignment of trailing # comments, which personally I find a loss, even\nthough they're still a multi-line effect. Another change would be that\nit ditches alignment of \"{\" but that only changes msvc/Install.pm, so I\nthink we shouldn't worry; and then there's this one:\n\n-use PostgreSQL::Test::Utils ();\n+use PostgreSQL::Test::Utils ();\n use PostgreSQL::Test::BackgroundPsql ();\n\nwhich I think we could just change to qw() if we cared enough (but I bet\nwe don't).\n\n\nAll in all, I think sticking to\n--valign-exclusion-list=\", = => =~ |= || && if or qw unless\"\nis a good deal.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Ellos andaban todos desnudos como su madre los parió, y también las mujeres,\naunque no vi más que una, harto moza, y todos los que yo vi eran todos\nmancebos, que ninguno vi de edad de más de XXX años\" (Cristóbal Colón)\n\n\n",
"msg_date": "Fri, 28 Apr 2023 11:25:45 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Wed, Apr 26, 2023 at 03:44:47PM -0400, Andrew Dunstan wrote:\n> On 2023-04-26 We 09:27, Tom Lane wrote:\n> I doubt there's something like that. You can freeze arbitrary blocks of code\n> like this (from the manual)\n> \n> #<<< format skipping: do not let perltidy change my nice formatting\n> my @list = (1,\n> 1, 1,\n> 1, 2, 1,\n> 1, 3, 3, 1,\n> 1, 4, 6, 4, 1,);\n> #>>> \n> \n> \n> But that gets old and ugly pretty quickly.\n\nCan those comments be added by a preprocessor before calling perltidy,\nand then removed on completion?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n",
"msg_date": "Fri, 28 Apr 2023 14:08:18 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-04-28 Fr 05:25, Alvaro Herrera wrote:\n> On 2023-Feb-05, Andrew Dunstan wrote:\n>\n>> So here's a diff made from running perltidy v20221112 with the additional\n>> setting --valign-exclusion-list=\", = => || && if unless\"\n> I ran this experimentally with perltidy 20230309, and compared that with\n> the --novalign behavior (not to propose the latter -- just to be aware\n> of what else is vertical alignment doing.)\n\n\nThanks for looking.\n\n\n>\n> Based on the differences between both, I think we'll definitely want to\n> include =~ and |= in this list, and I think we should discuss whether to\n> also include \"or\" (for \"do_stuff or die()\" type of constructs) and \"qw\"\n> (mainly used in 'use Foo qw(one two)' import lists). All these have\n> effects (albeit smaller than the list you gave) on our existing code.\n\n\nI'm good with all of these I think\n\n\n>\n>\n> If you change from an exclusion list to --novalign then you lose\n> alignment of trailing # comments, which personally I find a loss, even\n> though they're still a multi-line effect. 
Another change would be that\n> it ditches alignment of \"{\" but that only changes msvc/Install.pm, so I\n> think we shouldn't worry; and then there's this one:\n>\n> -use PostgreSQL::Test::Utils ();\n> +use PostgreSQL::Test::Utils ();\n> use PostgreSQL::Test::BackgroundPsql ();\n>\n> which I think we could just change to qw() if we cared enough (but I bet\n> we don't).\n\n\nYeah, me too.\n\n\n>\n>\n> All in all, I think sticking to\n> --valign-exclusion-list=\", = => =~ |= || && if or qw unless\"\n> is a good deal.\n>\n\nwfm\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-28 Fr 05:25, Alvaro Herrera\n wrote:\n\n\nOn 2023-Feb-05, Andrew Dunstan wrote:\n\n\n\nSo here's a diff made from running perltidy v20221112 with the additional\nsetting --valign-exclusion-list=\", = => || && if unless\"\n\n\n\nI ran this experimentally with perltidy 20230309, and compared that with\nthe --novalign behavior (not to propose the latter -- just to be aware\nof what else is vertical alignment doing.)\n\n\n\nThanks for looking.\n\n\n\n\n\n\nBased on the differences between both, I think we'll definitely want to\ninclude =~ and |= in this list, and I think we should discuss whether to\nalso include \"or\" (for \"do_stuff or die()\" type of constructs) and \"qw\"\n(mainly used in 'use Foo qw(one two)' import lists). All these have\neffects (albeit smaller than the list you gave) on our existing code.\n\n\n\nI'm good with all of these I think\n\n\n\n\n\n\n\nIf you change from an exclusion list to --novalign then you lose\nalignment of trailing # comments, which personally I find a loss, even\nthough they're still a multi-line effect. 
Another change would be that\nit ditches alignment of \"{\" but that only changes msvc/Install.pm, so I\nthink we shouldn't worry; and then there's this one:\n\n-use PostgreSQL::Test::Utils ();\n+use PostgreSQL::Test::Utils ();\n use PostgreSQL::Test::BackgroundPsql ();\n\nwhich I think we could just change to qw() if we cared enough (but I bet\nwe don't).\n\n\n\nYeah, me too.\n\n\n\n\n\n\n\nAll in all, I think sticking to\n--valign-exclusion-list=\", = => =~ |= || && if or qw unless\"\nis a good deal.\n\n\n\n\n\nwfm\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 30 Apr 2023 08:57:39 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-04-28 Fr 14:08, Bruce Momjian wrote:\n> On Wed, Apr 26, 2023 at 03:44:47PM -0400, Andrew Dunstan wrote:\n>> On 2023-04-26 We 09:27, Tom Lane wrote:\n>> I doubt there's something like that. You can freeze arbitrary blocks of code\n>> like this (from the manual)\n>>\n>> #<<< format skipping: do not let perltidy change my nice formatting\n>> my @list = (1,\n>> 1, 1,\n>> 1, 2, 1,\n>> 1, 3, 3, 1,\n>> 1, 4, 6, 4, 1,);\n>> #>>>\n>>\n>>\n>> But that gets old and ugly pretty quickly.\n> Can those comments be added by a preprocessor before calling perltidy,\n> and then removed on completion?\n>\n\nI imagine so, but we'd need a way of determining algorithmically which \nlines to protect. That might not be at all simple. And then we'd have \nthe maintenance burden of the preprocessor.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-28 Fr 14:08, Bruce Momjian\n wrote:\n\n\nOn Wed, Apr 26, 2023 at 03:44:47PM -0400, Andrew Dunstan wrote:\n\n\nOn 2023-04-26 We 09:27, Tom Lane wrote:\nI doubt there's something like that. You can freeze arbitrary blocks of code\nlike this (from the manual)\n\n#<<< format skipping: do not let perltidy change my nice formatting\n my @list = (1,\n 1, 1,\n 1, 2, 1,\n 1, 3, 3, 1,\n 1, 4, 6, 4, 1,);\n#>>> \n\n\nBut that gets old and ugly pretty quickly.\n\n\n\nCan those comments be added by a preprocessor before calling perltidy,\nand then removed on completion?\n\n\n\n\n\nI imagine so, but we'd need a way of determining algorithmically\n which lines to protect. That might not be at all simple. And then\n we'd have the maintenance burden of the preprocessor.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 30 Apr 2023 09:02:04 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-04-28 Fr 14:08, Bruce Momjian wrote:\n>> Can those comments be added by a preprocessor before calling perltidy,\n>> and then removed on completion?\n\n> I imagine so, but we'd need a way of determining algorithmically which \n> lines to protect. That might not be at all simple. And then we'd have \n> the maintenance burden of the preprocessor.\n\nYeah, it's hard to see how you'd do that without writing a full Perl\nparser.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 30 Apr 2023 10:32:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "I wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I doubt there's something like that.\n\n> I had a read-through of the latest version's man page, and found\n> this promising-looking entry:\n> \t-boc, --break-at-old-comma-breakpoints\n\nSadly, this seems completely not ready for prime time. I experimented\nwith it under perltidy 20230309, and found that it caused hundreds\nof kilobytes of gratuitous changes that don't seem to have a direct\nconnection to the claimed purpose. Most of these seemed to be from\nforcing a line break after a function call's open paren, like\n\n@@ -50,10 +50,12 @@ detects_heap_corruption(\n #\n fresh_test_table('test');\n $node->safe_psql('postgres', q(VACUUM (FREEZE, DISABLE_PAGE_SKIPPING) test));\n-detects_no_corruption(\"verify_heapam('test')\",\n+detects_no_corruption(\n+\t\"verify_heapam('test')\",\n \t\"all-frozen not corrupted table\");\n corrupt_first_page('test');\n-detects_heap_corruption(\"verify_heapam('test')\",\n+detects_heap_corruption(\n+\t\"verify_heapam('test')\",\n \t\"all-frozen corrupted table\");\n detects_no_corruption(\n \t\"verify_heapam('test', skip := 'all-frozen')\",\n\nalthough in some places it just wanted to insert a space, like this:\n\n@@ -77,9 +81,9 @@ print \"standby 2: $result\\n\";\n is($result, qq(33|0|t), 'check streamed sequence content on standby 2');\n \n # Check that only READ-only queries can run on standbys\n-is($node_standby_1->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),\n+is( $node_standby_1->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),\n \t3, 'read-only queries on standby 1');\n-is($node_standby_2->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),\n+is( $node_standby_2->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),\n \t3, 'read-only queries on standby 2');\n \n # Tests for connection parameter target_session_attrs\n\n\nSo I don't think we want that. 
Maybe in some future version it'll\nbe more under control.\n\nBarring objections, I'll use the attached on Friday.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 17 May 2023 17:10:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-05-17 We 17:10, Tom Lane wrote:\n> I wrote:\n>> Andrew Dunstan<andrew@dunslane.net> writes:\n>>> I doubt there's something like that.\n>> I had a read-through of the latest version's man page, and found\n>> this promising-looking entry:\n>> \t-boc, --break-at-old-comma-breakpoints\n> Sadly, this seems completely not ready for prime time. I experimented\n> with it under perltidy 20230309, and found that it caused hundreds\n> of kilobytes of gratuitous changes that don't seem to have a direct\n> connection to the claimed purpose. Most of these seemed to be from\n> forcing a line break after a function call's open paren, like\n>\n> @@ -50,10 +50,12 @@ detects_heap_corruption(\n> #\n> fresh_test_table('test');\n> $node->safe_psql('postgres', q(VACUUM (FREEZE, DISABLE_PAGE_SKIPPING) test));\n> -detects_no_corruption(\"verify_heapam('test')\",\n> +detects_no_corruption(\n> +\t\"verify_heapam('test')\",\n> \t\"all-frozen not corrupted table\");\n> corrupt_first_page('test');\n> -detects_heap_corruption(\"verify_heapam('test')\",\n> +detects_heap_corruption(\n> +\t\"verify_heapam('test')\",\n> \t\"all-frozen corrupted table\");\n> detects_no_corruption(\n> \t\"verify_heapam('test', skip := 'all-frozen')\",\n>\n> although in some places it just wanted to insert a space, like this:\n>\n> @@ -77,9 +81,9 @@ print \"standby 2: $result\\n\";\n> is($result, qq(33|0|t), 'check streamed sequence content on standby 2');\n> \n> # Check that only READ-only queries can run on standbys\n> -is($node_standby_1->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),\n> +is( $node_standby_1->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),\n> \t3, 'read-only queries on standby 1');\n> -is($node_standby_2->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),\n> +is( $node_standby_2->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),\n> \t3, 'read-only queries on standby 2');\n> \n> # Tests for connection parameter target_session_attrs\n>\n>\n> So I don't think we 
want that. Maybe in some future version it'll\n> be more under control.\n>\n> Barring objections, I'll use the attached on Friday.\n>\n> \t\t\t\n\n\nLGTM\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-05-17 We 17:10, Tom Lane wrote:\n\n\nI wrote:\n\n\nAndrew Dunstan <andrew@dunslane.net> writes:\n\n\nI doubt there's something like that.\n\n\n\n\n\n\nI had a read-through of the latest version's man page, and found\nthis promising-looking entry:\n\t-boc, --break-at-old-comma-breakpoints\n\n\n\nSadly, this seems completely not ready for prime time. I experimented\nwith it under perltidy 20230309, and found that it caused hundreds\nof kilobytes of gratuitous changes that don't seem to have a direct\nconnection to the claimed purpose. Most of these seemed to be from\nforcing a line break after a function call's open paren, like\n\n@@ -50,10 +50,12 @@ detects_heap_corruption(\n #\n fresh_test_table('test');\n $node->safe_psql('postgres', q(VACUUM (FREEZE, DISABLE_PAGE_SKIPPING) test));\n-detects_no_corruption(\"verify_heapam('test')\",\n+detects_no_corruption(\n+\t\"verify_heapam('test')\",\n \t\"all-frozen not corrupted table\");\n corrupt_first_page('test');\n-detects_heap_corruption(\"verify_heapam('test')\",\n+detects_heap_corruption(\n+\t\"verify_heapam('test')\",\n \t\"all-frozen corrupted table\");\n detects_no_corruption(\n \t\"verify_heapam('test', skip := 'all-frozen')\",\n\nalthough in some places it just wanted to insert a space, like this:\n\n@@ -77,9 +81,9 @@ print \"standby 2: $result\\n\";\n is($result, qq(33|0|t), 'check streamed sequence content on standby 2');\n \n # Check that only READ-only queries can run on standbys\n-is($node_standby_1->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),\n+is( $node_standby_1->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),\n \t3, 'read-only queries on standby 1');\n-is($node_standby_2->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),\n+is( 
$node_standby_2->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),\n \t3, 'read-only queries on standby 2');\n \n # Tests for connection parameter target_session_attrs\n\n\nSo I don't think we want that. Maybe in some future version it'll\nbe more under control.\n\nBarring objections, I'll use the attached on Friday.\n\n\t\t\t\n\n\n\nLGTM\n\n\ncheers\n\n\nandrew\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 18 May 2023 08:59:28 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, 22 Apr 2023 at 13:42, Andrew Dunstan <andrew@dunslane.net> wrote:\n> Perhaps we should start with a buildfarm module, which would run pg_indent --show-diff. That would only need to run on one animal, so a failure wouldn't send the whole buildfarm red. This would be pretty easy to do.\n\nJust to be clear on where we are. Is there anything blocking us from\ndoing this, except for the PG16 branch cut? (that I guess is planned\nsomewhere in July?)\n\nJust doing this for pgindent and not for perltidy would already be a\nhuge improvement over the current situation IMHO.\n\n\n",
"msg_date": "Thu, 15 Jun 2023 17:26:58 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-06-15 Th 11:26, Jelte Fennema wrote:\n> On Sat, 22 Apr 2023 at 13:42, Andrew Dunstan<andrew@dunslane.net> wrote:\n>> Perhaps we should start with a buildfarm module, which would run pg_indent --show-diff. That would only need to run on one animal, so a failure wouldn't send the whole buildfarm red. This would be pretty easy to do.\n> Just to be clear on where we are. Is there anything blocking us from\n> doing this, except for the PG16 branch cut? (that I guess is planned\n> somewhere in July?)\n>\n> Just doing this for pgindent and not for perltidy would already be a\n> huge improvement over the current situation IMHO.\n\n\nThe short answer is that some high priority demands from $dayjob got in \nthe way. However, I hope to have it done soon.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-15 Th 11:26, Jelte Fennema\n wrote:\n\n\nOn Sat, 22 Apr 2023 at 13:42, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\nPerhaps we should start with a buildfarm module, which would run pg_indent --show-diff. That would only need to run on one animal, so a failure wouldn't send the whole buildfarm red. This would be pretty easy to do.\n\n\n\nJust to be clear on where we are. Is there anything blocking us from\ndoing this, except for the PG16 branch cut? (that I guess is planned\nsomewhere in July?)\n\nJust doing this for pgindent and not for perltidy would already be a\nhuge improvement over the current situation IMHO.\n\n\n\nThe short answer is that some high priority demands from $dayjob\n got in the way. However, I hope to have it done soon. \n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 15 Jun 2023 12:12:27 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-06-15 Th 12:12, Andrew Dunstan wrote:\n>\n>\n> On 2023-06-15 Th 11:26, Jelte Fennema wrote:\n>> On Sat, 22 Apr 2023 at 13:42, Andrew Dunstan<andrew@dunslane.net> wrote:\n>>> Perhaps we should start with a buildfarm module, which would run pg_indent --show-diff. That would only need to run on one animal, so a failure wouldn't send the whole buildfarm red. This would be pretty easy to do.\n>> Just to be clear on where we are. Is there anything blocking us from\n>> doing this, except for the PG16 branch cut? (that I guess is planned\n>> somewhere in July?)\n>>\n>> Just doing this for pgindent and not for perltidy would already be a\n>> huge improvement over the current situation IMHO.\n>\n>\n> The short answer is that some high priority demands from $dayjob got \n> in the way. However, I hope to have it done soon.\n>\n\n\nSee \n<https://github.com/PGBuildFarm/client-code/commit/f9c1c15048b412d34ccda8020d989b3a7b566c05>\n\n\nI have set up a new buildfarm animal called koel which will run the module.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-15 Th 12:12, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2023-06-15 Th 11:26, Jelte Fennema\n wrote:\n\n\nOn Sat, 22 Apr 2023 at 13:42, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\nPerhaps we should start with a buildfarm module, which would run pg_indent --show-diff. That would only need to run on one animal, so a failure wouldn't send the whole buildfarm red. This would be pretty easy to do.\n\n\nJust to be clear on where we are. Is there anything blocking us from\ndoing this, except for the PG16 branch cut? (that I guess is planned\nsomewhere in July?)\n\nJust doing this for pgindent and not for perltidy would already be a\nhuge improvement over the current situation IMHO.\n\n\n\nThe short answer is that some high priority demands from\n $dayjob got in the way. However, I hope to have it done soon. 
\n\n\n\n\n\n\nSee\n<https://github.com/PGBuildFarm/client-code/commit/f9c1c15048b412d34ccda8020d989b3a7b566c05>\n\n\nI have set up a new buildfarm animal called koel which will run\n the module.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 17 Jun 2023 10:08:32 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I have set up a new buildfarm animal called koel which will run the module.\n\nIs koel tracking the right repo? It just spit up with a bunch of\ndiffs that seem to have little to do with the commit it's claiming\ncaused them:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=koel&dt=2023-06-19%2019%3A49%3A03\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Jun 2023 17:07:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Jun 17, 2023 at 10:08:32AM -0400, Andrew Dunstan wrote:\n> See <https://github.com/PGBuildFarm/client-code/commit/f9c1c15048b412d34ccda8020d989b3a7b566c05>\n> I have set up a new buildfarm animal called koel which will run the module.\n\nThat's really cool! Thanks for taking the time to do that!\n--\nMichael",
"msg_date": "Tue, 20 Jun 2023 11:09:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-06-19 Mo 17:07, Tom Lane wrote:\n> Andrew Dunstan<andrew@dunslane.net> writes:\n>> I have set up a new buildfarm animal called koel which will run the module.\n> Is koel tracking the right repo? It just spit up with a bunch of\n> diffs that seem to have little to do with the commit it's claiming\n> caused them:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=koel&dt=2023-06-19%2019%3A49%3A03\n>\n> \t\t\t\n\n\nYeah, I changed it so that instead of just checking new commits it would \ncheck the whole tree. The problem with the incremental approach is that \nthe next run it might turn green again but the issue would not have been \nfixed.\n\nI think this is a one-off issue. Once we clean up the tree the problem \nwould disappear and the commits it shows would be correct. I imaging \nthat's going to happen any day now?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-19 Mo 17:07, Tom Lane wrote:\n\n\nAndrew Dunstan <andrew@dunslane.net> writes:\n\n\nI have set up a new buildfarm animal called koel which will run the module.\n\n\n\nIs koel tracking the right repo? It just spit up with a bunch of\ndiffs that seem to have little to do with the commit it's claiming\ncaused them:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=koel&dt=2023-06-19%2019%3A49%3A03\n\n\t\t\t\n\n\n\nYeah, I changed it so that instead of just checking new commits\n it would check the whole tree. The problem with the incremental\n approach is that the next run it might turn green again but the\n issue would not have been fixed.\nI think this is a one-off issue. Once we clean up the tree the\n problem would disappear and the commits it shows would be correct.\n I imaging that's going to happen any day now?\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 20 Jun 2023 08:04:39 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-06-19 Mo 17:07, Tom Lane wrote:\n>> Is koel tracking the right repo? It just spit up with a bunch of\n>> diffs that seem to have little to do with the commit it's claiming\n>> caused them:\n\n> Yeah, I changed it so that instead of just checking new commits it would \n> check the whole tree. The problem with the incremental approach is that \n> the next run it might turn green again but the issue would not have been \n> fixed.\n\nAh.\n\n> I think this is a one-off issue. Once we clean up the tree the problem \n> would disappear and the commits it shows would be correct. I imaging \n> that's going to happen any day now?\n\nI can go fix the problems now that we know there are some (already).\nHowever, if what you're saying is that koel only checks recently-changed\nfiles, that's going to be pretty misleading in future too. If people\ndon't react to such reports right away, they'll disappear, no?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Jun 2023 09:08:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-06-20 Tu 09:08, Tom Lane wrote:\n> Andrew Dunstan<andrew@dunslane.net> writes:\n>> On 2023-06-19 Mo 17:07, Tom Lane wrote:\n>>> Is koel tracking the right repo? It just spit up with a bunch of\n>>> diffs that seem to have little to do with the commit it's claiming\n>>> caused them:\n>> Yeah, I changed it so that instead of just checking new commits it would\n>> check the whole tree. The problem with the incremental approach is that\n>> the next run it might turn green again but the issue would not have been\n>> fixed.\n> Ah.\n>\n>> I think this is a one-off issue. Once we clean up the tree the problem\n>> would disappear and the commits it shows would be correct. I imaging\n>> that's going to happen any day now?\n> I can go fix the problems now that we know there are some (already).\n> However, if what you're saying is that koel only checks recently-changed\n> files, that's going to be pretty misleading in future too. If people\n> don't react to such reports right away, they'll disappear, no?\n>\n> \t\t\t\n\n\nThat's what would have happened if I hadn't changed the way it worked \n(and that's why I changed it). Now it doesn't just check recent commits, \nit checks the whole tree, and will stay red until the tree is fixed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Tue, 20 Jun 2023 09:21:14 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Jun 17, 2023 at 7:08 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> I have set up a new buildfarm animal called koel which will run the module.\n\nI'm starting to have doubts about this policy. There have now been\nquite a few follow-up \"fixes\" to indentation issues that koel\ncomplained about. None of these fixups have been included in\n.git-blame-ignore-revs. If things continue like this then \"git blame\"\nis bound to become much less usable over time.\n\nI don't think that it makes sense to invent yet another rule for\n.git-blame-ignore-revs, though. Will we need another buildfarm member\nto enforce that rule, too?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 11 Aug 2023 13:59:40 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-11 13:59:40 -0700, Peter Geoghegan wrote:\n> On Sat, Jun 17, 2023 at 7:08 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > I have set up a new buildfarm animal called koel which will run the module.\n> \n> I'm starting to have doubts about this policy. There have now been\n> quite a few follow-up \"fixes\" to indentation issues that koel\n> complained about. None of these fixups have been included in\n> .git-blame-ignore-revs. If things continue like this then \"git blame\"\n> is bound to become much less usable over time.\n\nI'm not sure I buy that that's going to be a huge problem - most of the time\nsuch fixups are pretty small compared to larger reindents.\n\n\n> I don't think that it makes sense to invent yet another rule for\n> .git-blame-ignore-revs, though. Will we need another buildfarm member\n> to enforce that rule, too?\n\nWe could a test that fails when there's some mis-indented code. That seems\nlike it might catch things earlier?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 11 Aug 2023 14:25:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 2:25 PM Andres Freund <andres@anarazel.de> wrote:\n> > I don't think that it makes sense to invent yet another rule for\n> > .git-blame-ignore-revs, though. Will we need another buildfarm member\n> > to enforce that rule, too?\n>\n> We could a test that fails when there's some mis-indented code. That seems\n> like it might catch things earlier?\n\nIt definitely would. That would go a long way towards addressing my\nconcerns. But I suspect that that would run into problems that stem\nfrom the fact that the buildfarm is testing something that isn't all\nthat simple. Don't typedefs need to be downloaded from some other\nblessed buildfarm animal?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 11 Aug 2023 14:48:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Aug 11, 2023 at 2:25 PM Andres Freund <andres@anarazel.de> wrote:\n>> We could a test that fails when there's some mis-indented code. That seems\n>> like it might catch things earlier?\n\n+1 for including this in CI tests\n\n> It definitely would. That would go a long way towards addressing my\n> concerns. But I suspect that that would run into problems that stem\n> from the fact that the buildfarm is testing something that isn't all\n> that simple. Don't typedefs need to be downloaded from some other\n> blessed buildfarm animal?\n\nNo. I presume koel is using src/tools/pgindent/typedefs.list,\nwhich has always been the \"canonical\" list but up to now we've\nbeen lazy about maintaining it. Part of the new regime is that\ntypedefs.list should now be updated on-the-fly by patches that\nadd new typedefs.\n\nWe should still compare against the buildfarm's list periodically;\nbut I imagine that the primary result of that will be to remove\nno-longer-used typedefs from typedefs.list.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Aug 2023 18:30:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 3:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> No. I presume koel is using src/tools/pgindent/typedefs.list,\n> which has always been the \"canonical\" list but up to now we've\n> been lazy about maintaining it. Part of the new regime is that\n> typedefs.list should now be updated on-the-fly by patches that\n> add new typedefs.\n\nMy workflow up until now has avoiding making updates to typedefs.list\nin patches. I only update typedefs locally, for long enough to indent\nmy code. The final patch doesn't retain any typedefs.list changes.\n\n> We should still compare against the buildfarm's list periodically;\n> but I imagine that the primary result of that will be to remove\n> no-longer-used typedefs from typedefs.list.\n\nI believe that I came up with my current workflow due to the\ndifficulty of maintaining the typedef file itself. Random\nplatform/binutils implementation details created a lot of noise,\npresumably because my setup wasn't exactly the same as Bruce's setup,\nin whatever way. For example, the order of certain lines would change,\nin a way that had nothing whatsoever to do with structs that my patch\nadded.\n\nI guess that I can't do that anymore. Hopefully maintaining the\ntypedefs.list file isn't as inconvenient as it once seemed to me to\nbe.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 11 Aug 2023 15:46:40 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> My workflow up until now has avoiding making updates to typedefs.list\n> in patches. I only update typedefs locally, for long enough to indent\n> my code. The final patch doesn't retain any typedefs.list changes.\n\nYeah, I've done the same and will have to stop.\n\n> I guess that I can't do that anymore. Hopefully maintaining the\n> typedefs.list file isn't as inconvenient as it once seemed to me to\n> be.\n\nI don't think it'll be a problem. If your rule is \"add new typedef\nnames added by your patch to typedefs.list, keeping them in\nalphabetical order\" then it doesn't seem very complicated, and\nhopefully conflicts between concurrently-developed patches won't\nbe common.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Aug 2023 19:02:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I'm starting to have doubts about this policy. There have now been\n> quite a few follow-up \"fixes\" to indentation issues that koel\n> complained about. None of these fixups have been included in\n> .git-blame-ignore-revs. If things continue like this then \"git blame\"\n> is bound to become much less usable over time.\n\nFWIW, I'm much more optimistic than that. I think what we're seeing\nis just the predictable result of not all committers having yet\nincorporated \"pgindent it before committing\" into their workflow.\nThe need for followup fixes should diminish as people start doing\nthat. If you want to hurry things along, peer pressure on committers\nwho clearly aren't bothering is the solution.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Aug 2023 19:17:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Fri, 11 Aug 2023 at 23:00, Peter Geoghegan <pg@bowt.ie> wrote:\n> I'm starting to have doubts about this policy. There have now been\n> quite a few follow-up \"fixes\" to indentation issues that koel\n> complained about.\n\nI think one thing that would help a lot in reducing the is for\ncommitters to set up the local git commit hook that's on the wiki:\nhttps://wiki.postgresql.org/wiki/Working_with_Git\n\nThat one fails the commit if there's wrongly indented files in the\ncommit. And if you still want to opt out for whatever reason you can\nuse git commit --no-verify\n\n\n",
"msg_date": "Sat, 12 Aug 2023 01:18:04 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-11 18:30:02 -0400, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Fri, Aug 11, 2023 at 2:25 PM Andres Freund <andres@anarazel.de> wrote:\n> >> We could a test that fails when there's some mis-indented code. That seems\n> >> like it might catch things earlier?\n> \n> +1 for including this in CI tests\n\nI didn't even mean CI - I meant 'make check-world' / 'meson test'. Which of\ncourse would include CI automatically.\n\n\n> > It definitely would. That would go a long way towards addressing my\n> > concerns. But I suspect that that would run into problems that stem\n> > from the fact that the buildfarm is testing something that isn't all\n> > that simple. Don't typedefs need to be downloaded from some other\n> > blessed buildfarm animal?\n> \n> No. I presume koel is using src/tools/pgindent/typedefs.list,\n> which has always been the \"canonical\" list but up to now we've\n> been lazy about maintaining it. Part of the new regime is that\n> typedefs.list should now be updated on-the-fly by patches that\n> add new typedefs.\n\nYea. Otherwise nobody else can indent reliably, without repeating the work of\nadding typedefs.list entries of all the patches since the last time it was\nupdated in the repository.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 11 Aug 2023 16:20:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-08-11 18:30:02 -0400, Tom Lane wrote:\n>> +1 for including this in CI tests\n\n> I didn't even mean CI - I meant 'make check-world' / 'meson test'. Which of\n> course would include CI automatically.\n\nHmm. I'm allergic to anything that significantly increases the cost\nof check-world, and this seems like it'd do that.\n\nMaybe we could automate it, but not as part of check-world per se?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Aug 2023 20:11:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 08:11:34PM -0400, Tom Lane wrote:\n> Hmm. I'm allergic to anything that significantly increases the cost\n> of check-world, and this seems like it'd do that.\n> \n> Maybe we could automate it, but not as part of check-world per se?\n\nIt does not have to be part of check-world by default, as we could\nmake it optional with PG_TEST_EXTRA. I bet that most committers set\nthis option for most of the test suites anyway, so the extra cost is\nOK from here. I don't find a single indent run to be that costly,\nespecially with parallelism:\n$ time ./src/tools/pgindent/pgindent .\nreal 0m5.039s\nuser 0m3.403s\nsys 0m1.540s\n--\nMichael",
"msg_date": "Sat, 12 Aug 2023 09:27:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-11 20:11:34 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-08-11 18:30:02 -0400, Tom Lane wrote:\n> >> +1 for including this in CI tests\n>\n> > I didn't even mean CI - I meant 'make check-world' / 'meson test'. Which of\n> > course would include CI automatically.\n>\n> Hmm. I'm allergic to anything that significantly increases the cost\n> of check-world, and this seems like it'd do that.\n\nHm, compared to the cost of check-world it's not that large, but still,\nannoying to make it larger.\n\nWe can make it lot cheaper, but perhaps not in a general enough fashion that\nit's suitable for a test.\n\npgindent already can query git (for --commit). We could teach pgindent to\nask git what remote branch is being tracked, and constructed a list of files\nof the difference between the remote branch and the local branch?\n\nThat option could do something like:\ngit diff --name-only $(git rev-parse --abbrev-ref --symbolic-full-name @{upstream})\n\nThat's pretty quick, even for a relatively large delta.\n\n\n> Maybe we could automate it, but not as part of check-world per se?\n\nWe should definitely do that. Another related thing that'd be useful to\nscript is updating typedefs.list with the additional typedefs found\nlocally. Right now the script for that still lives in the\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 11 Aug 2023 17:46:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-08-11 Fr 19:17, Tom Lane wrote:\n> Peter Geoghegan<pg@bowt.ie> writes:\n>> I'm starting to have doubts about this policy. There have now been\n>> quite a few follow-up \"fixes\" to indentation issues that koel\n>> complained about. None of these fixups have been included in\n>> .git-blame-ignore-revs. If things continue like this then \"git blame\"\n>> is bound to become much less usable over time.\n> FWIW, I'm much more optimistic than that. I think what we're seeing\n> is just the predictable result of not all committers having yet\n> incorporated \"pgindent it before committing\" into their workflow.\n> The need for followup fixes should diminish as people start doing\n> that. If you want to hurry things along, peer pressure on committers\n> who clearly aren't bothering is the solution.\n\n\nYeah, part of the point of creating koel was to give committers a bit of \na nudge in that direction.\n\nWith a git pre-commit hook it's pretty painless.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Sat, 12 Aug 2023 11:57:25 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-08-11 Fr 19:02, Tom Lane wrote:\n> Peter Geoghegan<pg@bowt.ie> writes:\n>> My workflow up until now has avoiding making updates to typedefs.list\n>> in patches. I only update typedefs locally, for long enough to indent\n>> my code. The final patch doesn't retain any typedefs.list changes.\n> Yeah, I've done the same and will have to stop.\n>\n>> I guess that I can't do that anymore. Hopefully maintaining the\n>> typedefs.list file isn't as inconvenient as it once seemed to me to\n>> be.\n> I don't think it'll be a problem. If your rule is \"add new typedef\n> names added by your patch to typedefs.list, keeping them in\n> alphabetical order\" then it doesn't seem very complicated, and\n> hopefully conflicts between concurrently-developed patches won't\n> be common.\n>\n> \t\t\t\n\n\nMy recollection is that missing typedefs cause indentation that kinda \nsticks out like a sore thumb.\n\nThe reason we moved to a buildfarm based typedefs list was that some \ntypedefs are platform dependent, so any list really needs to be the \nunion of the found typedefs on various platforms, and the buildfarm was \na convenient vehicle for doing that. But that doesn't mean you shouldn't \nmanually add a typedef you have added in your code.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Sat, 12 Aug 2023 17:03:37 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-12 17:03:37 -0400, Andrew Dunstan wrote:\n> On 2023-08-11 Fr 19:02, Tom Lane wrote:\n> > Peter Geoghegan<pg@bowt.ie> writes:\n> > > My workflow up until now has avoiding making updates to typedefs.list\n> > > in patches. I only update typedefs locally, for long enough to indent\n> > > my code. The final patch doesn't retain any typedefs.list changes.\n> > Yeah, I've done the same and will have to stop.\n> > \n> > > I guess that I can't do that anymore. Hopefully maintaining the\n> > > typedefs.list file isn't as inconvenient as it once seemed to me to\n> > > be.\n> > I don't think it'll be a problem. If your rule is \"add new typedef\n> > names added by your patch to typedefs.list, keeping them in\n> > alphabetical order\" then it doesn't seem very complicated, and\n> > hopefully conflicts between concurrently-developed patches won't\n> > be common.\n>\n> My recollection is that missing typedefs cause indentation that kinda sticks\n> out like a sore thumb.\n> \n> The reason we moved to a buildfarm based typedefs list was that some\n> typedefs are platform dependent, so any list really needs to be the union of\n> the found typedefs on various platforms, and the buildfarm was a convenient\n> vehicle for doing that. But that doesn't mean you shouldn't manually add a\n> typedef you have added in your code.\n\nIt's a somewhat annoying task though, find all the typedefs, add them to the\nright place in the file (we have an out of order entry right now). I think a\nscript that *adds* (but doesn't remove) local typedefs would make this less\npainful.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 12 Aug 2023 14:14:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-08-12 17:03:37 -0400, Andrew Dunstan wrote:\n>> My recollection is that missing typedefs cause indentation that kinda sticks\n>> out like a sore thumb.\n\nYeah, it's usually pretty obvious: \"typedef *var\" gets changed to\n\"typedef * var\", and similar oddities.\n\n> It's a somewhat annoying task though, find all the typedefs, add them to the\n> right place in the file (we have an out of order entry right now). I think a\n> script that *adds* (but doesn't remove) local typedefs would make this less\n> painful.\n\nMy practice has always been \"add typedefs until pgindent doesn't do\nanything I don't want\". If you have a new typedef that doesn't happen\nto be used in a way that pgindent mangles, it's not that critical\nto get it into the file right away.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 12 Aug 2023 18:46:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Aug 12, 2023 at 3:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > It's a somewhat annoying task though, find all the typedefs, add them to the\n> > right place in the file (we have an out of order entry right now). I think a\n> > script that *adds* (but doesn't remove) local typedefs would make this less\n> > painful.\n>\n> My practice has always been \"add typedefs until pgindent doesn't do\n> anything I don't want\". If you have a new typedef that doesn't happen\n> to be used in a way that pgindent mangles, it's not that critical\n> to get it into the file right away.\n\nWe seem to be seriously contemplating making every patch author do\nthis every time they need to get the tests to pass (after adding or\nrenaming a struct). Is that really an improvement over the old status\nquo?\n\nIn principle I'm in favor of strictly enforcing indentation rules like\nthis. But it seems likely that our current tooling just isn't up to\nthe task.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 12 Aug 2023 17:13:27 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> We seem to be seriously contemplating making every patch author do\n> this every time they need to get the tests to pass (after adding or\n> renaming a struct). Is that really an improvement over the old status\n> quo?\n\nHm. I was envisioning that we should expect committers to deal\nwith this, not original patch submitters. So that's an argument\nagainst including it in the CI tests. But I'm in favor of anything\nwe can do to make it more painless for committers to fix up patch\nindentation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 12 Aug 2023 20:20:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Aug 12, 2023 at 5:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hm. I was envisioning that we should expect committers to deal\n> with this, not original patch submitters. So that's an argument\n> against including it in the CI tests. But I'm in favor of anything\n> we can do to make it more painless for committers to fix up patch\n> indentation.\n\nMaking it a special responsibility for committers comes with the same\nset of problems that we see with catversion bumps. People are much\nmore likely to forget to do something that must happen last.\n\nMaybe I'm wrong -- maybe the new policy is practicable. It might even\nturn out to be worth the bother. Time will tell.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 12 Aug 2023 17:53:29 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-08-12 Sa 20:53, Peter Geoghegan wrote:\n> On Sat, Aug 12, 2023 at 5:20 PM Tom Lane<tgl@sss.pgh.pa.us> wrote:\n>> Hm. I was envisioning that we should expect committers to deal\n>> with this, not original patch submitters. So that's an argument\n>> against including it in the CI tests. But I'm in favor of anything\n>> we can do to make it more painless for committers to fix up patch\n>> indentation.\n\n\nI agree with this.\n\n\n> Making it a special responsibility for committers comes with the same\n> set of problems that we see with catversion bumps. People are much\n> more likely to forget to do something that must happen last.\n\n\nAfter I'd been caught by this once or twice I implemented a git hook \ntest for that too - in fact it was the first hook I did. It's not \nperfect but it's saved me a couple of times:\n\n\ncheck_catalog_version () {\n\n # only do this on master\n test \"$branch\" = \"master\" || return 0\n\n case \"$files\" in\n *src/include/catalog/catversion.h*)\n return 0;\n ;;\n *src/include/catalog/*)\n ;;\n *)\n return 0;\n ;;\n esac\n\n # changes include catalog but not catversion.h, so warn about it\n {\n echo 'Commit on master alters catalog but catversion not bumped'\n echo 'It can be forced with git commit --no-verify'\n } >&2\n\n exit 1\n}\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Sun, 13 Aug 2023 10:33:21 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sun, Aug 13, 2023 at 10:33:21AM -0400, Andrew Dunstan wrote:\n> After I'd been caught by this once or twice I implemented a git hook test\n> for that too - in fact it was the first hook I did. It's not perfect but\n> it's saved me a couple of times:\n> \n> check_catalog_version () {\n\nI find that pretty cool. Thanks for sharing.\n--\nMichael",
"msg_date": "Mon, 14 Aug 2023 08:33:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 12.08.23 23:14, Andres Freund wrote:\n> It's a somewhat annoying task though, find all the typedefs, add them to the\n> right place in the file (we have an out of order entry right now). I think a\n> script that*adds* (but doesn't remove) local typedefs would make this less\n> painful.\n\nI was puzzled once that there does not appear to be such a script \navailable. Whatever the buildfarm does (before it merges it all \ntogether) should be available locally. Then the workflow could be\n\ntype type type\ncompile\nupdate typedefs\npgindent\ncommit\n\n\n",
"msg_date": "Mon, 14 Aug 2023 16:04:04 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 12.08.23 02:11, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2023-08-11 18:30:02 -0400, Tom Lane wrote:\n>>> +1 for including this in CI tests\n> \n>> I didn't even mean CI - I meant 'make check-world' / 'meson test'. Which of\n>> course would include CI automatically.\n> \n> Hmm. I'm allergic to anything that significantly increases the cost\n> of check-world, and this seems like it'd do that.\n> \n> Maybe we could automate it, but not as part of check-world per se?\n\nAlso, during development, the code in progress is not always perfectly \nformatted, but I do want to keep running the test suites.\n\n\n\n",
"msg_date": "Mon, 14 Aug 2023 16:08:30 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-08-14 Mo 10:04, Peter Eisentraut wrote:\n> On 12.08.23 23:14, Andres Freund wrote:\n>> It's a somewhat annoying task though, find all the typedefs, add them \n>> to the\n>> right place in the file (we have an out of order entry right now). I \n>> think a\n>> script that*adds* (but doesn't remove) local typedefs would make \n>> this less\n>> painful.\n>\n> I was puzzled once that there does not appear to be such a script \n> available. Whatever the buildfarm does (before it merges it all \n> together) should be available locally. Then the workflow could be\n>\n> type type type\n> compile\n> update typedefs\n> pgindent\n> commit\n\n\n\nIt's a bit more complicated :-)\n\nYou can see what the buildfarm does at \n<https://github.com/PGBuildFarm/client-code/blob/ec4cf43613a74cb88f228efcde09931cf9fd57e7/run_build.pl#L2562> \nIt's been somewhat fragile over the years, which most people other than \nTom and I have probably not noticed.\n\nOn most platforms it needs postgres to have been installed before \nlooking for the typedefs.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Mon, 14 Aug 2023 15:58:34 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 01:59:40PM -0700, Peter Geoghegan wrote:\n> I'm starting to have doubts about this policy. There have now been\n> quite a few follow-up \"fixes\" to indentation issues that koel\n> complained about. None of these fixups have been included in\n> .git-blame-ignore-revs. If things continue like this then \"git blame\"\n> is bound to become much less usable over time.\n\nShould we add those? Patch attached.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 15 Aug 2023 13:31:09 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 1:31 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Should we add those? Patch attached.\n\nI think that that makes sense. I just don't want to normalize updating\n.git-blame-ignore-revs very frequently. (Actually, it's more like I\ndon't want to normalize any scheme that makes updating the ignore list\nvery frequently start to seem reasonable.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 16 Aug 2023 13:15:55 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 01:15:55PM -0700, Peter Geoghegan wrote:\n> On Tue, Aug 15, 2023 at 1:31 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Should we add those? Patch attached.\n> \n> I think that that makes sense.\n\nCommitted.\n\n> I just don't want to normalize updating\n> .git-blame-ignore-revs very frequently. (Actually, it's more like I\n> don't want to normalize any scheme that makes updating the ignore list\n> very frequently start to seem reasonable.)\n\nAgreed. I've found myself habitually running pgindent since becoming a\ncommitter, but I'm sure I'll forget it one of these days. From a quick\nskim of this thread, it sounds like a pre-commit hook [0] might be the best\noption at the moment.\n\n[0] https://wiki.postgresql.org/wiki/Working_with_Git#Using_git_hooks\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 17 Aug 2023 07:40:41 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sat, Aug 12, 2023 at 5:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Maybe I'm wrong -- maybe the new policy is practicable. It might even\n> turn out to be worth the bother. Time will tell.\n\n(Two months pass.)\n\nThere were two independent fixup commits to address complaints from\nkoel just today (from two different committers). Plus there was a\nthird issue (involving a third committer) this past Wednesday.\n\nThis policy isn't working.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 15 Oct 2023 17:52:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> There were two independent fixup commits to address complaints from\n> koel just today (from two different committers). Plus there was a\n> third issue (involving a third committer) this past Wednesday.\n\n> This policy isn't working.\n\nTwo thoughts about that:\n\n1. We should not commit indent fixups on behalf of somebody else's\nmisindented commit. Peer pressure on committers who aren't being\ncareful about this is the only way to improve matters; so complain\nto the person at fault until they fix it.\n\n2. We could raise awareness of this issue by adding indent verification\nto CI testing. I hesitate to suggest that, though, for a couple of\nreasons:\n 2a. It seems fairly expensive, though I might be misjudging.\n 2b. It's often pretty handy to submit patches that aren't fully\n indent-clean; I have such a patch in flight right now at [1].\n\n2b could be ameliorated by making the indent check be a separate\ntest process that doesn't obscure the results of other testing.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2617358.1697501956%40sss.pgh.pa.us\n\n\n",
"msg_date": "Mon, 16 Oct 2023 20:45:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 5:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Two thoughts about that:\n>\n> 1. We should not commit indent fixups on behalf of somebody else's\n> misindented commit. Peer pressure on committers who aren't being\n> careful about this is the only way to improve matters; so complain\n> to the person at fault until they fix it.\n\nThomas Munro's recent commit 01529c704008 was added to\n.git-blame-ignore-revs by Michael Paquier, despite the fact that\nMunro's commit technically isn't just a pure indentation fix (it also\nfixed some typos). It's hard to judge Michael too harshly for this,\nsince in general it's harder to commit things when koel is already\ncomplaining about existing misindentation -- I get why he'd prefer to\ntake care of that first.\n\n> 2. We could raise awareness of this issue by adding indent verification\n> to CI testing. I hesitate to suggest that, though, for a couple of\n> reasons:\n> 2a. It seems fairly expensive, though I might be misjudging.\n> 2b. It's often pretty handy to submit patches that aren't fully\n> indent-clean; I have such a patch in flight right now at [1].\n\nIt's also often handy to make a minor change to a comment or something\nat the last minute, without necessarily having the comment indented\nperfectly.\n\n> 2b could be ameliorated by making the indent check be a separate\n> test process that doesn't obscure the results of other testing.\n\nI was hoping that \"go back to the old status quo\" would also appear as\nan option.\n\nMy main objection to the new policy is that it's not quite clear what\nprocess I should go through in order to be 100% confident that koel\nwon't start whining (short of waiting around for koel to whine). I\nknow how to run pgindent, of course, and haven't had any problems so\nfar...but it still seems quite haphazard. If we're going to make this\na hard rule, enforced on every commit, it should be dead easy to\ncomply with the rule.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 16 Oct 2023 18:22:51 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> My main objection to the new policy is that it's not quite clear what\n> process I should go through in order to be 100% confident that koel\n> won't start whining (short of waiting around for koel to whine). I\n> know how to run pgindent, of course, and haven't had any problems so\n> far...but it still seems quite haphazard. If we're going to make this\n> a hard rule, enforced on every commit, it should be dead easy to\n> comply with the rule.\n\nBut it's *not* a hard rule --- we explicitly rejected mechanisms\nthat would make it so (such as a precommit hook). I view \"koel\nis unhappy\" as something that you ought to clean up, but if you\ndon't get to it for a day or three there's not much harm done.\n\nIn theory koel might complain even if you'd locally gotten\nclean results from pgindent (as a consequence of skew in the\ntypedef lists being used, for example). We've not seen cases\nof that so far though. Right now I think we just need to raise\ncommitters' awareness of this enough that they routinely run\npgindent on the files they're touching. In the problem cases\nso far, they very clearly didn't. I don't see much point in\nworrying about second-order problems until that first-order\nproblem is tamped down.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Oct 2023 21:32:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 6:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> But it's *not* a hard rule --- we explicitly rejected mechanisms\n> that would make it so (such as a precommit hook). I view \"koel\n> is unhappy\" as something that you ought to clean up, but if you\n> don't get to it for a day or three there's not much harm done.\n\nIt's hard to square that with what you said about needing greater peer\npressure on committers.\n\n> Right now I think we just need to raise\n> committers' awareness of this enough that they routinely run\n> pgindent on the files they're touching. In the problem cases\n> so far, they very clearly didn't. I don't see much point in\n> worrying about second-order problems until that first-order\n> problem is tamped down.\n\nRealistically, if you're the committer that broke koel, you are at\nleast the subject of mild disapproval -- you have likely\ninconvenienced others. I always try to avoid that -- it pretty much\nrounds up to \"hard rule\" in my thinking. Babysitting koel really does\nseem like it could cut into my dinner plans or what have you.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 16 Oct 2023 18:57:46 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 08:45:00PM -0400, Tom Lane wrote:\n> 2. We could raise awareness of this issue by adding indent verification\n> to CI testing. I hesitate to suggest that, though, for a couple of\n> reasons:\n> 2a. It seems fairly expensive, though I might be misjudging.\n> 2b. It's often pretty handy to submit patches that aren't fully\n> indent-clean; I have such a patch in flight right now at [1].\n> \n> 2b could be ameliorated by making the indent check be a separate\n> test process that doesn't obscure the results of other testing.\n\nI see an extra reason with not doing that: this increases the\ndifficulty when it comes to send and maintain patches to the lists and \nnewcomers would need to learn more tooling. I don't think that we\nshould make that more complicated for code-formatting reasons.\n--\nMichael",
"msg_date": "Tue, 17 Oct 2023 11:57:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, 17 Oct 2023 at 03:23, Peter Geoghegan <pg@bowt.ie> wrote:\n> My main objection to the new policy is that it's not quite clear what\n> process I should go through in order to be 100% confident that koel\n> won't start whining (short of waiting around for koel to whine). I\n> know how to run pgindent, of course, and haven't had any problems so\n> far...but it still seems quite haphazard. If we're going to make this\n> a hard rule, enforced on every commit, it should be dead easy to\n> comply with the rule.\n\nI think *it is* dead easy to comply. If you run the following commands\nbefore committing/after rebasing, then koel should always be happy:\n\nsrc/tools/pgindent/pgindent src # works always but a bit slow\nsrc/tools/pgindent/pgindent $(git diff --name-only --diff-filter=ACMR)\n# much faster, but only works if you DID NOT change typedefs.list\n\nIf you have specific cases where it does not work. Then I think we\nshould talk about those/fix them. But looking at the last few commits\nin .git-blame-ignore-revs I only see examples of people simply not\nrunning pgindent before they commit.\n\nI guess it's easy to forget, but that's why the wiki contains a\npre-commit hook[1] that you can use to remind yourself/run pgindent\nautomatically. The only annoying thing is that it does not trigger\nwhen rebasing, but you can work around that by using rebase its -x\nflag[2].\n\n[1]: https://wiki.postgresql.org/wiki/Working_with_Git#Using_git_hooks\n[2]: https://adamj.eu/tech/2022/11/07/pre-commit-run-hooks-rebase/\n\n\n",
"msg_date": "Tue, 17 Oct 2023 12:34:35 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, 17 Oct 2023 at 04:57, Michael Paquier <michael@paquier.xyz> wrote:\n> I see an extra reason with not doing that: this increases the\n> difficulty when it comes to send and maintain patches to the lists and\n> newcomers would need to learn more tooling. I don't think that we\n> should make that more complicated for code-formatting reasons.\n\nHonestly, I don't think it's a huge hurdle for newcomers. Most open\nsource projects have a CI job that runs automatic code formatting, so\nit's a pretty common thing for contributors to deal with. And as long\nas we keep it a separate CI job from the normal tests, committers can\neven choose to commit the patch if the formatting job fails, after\nrunning pgindent themselves.\n\nAnd personally as a contributor it's a much nicer experience to see\nquickly in CI that I messed up the code style, then to hear it a\nweek/month later in an email when someone took the time to review and\nmentions the styling is way off all over the place.\n\n\n",
"msg_date": "Tue, 17 Oct 2023 12:49:13 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Sun, Oct 15, 2023 at 8:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sat, Aug 12, 2023 at 5:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Maybe I'm wrong -- maybe the new policy is practicable. It might even\n> > turn out to be worth the bother. Time will tell.\n>\n> (Two months pass.)\n>\n> There were two independent fixup commits to address complaints from\n> koel just today (from two different committers). Plus there was a\n> third issue (involving a third committer) this past Wednesday.\n>\n> This policy isn't working.\n\n+1. I think this is more annoying than the status quo ante.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 08:45:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 6:34 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n> I think *it is* dead easy to comply. If you run the following commands\n> before committing/after rebasing, then koel should always be happy:\n>\n> src/tools/pgindent/pgindent src # works always but a bit slow\n> src/tools/pgindent/pgindent $(git diff --name-only --diff-filter=ACMR)\n> # much faster, but only works if you DID NOT change typedefs.list\n\nIn isolation, that's true, but the list of mistakes that you can make\nwhile committing which will inconvenience everyone working on the\nproject is very long. Another one that comes up frequently is\nforgetting to bump CATALOG_VERSION_NO, but you also need a good commit\nmessage, and good comments, and a good Discussion link in the commit\nmessage, and the right list of authors and reviewers, and to update\nthe docs (with spaces, not tabs) and the Makefiles (with tabs, not\nspaces) and the meson stuff and, as if that weren't enough already,\nyou actually need the code to work! And that includes not only working\nregularly but also with CLOBBER_CACHE_ALWAYS and debug_parallel_query\nand so on. It's very easy to miss something somewhere. I put a LOT of\nwork into polishing my commits before I push them, and it's still not\nthat uncommon that I screw something up.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 09:52:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 8:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > This policy isn't working.\n>\n> +1. I think this is more annoying than the status quo ante.\n\nAlthough ... I do think it's spared me some rebasing pain, and that\ndoes have some real value. I wonder if we could think of other\nalternatives. For example, maybe we could have a bot. If you push a\ncommit that's not indented properly, the bot reindents the tree,\nupdates git-blame-ignore-revs, and sends you an email admonishing you\nfor your error. Or we could have a server-side hook that will refuse\nthe misindented commit, with some kind of override for emergency\nsituations. What I really dislike about the current situation is that\nit's doubling down on the idea that committers have to be perfect and\nget everything right every time. Turns out, that's hard to do. If not,\nwhy do people keep screwing things up? Somebody could theorize - and\nthis seems to be Tom and Jelte's theory, though perhaps I'm\nmisinterpreting their comments - that the people who have made\nmistakes here are just lazy, and what they need to do is up their\ngame.\n\nBut I don't buy that. First, I think that most of our committers are\npretty intelligent and hard-working people who are trying to do the\nright thing. We can't all be Tom Lane, no matter how hard we may try.\nSecond, even if it were true that the offending committers are \"just\nlazy,\" all of our committers and many senior non-committer\ncontributors are people who have put thousands, if not tens of\nthousands, of hours into the project. Making them feel bad serves us\npoorly. At the end of the day, it doesn't matter whether it's too much\nof a pain for the perfect committers we'd like to have. It matters\nwhether it's too much of a pain for the human committers that we do\nhave.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 10:03:54 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Oct 17, 2023 at 8:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> +1. I think this is more annoying than the status quo ante.\n\n> Although ... I do think it's spared me some rebasing pain, and that\n> does have some real value. I wonder if we could think of other\n> alternatives.\n\nAn alternative I was thinking about after reading your earlier email\nwas going back to the status quo ante, but doing the manual tree-wide\nreindents significantly more often than once a year. Adding one at\nthe conclusion of each commitfest would be a natural thing to do,\nfor instance. It's hard to say what frequency would lead to the\nleast rebasing pain, but we know once-a-year isn't ideal.\n\n> For example, maybe we could have a bot. If you push a\n> commit that's not indented properly, the bot reindents the tree,\n> updates git-blame-ignore-revs, and sends you an email admonishing you\n> for your error.\n\nI'm absolutely not in favor of completely-automated reindents.\npgindent is a pretty stupid tool and it will sometimes do stupid\nthings, which you have to correct for by tweaking the input\nformatting. The combination of the tool and human supervision\ngenerally produces pretty good results, but the tool alone\nnot so much.\n\n> Or we could have a server-side hook that will refuse\n> the misindented commit, with some kind of override for emergency\n> situations.\n\nEven though I'm in the camp that would like the tree correctly\nindented at all times, I remain very much against a commit hook.\nI think that'd be significantly more annoying than the current\nsituation, which you're already unhappy about the annoying-ness of.\n\nThe bottom line here, I think, is that there's a subset of committers\nthat would like perfectly mechanically-indented code at all times,\nand there's another subset that just doesn't care that much.\nWe don't (and shouldn't IMO) have a mechanism to let one set force\ntheir views on the other set. The current approach is clearly\ninsufficient for that, and I don't think trying to institute stronger\nenforcement is going to make anybody happy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Oct 2023 10:23:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 7:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Oct 17, 2023 at 8:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >> +1. I think this is more annoying than the status quo ante.\n>\n> > Although ... I do think it's spared me some rebasing pain, and that\n> > does have some real value. I wonder if we could think of other\n> > alternatives.\n>\n> An alternative I was thinking about after reading your earlier email\n> was going back to the status quo ante, but doing the manual tree-wide\n> reindents significantly more often than once a year. Adding one at\n> the conclusion of each commitfest would be a natural thing to do,\n> for instance. It's hard to say what frequency would lead to the\n> least rebasing pain, but we know once-a-year isn't ideal.\n\nThat seems like the best alternative we have. The old status quo did\noccasionally allow code with indentation that *clearly* wasn't up to\nproject standards to slip in. It could stay that way for quite a few\nmonths at a time. That wasn't great either.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 17 Oct 2023 07:55:36 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, 17 Oct 2023 at 16:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Or we could have a server-side hook that will refuse\n> > the misindented commit, with some kind of override for emergency\n> > situations.\n>\n> Even though I'm in the camp that would like the tree correctly\n> indented at all times, I remain very much against a commit hook.\n> I think that'd be significantly more annoying than the current\n> situation, which you're already unhappy about the annoying-ness of.\n\nWhy do you think that would be significantly more annoying than the\ncurrent situation? Instead of getting delayed feedback you get instant\nfeedback when you push.\n\n\n",
"msg_date": "Tue, 17 Oct 2023 17:00:35 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 10:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> An alternative I was thinking about after reading your earlier email\n> was going back to the status quo ante, but doing the manual tree-wide\n> reindents significantly more often than once a year. Adding one at\n> the conclusion of each commitfest would be a natural thing to do,\n> for instance. It's hard to say what frequency would lead to the\n> least rebasing pain, but we know once-a-year isn't ideal.\n\nYes. I suspect once a commitfest still wouldn't be often enough. Maybe\nonce a month or something would be. But I'm not sure. You might rebase\nonce over the misindented commit and then have to rebase again over\nthe indent that fixed it. There's not really anything that quite\nsubstitutes for doing it right on every commit.\n\n> The bottom line here, I think, is that there's a subset of committers\n> that would like perfectly mechanically-indented code at all times,\n> and there's another subset that just doesn't care that much.\n> We don't (and shouldn't IMO) have a mechanism to let one set force\n> their views on the other set. The current approach is clearly\n> insufficient for that, and I don't think trying to institute stronger\n> enforcement is going to make anybody happy.\n\nI mean, I think we DO have such a mechanism. Everyone agrees that the\nbuildfarm has to stay green, and we have a buildfarm member that\nchecks pgindent, so that means everyone has to pgindent. We could\ndecide to kill that buildfarm member, in which case we go back to\npeople not having to pgindent, but right now they do.\n\nAnd if it's going to remain the policy, it's better to enforce that\npolicy earlier rather than later. I mean, what is the point of having\na system where we let people do the wrong thing and then publicly\nembarrass them afterwards? How is that better than preventing them\nfrom doing the wrong thing in the first place? Even if they don't\nsubjectively feel embarrassed, nobody likes having to log back on in\nthe evening or the weekend and clean up after something they thought\nthey were done with.\n\nIn fact, that particular experience is one of the worst things about\nbeing a committer. It actively discourages me, at least, from trying\nto get other people's patches committed. This particular problem is\nminor, but the overall experience of trying to get things committed is\nthat you have to check 300 things for every patch and if you get every\none of them right then nothing happens and if you get one of them\nwrong then you get a bunch of irritated emails criticizing your\nlaziness, sloppiness, or whatever, and you have to drop everything to\ngo fix it immediately. What a deal! I'm sure this isn't the only\nreason why we have such a huge backlog of patches needing committer\nattention, but it sure doesn't help. And there is absolutely zero need\nfor this to be yet another thing that you can find out you did wrong\nin the 1-24 hour period AFTER you type 'git push'.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 11:01:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 8:01 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> In fact, that particular experience is one of the worst things about\n> being a committer. It actively discourages me, at least, from trying\n> to get other people's patches committed. This particular problem is\n> minor, but the overall experience of trying to get things committed is\n> that you have to check 300 things for every patch and if you get every\n> one of them right then nothing happens and if you get one of them\n> wrong then you get a bunch of irritated emails criticizing your\n> laziness, sloppiness, or whatever, and you have to drop everything to\n> go fix it immediately. What a deal!\n\nYep. Enforcing perfect indentation on koel necessitates rechecking\nindentation after each and every last-minute fixup affecting C code --\nthe interactions makes it quite a bit harder to get everything right\non the first push. For example, if I spot an issue with a comment\nduring final pre-commit review, and fixup that commit, I have to run\npgindent again. On a long enough timeline, I'm going to forget to do\nthat.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 17 Oct 2023 08:15:51 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, 17 Oct 2023 at 16:04, Robert Haas <robertmhaas@gmail.com> wrote:\n> What I really dislike about the current situation is that\n> it's doubling down on the idea that committers have to be perfect and\n> get everything right every time. Turns out, that's hard to do. If not,\n> why do people keep screwing things up? Somebody could theorize - and\n> this seems to be Tom and Jelte's theory, though perhaps I'm\n> misinterpreting their comments - that the people who have made\n> mistakes here are just lazy, and what they need to do is up their\n> game.\n\nTo clarify, I did not intend to imply people that commit unindented\ncode are lazy. It's expected that humans forget to run pgindent before\ncommitting from time to time (I do too). That's why I proposed a\nserver side git hook to reject badly indented commits very early in\nthis thread. But some others said that buildfarm animals were the way\nto go for Postgres development flow. And since I'm not a committer I\nleft it at that. I was already happy enough that there was consensus\non indenting continuously, so that the semi-regular rebases for the\nfew open CF entries that I have are a lot less annoying.\n\nBut based on the current feedback I think we should seriously consider\na server-side \"update\" git hook again. People are obviously not\nperfect machines. And for whatever reason not everyone installs the\npre-commit hook from the wiki. So the koel keeps complaining. A\nserver-side hook would solve all of this IMHO.\n\n\n",
"msg_date": "Tue, 17 Oct 2023 17:23:10 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 11:16 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Yep. Enforcing perfect indentation on koel necessitates rechecking\n> indentation after each and every last-minute fixup affecting C code --\n> the interactions makes it quite a bit harder to get everything right\n> on the first push. For example, if I spot an issue with a comment\n> during final pre-commit review, and fixup that commit, I have to run\n> pgindent again. On a long enough timeline, I'm going to forget to do\n> that.\n\nI also just discovered that my pre-commit hook doesn't work if I pull\ncommits into master by cherry-picking. I had thought that I could have\nmy hook just check my commits to master and not all of my local dev\nbranches where I really don't want to mess with this when I'm just\nbanging out a rough draft of something. But now I see that I'm going\nto need to work harder on this if I actually want it to catch all the\nways I might screw this up.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 11:23:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 11:23 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n> To clarify, I did not intend to imply people that commit unindented\n> code are lazy. It's expected that humans forget to run pgindent before\n> committing from time to time (I do too). That's why I proposed a\n> server side git hook to reject badly indented commits very early in\n> this thread. But some others said that buildfarm animals were the way\n> to go for Postgres development flow. And since I'm not a committer I\n> left it at that. I was already happy enough that there was consensus\n> on indenting continuously, so that the semi-regular rebases for the\n> few open CF entries that I have are a lot less annoying.\n\nThanks for clarifying. I didn't really think you were trying to be\naccusatory, but I didn't really understand what else to think either,\nso this is helpful context.\n\n> But based on the current feedback I think we should seriously consider\n> a server-side \"update\" git hook again. People are obviously not\n> perfect machines. And for whatever reason not everyone installs the\n> pre-commit hook from the wiki. So the koel keeps complaining. A\n> server-side hook would solve all of this IMHO.\n\nOne potential problem with a server-side hook is that if you back-port\na commit to older branches and then push the commits all together\n(which is my workflow) then you might get failure to push on some\nbranches but not others. I don't know if there's any way to avoid\nthat, but it seems not great. You could think of enforcing the policy\nonly on master to try to avoid this, but that still leaves a risk that\nyou manage to push to all the back-branches and not to master.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 11:42:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 8:24 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I also just discovered that my pre-commit hook doesn't work if I pull\n> commits into master by cherry-picking. I had thought that I could have\n> my hook just check my commits to master and not all of my local dev\n> branches where I really don't want to mess with this when I'm just\n> banging out a rough draft of something. But now I see that I'm going\n> to need to work harder on this if I actually want it to catch all the\n> ways I might screw this up.\n\nOnce you figure all that out, you're still obligated to hand-polish\ntypedefs.list to be consistent with whatever Bruce's machine's copy of\nobjdump does (or is it Tom's?). You need to sort the entries so they\nkinda look like they originated from the same source as existing\nentries, since my Debian machine seems to produce somewhat different\nresults to RHEL, for whatever reason. It's hard to imagine a worse use\nof committer time.\n\nI think that something like this new policy could work if the\nunderlying tooling was very easy to use and gave perfectly consistent\nresults on everybody's development machine. Obviously, that just isn't\nthe case.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 17 Oct 2023 08:47:01 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, 17 Oct 2023 at 17:47, Peter Geoghegan <pg@bowt.ie> wrote:\n> Once you figure all that out, you're still obligated to hand-polish\n> typedefs.list to be consistent with whatever Bruce's machine's copy of\n> objdump does (or is it Tom's?). You need to sort the entries so they\n> kinda look like they originated from the same source as existing\n> entries, since my Debian machine seems to produce somewhat different\n> results to RHEL, for whatever reason. It's hard to imagine a worse use\n> of committer time.\n>\n> I think that something like this new policy could work if the\n> underlying tooling was very easy to use and gave perfectly consistent\n> results on everybody's development machine. Obviously, that just isn't\n> the case.\n\nTo make koel pass you don't need to worry about hand-polishing\ntypedefs.list. koel uses the typedefs.list that's committed into the\nrepo, just like when you run pgindent yourself. If you forget to\nupdate the typedefs.list with new types, then worst case the pgindent\noutput will look weird. But it will look weird both on your own\nmachine and on koel. So afaik the current tooling should give\nperfectly consistent results on everybody's development machine. If\nyou have an example of where it doesn't then we should fix that\nproblem.\n\n\n",
"msg_date": "Tue, 17 Oct 2023 18:03:44 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 9:03 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n> To make koel pass you don't need to worry about hand-polishing\n> typedefs.list. koel uses the typedefs.list that's committed into the\n> repo, just like when you run pgindent yourself. If you forget to\n> update the typedefs.list with new types, then worst case the pgindent\n> output will look weird. But it will look weird both on your own\n> machine and on koel.\n\nThat's beside the point. The point is that I'm obligated to keep\ntypedefs.list up to date in general, a task that is made significantly\nharder by random objdump implementation details. And I probably need\nto do this not just once per commit, but several times, since in\npractice I need to defensively run and rerun pgindent as the patch is\ntweaked.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 17 Oct 2023 09:07:05 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> That's beside the point. The point is that I'm obligated to keep\n> typedefs.list up to date in general, a task that is made significantly\n> harder by random objdump implementation details. And I probably need\n> to do this not just once per commit, but several times, since in\n> practice I need to defensively run and rerun pgindent as the patch is\n> tweaked.\n\nHmm, I've not found it that hard to manage the typedefs list.\nIf I run pgindent and it adds weird spacing around uses of a new\ntypedef name, I go \"oh, I better add that to the list\" and do so.\nEnd of problem. There's not a requirement that you remove disused\ntypedef names, nor that you alphabetize perfectly. I'm content\nto update those sorts of details from the buildfarm's list once a\nyear or so.\n\nThis does assume that you inspect pgindent's changes rather than\njust accepting them blindly --- but as I commented upthread, the\ntool really requires that anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Oct 2023 12:16:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
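Tom's routine of spotting weird spacing and then adding the name can be partly mechanized. A heuristic sketch (the helper is invented; it only recognizes the common `} FooBar;` typedef tail in a diff, so it is an aid rather than a guarantee):

```shell
# List candidate typedef names added since the given commit that are not
# yet in typedefs.list. Heuristic: only catches "} FooBar;" style
# typedef tails among the diff's added lines.
new_typedefs() {
    git diff "$1" -- '*.c' '*.h' \
      | grep '^+} [A-Za-z_][A-Za-z0-9_]*;' \
      | sed 's/^+} \([A-Za-z0-9_]*\);.*/\1/' \
      | sort -u \
      | while read -r t; do
            grep -qx "$t" src/tools/pgindent/typedefs.list || echo "$t"
        done
}
```

Running `new_typedefs HEAD` before committing would at least flag the names that pgindent is about to mangle.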
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> One potential problem with a server-side hook is that if you back-port\n> a commit to older branches and then push the commits all together\n> (which is my workflow) then you might get failure to push on some\n> branches but not others. I don't know if there's any way to avoid\n> that, but it seems not great. You could think of enforcing the policy\n> only on master to try to avoid this, but that still leaves a risk that\n> you manage to push to all the back-branches and not to master.\n\nIs that actually possible? I had the idea that \"git push\" is an\natomic operation, ie 100% or nothing. Is it only atomic per-branch?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Oct 2023 12:18:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 12:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm, I've not found it that hard to manage the typedefs list.\n> If I run pgindent and it adds weird spacing around uses of a new\n> typedef name, I go \"oh, I better add that to the list\" and do so.\n> End of problem. There's not a requirement that you remove disused\n> typedef names, nor that you alphabetize perfectly. I'm content\n> to update those sorts of details from the buildfarm's list once a\n> year or so.\n\n+1 to all of that. At least for me, managing typedefs.list isn't the\nproblem. The problem is remembering to actually do it, and keep it\nupdated as the patch set is adjusted and rebased.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 12:19:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 12:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > One potential problem with a server-side hook is that if you back-port\n> > a commit to older branches and then push the commits all together\n> > (which is my workflow) then you might get failure to push on some\n> > branches but not others. I don't know if there's any way to avoid\n> > that, but it seems not great. You could think of enforcing the policy\n> > only on master to try to avoid this, but that still leaves a risk that\n> > you manage to push to all the back-branches and not to master.\n>\n> Is that actually possible? I had the idea that \"git push\" is an\n> atomic operation, ie 100% or nothing. Is it only atomic per-branch?\n\nI believe so. For instance:\n\n[rhaas pgsql]$ git push rhaas\nEnumerating objects: 2980, done.\nCounting objects: 100% (2980/2980), done.\nDelta compression using up to 16 threads\nCompressing objects: 100% (940/940), done.\nWriting objects: 100% (2382/2382), 454.52 KiB | 7.70 MiB/s, done.\nTotal 2382 (delta 2024), reused 1652 (delta 1429), pack-reused 0\nremote: Resolving deltas: 100% (2024/2024), completed with 579 local objects.\nTo ssh://git.postgresql.org/users/rhaas/postgres.git\n e434e21e11..2406c4e34c master -> master\n ! [rejected] walsummarizer2 -> walsummarizer2 (non-fast-forward)\nerror: failed to push some refs to\n'ssh://git.postgresql.org/users/rhaas/postgres.git'\nhint: Updates were rejected because a pushed branch tip is behind its remote\nhint: counterpart. Check out this branch and integrate the remote changes\nhint: (e.g. 'git pull ...') before pushing again.\nhint: See the 'Note about fast-forwards' in 'git push --help' for details.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 12:21:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 17, 2023, 09:22 Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Oct 17, 2023 at 12:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > Is that actually possible? I had the idea that \"git push\" is an\n> > atomic operation, ie 100% or nothing. Is it only atomic per-branch?\n>\n> I believe so.\n\n\nGit push does have an --atomic flag to treat the entire push as a single\noperation.\n",
"msg_date": "Tue, 17 Oct 2023 09:53:03 -0700",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
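The `--atomic` behavior is easy to check with two throwaway repositories: when one branch of a multi-branch push is rejected, nothing moves, which is exactly the failure mode Robert described. A self-contained sketch (branch names invented):

```shell
# Two throwaway repos: a bare "origin" and a clone. We arrange for the
# rel_16 push to be a non-fast-forward, then push master and rel_16
# together with --atomic: the rejection of rel_16 must also block master.
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin.git"
git -C "$work/origin.git" symbolic-ref HEAD refs/heads/master
git init -q "$work/clone"
cd "$work/clone"
git config user.email you@example.com
git config user.name you
git checkout -q -b master
git commit -q --allow-empty -m base
git branch rel_16
git remote add origin "$work/origin.git"
git push -q origin master rel_16
# someone else moves origin's rel_16 ahead of ours
git clone -q "$work/origin.git" "$work/other"
git -C "$work/other" config user.email other@example.com
git -C "$work/other" config user.name other
git -C "$work/other" checkout -q rel_16
git -C "$work/other" commit -q --allow-empty -m ahead
git -C "$work/other" push -q origin rel_16
# our master advances; our rel_16 is now stale
git commit -q --allow-empty -m more
if git push --atomic -q origin master rel_16 2>/dev/null; then
    echo "unexpected: atomic push succeeded"
fi
git fetch -q origin
```

Without `--atomic`, master would have been updated while rel_16 was rejected, leaving the branches pushed inconsistently.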
{
"msg_contents": "On Tue, 17 Oct 2023 at 18:53, Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n> Git push does have an --atomic flag to treat the entire push as a single operation.\n\nI decided to play around a bit with server hooks. Attached is a git\n\"update\" hook that rejects pushes to the master branch when the new\nHEAD of master does not pass pgindent. It tries to do the minimal\namount of work necessary. Together with the --atomic flag of git push\nI think this would work quite well.\n\nNote: It does require that pg_bsd_indent is in PATH. While not perfect, it\nseems like it would be acceptable in practice to me. Its version is\nnot updated very frequently. So manually updating it on the git server\nwhen we do does not seem like a huge issue to me.\n\nThe easiest way to try it out is by cloning the postgres repo in two\ndifferent local directories, let's call them A and B. And then\nconfigure directory B to be the origin remote of A. By placing the\nupdate script in B/.git/hooks/ it will execute whenever you push\nmaster from A to B.",
"msg_date": "Tue, 17 Oct 2023 22:43:04 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
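The attached hook itself is not reproduced in the archive, but the core of such an "update" hook could look roughly like the sketch below (details are guesses in the same spirit, not Jelte's script). It polices only refs/heads/master and materializes the pushed tip with `git archive`, so no refs need to be created on the server. It assumes pgindent and pg_bsd_indent are on PATH there:

```shell
# Sketch of the core of a server-side "update" hook. git invokes the
# hook as:  update <refname> <oldrev> <newrev>
check_update() {
    refname=$1 oldrev=$2 newrev=$3
    [ "$refname" = refs/heads/master ] || return 0    # only police master
    files=$(git diff --name-only --diff-filter=d "$oldrev" "$newrev" \
                -- '*.c' '*.h')
    [ -n "$files" ] || return 0                       # no C changes
    tmp=$(mktemp -d)
    # materialize the pushed tip without touching any refs
    git archive "$newrev" | (cd "$tmp" && tar -x)
    (cd "$tmp" && pgindent --silent-diff $files)
    status=$?
    rm -rf "$tmp"
    if [ "$status" -ne 0 ]; then
        echo "rejected: $newrev on master is not pgindent-clean" >&2
        return 1
    fi
}
# hooks/update itself would be:  check_update "$@"
```

A nonzero exit from an update hook makes the server refuse that one ref, which combined with `git push --atomic` refuses the whole push.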
{
"msg_contents": "On Tue, Oct 17, 2023 at 10:43 PM Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n> On Tue, 17 Oct 2023 at 18:53, Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n> > Git push does have an --atomic flag to treat the entire push as a single operation.\n>\n> I decided to play around a bit with server hooks. Attached is a git\n> \"update\" hook that rejects pushes to the master branch when the new\n> HEAD of master does not pass pgindent. It tries to do the minimal\n> amount of work necessary. Together with the --atomic flag of git push\n> I think this would work quite well.\n>\n> Note: It does require that pg_bsd_indent is in PATH. While not perfect\n> seems like it would be acceptable in practice to me. Its version is\n> not updated very frequently. So manually updating it on the git server\n> when we do does not seem like a huge issue to me.\n\nIf it doesn't know how to rebuild it, aren't we going to be stuck in a\ncatch-22 if we need to change it in certain ways? Since an old version\nof pg_bsd_indent would reject the patch that might include updating\nit. (And when it does, one should expect the push to take quite a long\ntime, but given the infrequency I agree that part is probably not an\nissue)\n\nAnd unless we're only enforcing it on master, we'd also need to make\nprovisions for different versions of it on different branches, I\nthink?\n\nOther than that, I agree it's fairly simple. It does need a lot more\nsandboxing than what's in there now, but that's not too hard of a\nproblem to solve, *if* this is what we want.\n\n(And of course needs to be integrated with the existing script since\nAFAIK you can't chain git hooks unless you do it manually - but that's\nmostly mechanical)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 17 Oct 2023 23:01:38 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> If it doesn't know how to rebuild it, aren't we going to be stuck in a\n> catch-22 if we need to change it in certain ways? Since an old version\n> of pg_bsd_indent would reject the patch that might include updating\n> it. (And when it does, one should expect the push to take quite a long\n> time, but given the infrequency I agree that part is probably not an\n> issue)\n\nEveryone who has proposed this has included a caveat that there must\nbe a way to override the check. Given that minimal expectation, it\nshouldn't be too hard to deal with pg_bsd_indent-updating commits.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Oct 2023 17:07:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Wed, 18 Oct 2023 at 01:47, Robert Haas <robertmhaas@gmail.com> wrote:\n> On Sat, Aug 12, 2023 at 5:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > This policy isn't working.\n>\n> +1. I think this is more annoying than the status quo ante.\n\nMaybe there are two camps of committers here; ones who care about\ncommitting correctly indented code and ones who do not.\n\nI don't mean that in a bad way, but if a committer just does not care\nabout correctly pgindented code then he/she likely didn't suffer\nenough pain from how things used to be... having to unindent all the\nunrelated indent fixes that were committed since the last pgindent run\nI personally found slow/annoying/error-prone.\n\nWhat I do now seems significantly easier. Assuming it's just 1 commit, just:\n\nperl src/tools/pgindent/pgindent --commit HEAD\ngit diff # manual check to see if everything looks sane.\ngit commit -a --fixup HEAD\ngit rebase -i HEAD~2\n\nIf we were to go back to how it was before, then why should I go to\nthe trouble of unindenting all the unrelated indents from code changed\nby other committers since the last pgindent run when those committers\nare not bothering to and making my job harder each time they commit\nincorrectly indented code.\n\nHow many of the committers who have broken koel are repeat offenders?\nWhat is their opinion on this?\nDid they just forget once or do they hate the process and want to go back?\n\nI'm personally very grateful for all the work that was done to improve\npgindent and set out the new process. I'd really rather not go back to\nhow things were.\n\nI agree that it's not nice to add yet another way of breaking the\nbuildfarm and even more so when the committer did make check-world\nbefore committing. We have --enable-tap-tests, we could have\n--enable-indent-checks and have pgindent check the code is correctly\nindented during make check-world. 
Then just not have\n--enable-indent-checks in CI.\n\nI think many of us have scripts we use instead of typing out all the\nconfigure options we want. It's likely pretty easy to add\n--enable-indent-checks to those.\n\nDavid\n\n\n",
"msg_date": "Wed, 18 Oct 2023 17:40:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 18.10.23 06:40, David Rowley wrote:\n> I agree that it's not nice to add yet another way of breaking the \n> buildfarm and even more so when the committer did make check-world \n> before committing. We have --enable-tap-tests, we could have \n> --enable-indent-checks and have pgindent check the code is correctly \n> indented during make check-world. Then just not have \n> --enable-indent-checks in CI.\n\nThis approach seems like a good improvement, even independent of \neverything else we might do about this. Making it easier to use and \nless likely to be forgotten. Also, this way, non-committer contributors \ncan opt-in, if they want to earn bonus points.\n\n\n\n",
"msg_date": "Wed, 18 Oct 2023 09:20:55 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 11:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > If it doesn't know how to rebuild it, aren't we going to be stuck in a\n> > catch-22 if we need to change it in certain ways? Since an old version\n> > of pg_bsd_indent would reject the patch that might include updating\n> > it. (And when it does, one should expect the push to take quite a long\n> > time, but given the infrequency I agree that part is probably not an\n> > issue)\n>\n> Everyone who has proposed this has included a caveat that there must\n> be a way to override the check. Given that minimal expectation, it\n> shouldn't be too hard to deal with pg_bsd_indent-updating commits.\n\nI haven't managed to fully keep up with the thread, so I missed that.\nAnd I can't directly find it looking back either - but as long as\nthere's an actual idea for how to do that, the problem goes away :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 18 Oct 2023 09:56:38 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, 17 Oct 2023 at 23:01, Magnus Hagander <magnus@hagander.net> wrote:\n> And unless we're only enforcing it on master, we'd also need to make\n> provisions for different versions of it on different branches, I\n> think?\n\nOnly enforcing on master sounds fine to me, that's what koel is doing\ntoo afaik. In practice this seems to be enough to solve my main issue\nof having to manually remove unrelated indents when rebasing my\npatches. Enforcing on different branches seems like it would add a lot\nof complexity. So I'm not sure that's worth doing at this point, since\ncurrently some committers are proposing to stop enforcing continuous\nindentation because of problems with the current flow. I think we\nshould only enforce it on more branches once we have the flow\nmastered.\n\n> It does need a lot more\n> sandboxing than what's in there now, but that's not too hard of a\n> problem to solve, *if* this is what we want.\n\nYeah, I didn't bother with that. Since that seems very tightly coupled\nwith the environment that the git server is running, and I have no\nclue what that environment is.\n\n\n",
"msg_date": "Wed, 18 Oct 2023 10:21:44 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Wed, 18 Oct 2023 at 06:40, David Rowley <dgrowleyml@gmail.com> wrote:\n> How many of the committers who have broken koel are repeat offenders?\n\nI just checked the commits and there don't seem to be real repeat\noffenders. The maximum number of times someone broke koel since its\ninception is two. That was the case for only two people. The other 8\npeople only caused one breakage.\n\n> What is their opinion on this?\n> Did they just forget once or do they hate the process and want to go back?\n\nThe committers that broke koel since its inception are:\n- Alexander Korotkov\n- Amit Kapila\n- Amit Langote\n- Andres Freund\n- Etsuro Fujita\n- Jeff Davis\n- Michael Paquier\n- Peter Eisentraut\n- Tatsuo Ishii\n- Tomas Vondra\n\nI included all of them in the To field of this message, in the hope\nthat they share their viewpoint. Because otherwise we are just guessing\nwhat they think.\n\nBut based on the contents of the fixup commits a commonality seems to\nbe that the fixup only fixes a few lines, quite often touching only\ncomments. So it seems like the main reason for breaking koel is\nforgetting to re-run pgindent after some final cleanup/wording\nchanges/typo fixes. And that seems like an expected flaw of being\nhuman instead of a robot, which can only be worked around with better\nautomation.\n\n> I agree that it's not nice to add yet another way of breaking the\n> buildfarm and even more so when the committer did make check-world\n> before committing. We have --enable-tap-tests, we could have\n> --enable-indent-checks and have pgindent check the code is correctly\n> indented during make check-world. Then just not have\n> --enable-indent-checks in CI.\n\nI think --enable-indent-checks sounds like a good improvement to the\nstatus quo. But I'm not confident that it will help remove the cases\nwhere only a comment needs to be re-indented. Do committers really\nalways run check-world again when only changing a typo in a comment? 
I\nknow I probably wouldn't (or at least not always).\n\n\n",
"msg_date": "Wed, 18 Oct 2023 11:07:00 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Wed, 18 Oct 2023 at 22:07, Jelte Fennema <postgres@jeltef.nl> wrote:\n> But based on the contents of the fixup commits a commonality seems to\n> be that the fixup only fixes a few lines, quite often touching only\n> comments. So it seems like the main reason for breaking koel is\n> forgetting to re-run pgindent after some final cleanup/wording\n> changes/typo fixes. And that seems like an expected flaw of being\n> human instead of a robot, which can only be worked around with better\n> automation.\n\nI wonder if you might just be assuming these were caused by\nlast-minute comment adjustments. I may have missed something on the\nthread, but it could be that, or it could be due to the fact that\npgindent just simply does more adjustments to comments than it does\nwith code lines.\n\nOn Wed, 18 Oct 2023 at 06:40, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I agree that it's not nice to add yet another way of breaking the\n> > buildfarm and even more so when the committer did make check-world\n> > before committing. We have --enable-tap-tests, we could have\n> > --enable-indent-checks and have pgindent check the code is correctly\n> > indented during make check-world. Then just not have\n> > --enable-indent-checks in CI.\n>\n> I think --enable-indent-checks sounds like a good improvement to the\n> status quo. But I'm not confident that it will help remove the cases\n> where only a comment needs to be re-indented. Do commiters really\n> always run check-world again when only changing a typo in a comment? I\n> know I probably wouldn't (or at least not always).\n\nI can't speak for others, but I always make edits in a dev branch and\ndo \"make check-world\" before doing \"git format-patch\" before I \"git\nam\" that patch into a clean repo. 
Before I push, I'll always run\n\"make check-world\" again as sometimes master might have moved on a few\ncommits from where the dev branch was taken (perhaps I need to update\nthe expected output of some newly added EXPLAIN tests if say doing a\nplanner adjustment). I personally never adjust any code or comments\nafter the git am. I only sometimes adjust the commit message.\n\nSo, in theory at least, if --enable-indent-checks existed and I used\nit, I shouldn't break koel... let's see if I just jinxed myself.\n\nIt would be good to learn how many of the committers out of the ones\nyou listed that --enable-indent-checks would have saved from breaking\nkoel.\n\nDavid\n\n\n",
"msg_date": "Wed, 18 Oct 2023 22:34:45 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 10/17/23 16:23, Tom Lane wrote:\n> An alternative I was thinking about after reading your earlier email was \n> going back to the status quo ante, but doing the manual tree-wide \n> reindents significantly more often than once a year. Adding one at the \n> conclusion of each commitfest would be a natural thing to do, for \n> instance. It's hard to say what frequency would lead to the least \n> rebasing pain, but we know once-a-year isn't ideal.\n\n\nThis is basically how the SQL Committee functions. The Change Proposals \n(patches) submitted every meeting (commitfest) are always against the \ndocuments as they exist after the application of papers (commits) from \nthe previous meeting.\n\nOne major difference is that Change Proposals are against the text, and \npatches are against the code. It is not dissimilar to people saying \nwhat our documentation should say, and then someone implementing that \nchange.\n\nSo I am in favor of a pgindent run *at least* at the end of each \ncommitfest, giving a full month for patch authors to rebase before the \nnext fest.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Wed, 18 Oct 2023 15:04:13 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Wed, Oct 18, 2023 at 3:21 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> On 18.10.23 06:40, David Rowley wrote:\n> > I agree that it's not nice to add yet another way of breaking the\n> > buildfarm and even more so when the committer did make check-world\n> > before committing. We have --enable-tap-tests, we could have\n> > --enable-indent-checks and have pgindent check the code is correctly\n> > indented during make check-world. Then just not have\n> > --enable-indent-checks in CI.\n>\n> This approach seems like a good improvement, even independent of\n> everything else we might do about this. Making it easier to use and\n> less likely to be forgotten. Also, this way, non-committer contributors\n> can opt-in, if they want to earn bonus points.\n\nYeah. I'm not going to push anything that doesn't pass make\ncheck-world, so this is appealing in that something that I'm already\ndoing would (or could be configured to) catch this problem also.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Oct 2023 10:07:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 11:01:44AM -0400, Robert Haas wrote:\n> In fact, that particular experience is one of the worst things about\n> being a committer. It actively discourages me, at least, from trying\n> to get other people's patches committed. This particular problem is\n> minor, but the overall experience of trying to get things committed is\n> that you have to check 300 things for every patch and if you get every\n> one of them right then nothing happens and if you get one of them\n> wrong then you get a bunch of irritated emails criticizing your\n> laziness, sloppiness, or whatever, and you have to drop everything to\n> go fix it immediately. What a deal! I'm sure this isn't the only\n> reason why we have such a huge backlog of patches needing committer\n> attention, but it sure doesn't help. And there is absolutely zero need\n> for this to be yet another thing that you can find out you did wrong\n> in the 1-24 hour period AFTER you type 'git push'.\n\nThis comment resonated with me. I do all my git operations with shell\nscripts so I can check for all the mistakes I have made in the past and\ngenerate errors. Even with all of that, committing is an\nanxiety-producing activity because any small mistake is quickly revealed\nto the world. There aren't many things I do in a day where mistakes are\nso impactful.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 18 Oct 2023 12:45:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-16 20:45:00 -0400, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> 2. We could raise awareness of this issue by adding indent verification\n> to CI testing. I hesitate to suggest that, though, for a couple of\n> reasons:\n> 2a. It seems fairly expensive, though I might be misjudging.\n\nCompared to other things it's not that expensive. On my workstation, which is\nslower on a per-core basis than CI, a whole tree pgindent --silent-diff takes\n6.8s. That's doing things serially; it shouldn't be that hard to parallelize\nthe per-file processing.\n\nFor comparison, the current compiler warnings task takes 6-15min, depending on\nthe state of the ccache \"database\". Even when ccache is primed, running\ncpluspluscheck or headerscheck is ~30s each. Adding a few more seconds for an\nindentation check wouldn't be a problem.\n\n\n> 2b. It's often pretty handy to submit patches that aren't fully\n> indent-clean; I have such a patch in flight right now at [1].\n>\n> 2b could be ameliorated by making the indent check be a separate\n> test process that doesn't obscure the results of other testing.\n\nThe compiler warnings task already executes a number of tests even if prior\ntests have failed (to be able to find compiler warnings in different compilers\nat once). Adding pgindent cleanliness to that would be fairly simple.\n\n\nI still think that one of the more important things we ought to do is to make\nit trivial to check if code is correctly indented and reindent it for the\nuser. 
I've posted a preliminary patch to add an 'indent-tree' target a few\nmonths back, at\nhttps://postgr.es/m/20230527184201.2zdorrijg2inqt6v%40alap3.anarazel.de\n\nI've updated that patch, now it has\n- indent-tree, reindents the entire tree\n- indent-head, which runs pgindent --commit=HEAD\n- indent-check, fails if the tree isn't correctly indented\n- indent-diff, like indent-check, but also shows the diff\n\nIf we taught pgindent to emit the list of files it processes to a dependency\nfile, we could make it cheap to call indent-check repeatedly, by teaching\nmeson/ninja to not reinvoke it if the input files haven't changed. Personally\nthat'd make it more bearable to script indentation checks to happen\nfrequently.\n\n\nI'll look into writing a command to update typedefs.list with all the local\nchanges.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 18 Oct 2023 12:15:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-18 12:15:51 -0700, Andres Freund wrote:\n> I still think that one of the more important things we ought to do is to make\n> it trivial to check if code is correctly indented and reindent it for the\n> user. I've posted a preliminary patch to add a 'indent-tree' target a few\n> months back, at\n> https://postgr.es/m/20230527184201.2zdorrijg2inqt6v%40alap3.anarazel.de\n> \n> I've updated that patch, now it has\n> - indent-tree, reindents the entire tree\n> - indent-head, which pgindent --commit=HEAD\n> - indent-check, fails if the tree isn't correctly indented\n> - indent-diff, like indent-check, but also shows the diff\n> \n> If we tought pgindent to emit the list of files it processes to a dependency\n> file, we could make it cheap to call indent-check repeatedly, by teaching\n> meson/ninja to not reinvoke it if the input files haven't changed. Personally\n> that'd make it more bearable to script indentation checks to happen\n> frequently.\n> \n> \n> I'll look into writing a command to update typedefs.list with all the local\n> changes.\n\nIt turns out that updating the in-tree typedefs.list would be very noisy. On\nmy local linux system I get\n 1 file changed, 422 insertions(+), 1 deletion(-)\n\nOn a mac mini I get\n 1 file changed, 351 insertions(+), 1 deletion(-)\n\nWe could possibly address that by updating the in-tree typedefs.list a bit\nmore aggressively. Sure looks like the source systems are on the older side.\n\n\nBut in the attached patch I've implemented this slightly differently. If the\ntooling to do so is available, the indent-* targets explained above,\nuse/depend on src/tools/pgindent/typedefs.list.merged (in the build dir),\nwhich is the combination of a src/tools/pgindent/typedefs.list.local generated\nfor the local binaries/libraries and the source tree\nsrc/tools/pgindent/typedefs.list.\n\nsrc/tools/pgindent/typedefs.list.local is generated in fragments, with one\nfragment for each build target. 
That way the whole file doesn't have to be\nregenerated all the time, which can save a good bit of time (although obviously\nless when hacking on the backend).\n\nThis makes it quite quick to locally indent, without needing to\nfiddle around with manually modifying typedefs.list or using a separate\ntypedefs.list.\n\n\nIn a third commit I added a 'nitpick' configure time option, defaulting to\noff, which runs an indentation check. The failure mode of that currently isn't\nvery helpful though, as it just uses --silent-diff.\n\n\nAll of this currently is meson only, largely because I don't feel like\nspending the time messing with the configure build, particularly before there\nis any agreement on this being the thing to do.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 18 Oct 2023 17:56:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It turns out that updating the in-tree typedefs.list would be very noisy. On\n> my local linux system I get\n> 1 file changed, 422 insertions(+), 1 deletion(-)\n> On a mac mini I get\n> 1 file changed, 351 insertions(+), 1 deletion(-)\n\nThat seems like it needs a considerably closer look. What exactly\nis getting added/deleted?\n\n> We could possibly address that by updating the in-tree typedefs.list a bit\n> more aggressively. Sure looks like the source systems are on the older side.\n\nReally? Per [1] we've currently got contributions from calliphoridae\nwhich is Debian sid, crake which is Fedora 38, indri/sifaka which are\nmacOS Sonoma. Were you really expecting something newer, and if so what?\n\n> But in the attached patch I've implemented this slightly differently. If the\n> tooling to do so is available, the indent-* targets explained above,\n> use/depend on src/tools/pgindent/typedefs.list.merged (in the build dir),\n> which is the combination of a src/tools/pgindent/typedefs.list.local generated\n> for the local binaries/libraries and the source tree\n> src/tools/pgindent/typedefs.list.\n\nHmm ... that allows indenting your C files, but how do you get from that\nto a non-noisy patch to commit to typedefs.list?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/typedefs.pl?show_list\n\n\n",
"msg_date": "Wed, 18 Oct 2023 21:29:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-18 21:29:37 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > It turns out that updating the in-tree typedefs.list would be very noisy. On\n> > my local linux system I get\n> > 1 file changed, 422 insertions(+), 1 deletion(-)\n> > On a mac mini I get\n> > 1 file changed, 351 insertions(+), 1 deletion(-)\n>\n> That seems like it needs a considerably closer look. What exactly\n> is getting added/deleted?\n\nTypes from bison, openssl, libxml, libxslt, icu and libc, at least. If I\nenable LLVM, there are even more.\n\n(I think I figured out what's happening further down)\n\n\n\n> > We could possibly address that by updating the in-tree typedefs.list a bit\n> > more aggressively. Sure looks like the source systems are on the older side.\n>\n> Really? Per [1] we've currently got contributions from calliphoridae\n> which is Debian sid, crake which is Fedora 38, indri/sifaka which are\n> macOS Sonoma. Were you really expecting something newer, and if so what?\n\nIt's quite odd, I see plenty more types than those. I can't really explain why\nthey're not being picked up on those animals.\n\nE.g. for me bison generated files contain typedefs like\n\ntypedef int_least8_t yytype_int8;\ntypedef signed char yytype_int8;\ntypedef yytype_int8 yy_state_t;\n\nyet they don't show up in the buildfarm typedefs output. It's not a thing of\nthe binary, I checked that the symbols are present.\n\nI have a hard time parsing the buildfarm code for generating the typedefs\nfile, tbh.\n\n<stare>\n\nAh, I see. If I interpret that correctly, the code filters out symbols it\ndoesn't find in some .[chly] file in the *source* directory. This code is,\nuh, barely readable and massively underdocumented.\n\nI guess I need to reimplement that :/. Don't immediately see how this could\nbe implemented for in-tree autoconf builds...\n\n\n> > But in the attached patch I've implemented this slightly differently. 
If the\n> > tooling to do so is available, the indent-* targets explained above,\n> > use/depend on src/tools/pgindent/typedefs.list.merged (in the build dir),\n> > which is the combination of a src/tools/pgindent/typedefs.list.local generated\n> > for the local binaries/libraries and the source tree\n> > src/tools/pgindent/typedefs.list.\n>\n> Hmm ... that allows indenting your C files, but how do you get from that\n> to a non-noisy patch to commit to typedefs.list?\n\nIt doesn't provide that on its own. Being able to painlessly indent the files\nseems pretty worthwhile already. But clearly it'd be much better if we can\nautomatically update typedefs.list.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 18 Oct 2023 19:18:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-18 19:18:13 -0700, Andres Freund wrote:\n> On 2023-10-18 21:29:37 -0400, Tom Lane wrote:\n> Ah, I see. If I interpret that correctly, the code filters out symbols it\n> doesn't find in in some .[chly] file in the *source* directory. This code is,\n> uh, barely readable and massively underdocumented.\n>\n> I guess I need to reimplement that :/. Don't immediately see how this could\n> be implemented for in-tree autoconf builds...\n>\n> > > But in the attached patch I've implemented this slightly differently. If the\n> > > tooling to do so is available, the indent-* targets explained above,\n> > > use/depend on src/tools/pgindent/typedefs.list.merged (in the build dir),\n> > > which is the combination of a src/tools/pgindent/typedefs.list.local generated\n> > > for the local binaries/libraries and the source tree\n> > > src/tools/pgindent/typedefs.list.\n> >\n> > Hmm ... that allows indenting your C files, but how do you get from that\n> > to a non-noisy patch to commit to typedefs.list?\n>\n> It doesn't provide that on its own. Being able to painlessly indent the files\n> seems pretty worthwhile already. But clearly it'd much better if we can\n> automatically update typedefs.list.\n\nWith code for that added, things seem to work quite nicely. I added similar\nlogic to the buildfarm code that builds a list of all tokens in the source\ncode.\n\nWith that, the in-tree typedefs.list can be updated with new tokens found\nlocally *and* typedefs that aren't used anymore can be removed from the in-tree\ntypedefs.list (detected by no matching tokens found in the source code).\n\nThe only case this approach can't handle is newly referenced typedefs in code\nthat isn't built locally - which I think isn't particularly common and seems\nsomewhat fundamental. 
In those cases typedefs.list still can be updated\nmanually (and the sorting will still be \"fixed\" if necessary).\n\n\nThe code is still in a somewhat rough shape and I'll not finish polishing it\ntoday. I've attached the code anyway, don't be too rough :).\n\nThe changes from running \"ninja update-typedefs indent-tree\" on debian and\nmacos are attached as 0004 - the set of changes looks quite correct to me.\n\n\nThe buildfarm code filtered out a few typedefs manually:\n push(@badsyms, 'date', 'interval', 'timestamp', 'ANY');\nbut I don't really see why? Possibly that was needed with an older\npg_bsd_indent to prevent odd stuff?\n\n\nRight now building a new unified typedefs.list and copying it to the source\ntree are still separate targets, but that probably makes less sense now? Or\nperhaps it should be copied to the source tree when reindenting?\n\n\nI've only handled linux and macos in the typedefs gathering code. But the\nremaining OSs should be \"just a bit of work\" [TM].\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 18 Oct 2023 21:49:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Wed, 2023-10-18 at 22:34 +1300, David Rowley wrote:\n> It would be good to learn how many of the committers out of the ones\n> you listed that --enable-indent-checks would have saved from breaking\n> koel.\n\nI'd find that a useful option.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 23 Oct 2023 17:50:52 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 9:51 Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Wed, 2023-10-18 at 22:34 +1300, David Rowley wrote:\n> > It would be good to learn how many of the committers out of the ones\n> > you listed that --enable-indent-checks would have saved from breaking\n> > koel.\n>\n> I'd find that a useful option.\n\n\n+1. While I’ve made it part of my routine to keep my local work pgindented\nsince breaking koel once, an option like this would still be useful.",
"msg_date": "Tue, 24 Oct 2023 10:16:55 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 10:16:55AM +0900, Amit Langote wrote:\n> +1. While I’ve made it part of routine to keep my local work pgindented\n> since breaking Joel once, an option like this would still be useful.\n\nI'd be OK with an option like that. It is one of these things to type\nonce in a script, then forget about it.\n--\nMichael",
"msg_date": "Tue, 24 Oct 2023 13:31:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-10-17 Tu 09:52, Robert Haas wrote:\n> On Tue, Oct 17, 2023 at 6:34 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n>> I think *it is* dead easy to comply. If you run the following commands\n>> before committing/after rebasing, then koel should always be happy:\n>>\n>> src/tools/pgindent/pgindent src # works always but a bit slow\n>> src/tools/pgindent/pgindent $(git diff --name-only --diff-filter=ACMR)\n>> # much faster, but only works if you DID NOT change typedefs.list\n> In isolation, that's true, but the list of mistakes that you can make\n> while committing which will inconvenience everyone working on the\n> project is very long. Another one that comes up frequently is\n> forgetting to bump CATALOG_VERSION_NO, but you also need a good commit\n> message, and good comments, and a good Discussion link in the commit\n> message, and the right list of authors and reviewers, and to update\n> the docs (with spaces, not tabs) and the Makefiles (with tabs, not\n> spaces) and the meson stuff and, as if that weren't enough already,\n> you actually need the code to work! And that includes not only working\n> regularly but also with CLOBBER_CACHE_ALWAYS and debug_parallel_query\n> and so on. It's very easy to miss something somewhere. I put a LOT of\n> work into polishing my commits before I push them, and it's still not\n> that uncommon that I screw something up.\n\n\nYes, there's a lot to look out for, and you're a damn sight better at it \nthan I am. But we should try to automate the things that can be \nautomated, even if that leaves many tasks that can't be. I have three \nthings in my pre-commit hook: a check for catalog updates, a check for \nnew typedefs, and an indent check. And every one of them has saved me \nfrom doing things I should not be doing. 
They aren't perfect but they \nare useful.\n\nSlightly off topic, but apropos your message, maybe we should recommend \na standard git commit template.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 24 Oct 2023 09:40:01 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 09:40:01AM -0400, Andrew Dunstan wrote:\n> Slightly off topic, but apropos your message, maybe we should recommend a\n> standard git commit template.\n\nI use this and then automatically remove any sections that are empty.\n\n---------------------------------------------------------------------------\n\n\n|--- gitweb subject length limit ----------------|-email limit-|\n\n\nReported-by:\n\nDiagnosed-by:\n\nBug:\n\nDiscussion:\n\nAuthor:\n\nReviewed-by:\n\nTested-by:\n\nBackpatch-through:\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 24 Oct 2023 09:46:34 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-10-18 We 05:07, Jelte Fennema wrote:\n> I think --enable-indent-checks sounds like a good improvement to the\n> status quo. But I'm not confident that it will help remove the cases\n> where only a comment needs to be re-indented. Do commiters really\n> always run check-world again when only changing a typo in a comment? I\n> know I probably wouldn't (or at least not always).\n\n\nYeah. In fact I'm betting that a lot of the offending commits we've seen \ncome into this category. You build, you check, then you do some final \npolish. That's where a pre-commit hook can save you.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 24 Oct 2023 09:53:00 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 6:21 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Wed, 2023-10-18 at 22:34 +1300, David Rowley wrote:\n> > It would be good to learn how many of the committers out of the ones\n> > you listed that --enable-indent-checks would have saved from breaking\n> > koel.\n>\n> I'd find that a useful option.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 25 Oct 2023 15:12:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Hello,\n\n\n> Yes, there's a lot to look out for, and you're a damn sight better at\n> it \n> than I am. But we should try to automate the things that can be \n> automated, even if that leaves many tasks that can't be. I have three\n> things in my pre-commit hook: a check for catalog updates, a check\n> for \n> new typedefs, and an indent check.\n\nCould you share your configuration? Could we provide more helpers and\nintegration to help produce consistent code?\n\nFor the logfmt extension, I configured clang-format so that Emacs formats\nthe buffer on save. Any editor running clangd will use this. This\neases my mind about formatting. I need to investigate how to use\npgindent instead, or at least ensure clang-format produces the same output as\npgindent.\n\nhttps://gitlab.com/dalibo/logfmt/-/blob/0d808b368e649b23ac06ce2657354b67be398b21/.clang-format\n\nAutomating nitpicking in CI is good, but checking locally before sending\nthe patch will save way more round-trips.\n\nRegards,\nÉtienne\n\n\n",
"msg_date": "Fri, 27 Oct 2023 09:14:38 +0200",
"msg_from": "=?ISO-8859-1?Q?=C9tienne?= BERSAC <etienne.bersac@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "On 2023-10-27 Fr 03:14, Étienne BERSAC wrote:\n> Hello,\n>\n>\n>> Yes, there's a lot to look out for, and you're a damn sight better at\n>> it\n>> than I am. But we should try to automate the things that can be\n>> automated, even if that leaves many tasks that can't be. I have three\n>> things in my pre-commit hook: a check for catalog updates, a check\n>> for\n>> new typedefs, and an indent check.\n> Could you share your configuration ? Could we provide more helper and\n> integration to help produce consistent code ?\n\n\nSure. pre-commit hook file attached. I'm sure this could be improved on.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 27 Oct 2023 08:14:52 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-08-12 Sa 11:57, Andrew Dunstan wrote:\n>\n>\n> On 2023-08-11 Fr 19:17, Tom Lane wrote:\n>> Peter Geoghegan<pg@bowt.ie> writes:\n>>> I'm starting to have doubts about this policy. There have now been\n>>> quite a few follow-up \"fixes\" to indentation issues that koel\n>>> complained about. None of these fixups have been included in\n>>> .git-blame-ignore-revs. If things continue like this then \"git blame\"\n>>> is bound to become much less usable over time.\n>> FWIW, I'm much more optimistic than that. I think what we're seeing\n>> is just the predictable result of not all committers having yet\n>> incorporated \"pgindent it before committing\" into their workflow.\n>> The need for followup fixes should diminish as people start doing\n>> that. If you want to hurry things along, peer pressure on committers\n>> who clearly aren't bothering is the solution.\n>\n>\n> Yeah, part of the point of creating koel was to give committers a bit \n> of a nudge in that direction.\n>\n> With a git pre-commit hook it's pretty painless.\n>\n>\n\nBased on recent experience, where a lot of koel's recent complaints seem to \nbe about comments, I'd like to suggest a modest adjustment.\n\nFirst, we should provide a mode of pgindent that doesn't reflow \ncomments. pg_bsd_indent has a flag for this (-nfcb), so this should be \nrelatively simple. Second, koel could use that mode, so that it \nwouldn't complain about comments it thinks need to be reflowed. Of \ncourse, we'd fix these up with our regular pgindent runs.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 28 Oct 2023 11:47:44 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Based on recent experience, where a lot koel's recent complaints seem to \n> be about comments, I'd like to suggest a modest adjustment.\n\n> First, we should provide a mode of pgindent that doesn't reflow \n> comments. pg_bsd_indent has a flag for this (-nfcb), so this should be \n> relatively simple. Second, koel could use that mode, so that it \n> wouldn't complain about comments it thinks need to be reflowed. Of \n> course, we'd fix these up with our regular pgindent runs.\n\nSeems like a bit of a kluge. Maybe it's the right thing to do, but\nI don't think we have enough data points yet to be confident that\nit'd meaningfully reduce the number of breakages.\n\nOn a more abstract level: the point of trying to maintain indent\ncleanliness is so that if you modify a file and then want to run\npgindent on your own changes, you don't get incidental changes\nelsewhere in the file. This solution would break that, so I'm\nnot sure it isn't throwing the baby out with the bathwater.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Oct 2023 12:09:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
},
{
"msg_contents": "\nOn 2023-10-28 Sa 12:09, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Based on recent experience, where a lot koel's recent complaints seem to\n>> be about comments, I'd like to suggest a modest adjustment.\n>> First, we should provide a mode of pgindent that doesn't reflow\n>> comments. pg_bsd_indent has a flag for this (-nfcb), so this should be\n>> relatively simple. Second, koel could use that mode, so that it\n>> wouldn't complain about comments it thinks need to be reflowed. Of\n>> course, we'd fix these up with our regular pgindent runs.\n> Seems like a bit of a kluge. Maybe it's the right thing to do, but\n> I don't think we have enough data points yet to be confident that\n> it'd meaningfully reduce the number of breakages.\n>\n> On a more abstract level: the point of trying to maintain indent\n> cleanliness is so that if you modify a file and then want to run\n> pgindent on your own changes, you don't get incidental changes\n> elsewhere in the file. This solution would break that, so I'm\n> not sure it isn't throwing the baby out with the bathwater.\n\n\nYeah, could be.\n\n\ncheers\n\n\nandrew.\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 29 Oct 2023 10:22:27 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: run pgindent on a regular basis / scripted manner"
}
] |
[
{
"msg_contents": "Hi all,\n\nPer the following commit in upstream SELinux, security_context_t has\nbeen marked as deprecated, generating complaints with\n-Wdeprecated-declarations:\nhttps://github.com/SELinuxProject/selinux/commit/7a124ca2758136f49cc38efc26fb1a2d385ecfd9\n\nThis can be seen with Debian sid when building contrib/sepgsql/, as\nwe have libselinux 3.1 there. Per the upstream repo,\nsecurity_context_t maps to char * in include/selinux/selinux.h, so we\ncan easily get rid of the warnings with the attached patch that replaces\nthe references to security_context_t. Funnily, our code already mixes\nboth definitions, see for example sepgsql_set_client_label, so this\nclarifies things.\n\nAny thoughts?\n--\nMichael",
"msg_date": "Thu, 13 Aug 2020 10:27:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "security_context_t marked as deprecated in libselinux 3.1"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Per the following commit in upstream SELinux, security_context_t has\n> been marked as deprecated, generating complains with\n> -Wdeprecated-declarations:\n> https://github.com/SELinuxProject/selinux/commit/7a124ca2758136f49cc38efc26fb1a2d385ecfd9\n\nHuh. Apparently it's been considered legacy for a good while, too.\n\n> This can be seen with Debian GID when building contrib/selinux/, as it\n> we have libselinux 3.1 there. Per the upstream repo,\n> security_context_t maps to char * in include/selinux/selinux.h, so we\n> can get rid easily of the warnings with the attached that replaces\n> the references to security_context_t.\n\nUmmm ... aren't you going to get some cast-away-const warnings now?\nOr are all of the called functions declared as taking \"const char *\"\nnot just \"char *\"?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Aug 2020 22:50:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: security_context_t marked as deprecated in libselinux 3.1"
},
{
"msg_contents": "On Wed, Aug 12, 2020 at 10:50:21PM -0400, Tom Lane wrote:\n> Ummm ... aren't you going to get some cast-away-const warnings now?\n> Or are all of the called functions declared as taking \"const char *\"\n> not just \"char *\"?\n\nLet me see.. The function signatures we use have been visibly changed\nin 9eb9c932, which comes down to a point between 2.2.2 and 2.3, and\nthere are two of them we care about, both of which now use \"const char *\":\n- security_check_context_raw()\n- security_compute_create_name_raw()\nWe claim in the docs that the minimum version of libselinux supported\nis 2.1.10 (7a86fe1a from March 2012).\n\nThen, the only buildfarm animal I know of testing selinux is\nrhinoceros, which uses CentOS 7.1, and this visibly already bundles\nlibselinux 2.5, which was released in 2016 (2b69984), per the RPM list\nhere:\nhttp://mirror.centos.org/centos/7/\nJoe, what's the version of libselinux used in rhinoceros? 2.5?\n\nBased on this information, what if we increased the minimum supported\nversion to 2.3 then? That's a release from 2014, and maintaining such legacy\ncode does not seem much worth the effort IMO.\n--\nMichael",
"msg_date": "Thu, 13 Aug 2020 14:22:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: security_context_t marked as deprecated in libselinux 3.1"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Aug 12, 2020 at 10:50:21PM -0400, Tom Lane wrote:\n>> Ummm ... aren't you going to get some cast-away-const warnings now?\n\n> Let me see.. The function signatures we use have been visibly changed\n> in 9eb9c932, which comes down to a point between 2.2.2 and 2.3, and\n> there are two of them we care about, both use now \"const char *\":\n> - security_check_context_raw()\n> - security_compute_create_name_raw()\n\nOK, it's all good then.\n\n> Based on this information, what if we increased the minimum support to\n> 2.3 then? That's a release from 2014, and maintaining such legacy\n> code does not seem much worth the effort IMO.\n\nWell, \"you get a compiler warning\" isn't a reason to consider the version\nunsupported. There are probably going to be a few other warnings you get\nwhen building on an ancient platform --- as long as it works, I think\nwe're fine. So based on this, no objection, and I think no need to\nchange our statement about what's supported.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Aug 2020 01:29:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: security_context_t marked as deprecated in libselinux 3.1"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 01:29:35AM -0400, Tom Lane wrote:\n> Well, \"you get a compiler warning\" isn't a reason to consider the version\n> unsupported. There are probably going to be a few other warnings you get\n> when building on an ancient platform --- as long as it works, I think\n> we're fine. So based on this, no objection, and I think no need to\n> change our statement about what's supported.\n\nOkay, thanks for confirming. Let's do so then, I'll just wait a bit\nto see if there are more comments from others.\n--\nMichael",
"msg_date": "Thu, 13 Aug 2020 14:35:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: security_context_t marked as deprecated in libselinux 3.1"
},
{
"msg_contents": "On 8/13/20 1:22 AM, Michael Paquier wrote:\n> On Wed, Aug 12, 2020 at 10:50:21PM -0400, Tom Lane wrote:\n>> Ummm ... aren't you going to get some cast-away-const warnings now?\n>> Or are all of the called functions declared as taking \"const char *\"\n>> not just \"char *\"?\n> \n> Let me see.. The function signatures we use have been visibly changed\n> in 9eb9c932, which comes down to a point between 2.2.2 and 2.3, and\n> there are two of them we care about, both use now \"const char *\":\n> - security_check_context_raw()\n> - security_compute_create_name_raw()\n> We claim in the docs that the minimum version of libselinux supported\n> is 2.1.10 (7a86fe1a from march 2012).\n> \n> Then, the only buildfarm animal I know of testing selinux is\n> rhinoceros, that uses CentOS 7.1, and this visibly already bundles\n> libselinux 2.5 that was released in 2016 (2b69984), per the RPM list\n> here:\n> http://mirror.centos.org/centos/7/\n> Joe, what's the version of libselinux used in rhinoceros? 2.5?\n\n\nrpm -q libselinux\nlibselinux-2.5-15.el7.x86_64\n\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Thu, 13 Aug 2020 06:54:41 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: security_context_t marked as deprecated in libselinux 3.1"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 06:54:41AM -0400, Joe Conway wrote:\n> On 8/13/20 1:22 AM, Michael Paquier wrote:\n>> Joe, what's the version of libselinux used in rhinoceros? 2.5?\n>\n> rpm -q libselinux\n> libselinux-2.5-15.el7.x86_64\n\nThanks!\n--\nMichael",
"msg_date": "Fri, 14 Aug 2020 09:10:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: security_context_t marked as deprecated in libselinux 3.1"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 02:35:28PM +0900, Michael Paquier wrote:\n> Okay, thanks for confirming. Let's do so then, I'll just wait a bit\n> to see if there are more comments from others.\n\nApplied on HEAD then. Thanks for the discussion!\n--\nMichael",
"msg_date": "Fri, 14 Aug 2020 09:43:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: security_context_t marked as deprecated in libselinux 3.1"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Applied on HEAD then. Thanks for the discussion!\n\nShould we back-patch that? Usually I figure that people might want\nto build back PG branches on newer platforms at some point, so that\nit's useful to apply portability fixes across-the-board. On the\nother hand, since it's only a compiler warning, maybe it's not worth\nthe trouble.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Aug 2020 20:47:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: security_context_t marked as deprecated in libselinux 3.1"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 08:47:28PM -0400, Tom Lane wrote:\n> Should we back-patch that? Usually I figure that people might want\n> to build back PG branches on newer platforms at some point, so that\n> it's useful to apply portability fixes across-the-board. On the\n> other hand, since it's only a compiler warning, maybe it's not worth\n> the trouble.\n\nNot sure that's worth the trouble as long as people don't complain\nabout it directly, and it does not prevent the contrib module to\nwork.\n--\nMichael",
"msg_date": "Fri, 14 Aug 2020 10:05:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: security_context_t marked as deprecated in libselinux 3.1"
},
{
"msg_contents": "On 2020-Aug-14, Michael Paquier wrote:\n\n> On Thu, Aug 13, 2020 at 08:47:28PM -0400, Tom Lane wrote:\n> > Should we back-patch that? Usually I figure that people might want\n> > to build back PG branches on newer platforms at some point, so that\n> > it's useful to apply portability fixes across-the-board. On the\n> > other hand, since it's only a compiler warning, maybe it's not worth\n> > the trouble.\n> \n> Not sure that's worth the trouble as long as people don't complain\n> about it directly, and it does not prevent the contrib module to\n> work.\n\nFWIW I just had a CI job fail the \"warnings\" test because of lacking\nthis patch in the back branches :-) What do you think about\nback-patching this to at least 11? I would say 10, but since that one\nis going to end soon, it might not be worth much effort. OTOH maybe we\nwant to backpatch all the way back to 9.2 given the no-warnings policy\nwe recently acquired.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I must say, I am absolutely impressed with what pgsql's implementation of\nVALUES allows me to do. It's kind of ridiculous how much \"work\" goes away in\nmy code. Too bad I can't do this at work (Oracle 8/9).\" (Tom Allison)\n http://archives.postgresql.org/pgsql-general/2007-06/msg00016.php\n\n\n",
"msg_date": "Thu, 3 Nov 2022 19:10:28 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: security_context_t marked as deprecated in libselinux 3.1"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2020-Aug-14, Michael Paquier wrote:\n>> On Thu, Aug 13, 2020 at 08:47:28PM -0400, Tom Lane wrote:\n>>> Should we back-patch that? Usually I figure that people might want\n>>> to build back PG branches on newer platforms at some point, so that\n>>> it's useful to apply portability fixes across-the-board. On the\n>>> other hand, since it's only a compiler warning, maybe it's not worth\n>>> the trouble.\n\n>> Not sure that's worth the trouble as long as people don't complain\n>> about it directly, and it does not prevent the contrib module to\n>> work.\n\n> FWIW I just had a CI job fail the \"warnings\" test because of lacking\n> this patch in the back branches :-) What do you think about\n> back-patching this to at least 11?\n\nNo objection to back-patching from me.\n\n> I would say 10, but since that one\n> is going to end soon, it might not be worth much effort. OTOH maybe we\n> want to backpatch all the way back to 9.2 given the no-warnings policy\n> we recently acquired.\n\nI'm not sure that no-warnings policy extends to stuff as far off the\nbeaten path as sepgsql. However, I won't stand in the way if you\nwant to do that. One point though: if you want to touch v10, I'd\nsuggest waiting till after next week's releases. Unlikely as it\nis that this'd break anything, I don't think we should risk it\nin the branch's last release.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Nov 2022 19:01:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: security_context_t marked as deprecated in libselinux 3.1"
},
{
"msg_contents": "On Thu, Nov 03, 2022 at 07:01:20PM -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> FWIW I just had a CI job fail the \"warnings\" test because of lacking\n>> this patch in the back branches :-) What do you think about\n>> back-patching this to at least 11?\n> \n> No objection to back-patching from me.\n\nFine by me.\n\n>> I would say 10, but since that one\n>> is going to end soon, it might not be worth much effort. OTOH maybe we\n>> want to backpatch all the way back to 9.2 given the no-warnings policy\n>> we recently acquired.\n> \n> I'm not sure that no-warnings policy extends to stuff as far off the\n> beaten path as sepgsql. However, I won't stand in the way if you\n> want to do that. One point though: if you want to touch v10, I'd\n> suggest waiting till after next week's releases. Unlikely as it\n> is that this'd break anything, I don't think we should risk it\n> in the branch's last release.\n\nIn line of ad96696, seems like that it would make sense to do the same\nhere even if the bar is lower. sepgsql has not changed in years, so I\nsuspect few conflicts. Alvaro, if you want to take care of that,\nthat's fine by me. I could do it, but not before next week.\n\nAgreed to wait after the next minor release.\n--\nMichael",
"msg_date": "Fri, 4 Nov 2022 08:49:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: security_context_t marked as deprecated in libselinux 3.1"
},
{
"msg_contents": "On Fri, Nov 04, 2022 at 08:49:24AM +0900, Michael Paquier wrote:\n> In line of ad96696, seems like that it would make sense to do the same\n> here even if the bar is lower. sepgsql has not changed in years, so I\n> suspect few conflicts. Alvaro, if you want to take care of that,\n> that's fine by me. I could do it, but not before next week.\n\nI got to look at that, now that the minor releases have been tagged,\nand the change has no conflicts down to 9.3. 9.2 needed a slight\ntweak, though, but it seemed fine as well. (Tested the build on all\nbranches.) So done all the way down, sticking with the new no-warning\npolicy for all the buildable branches.\n--\nMichael",
"msg_date": "Wed, 9 Nov 2022 09:53:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: security_context_t marked as deprecated in libselinux 3.1"
},
{
"msg_contents": "On 2022-Nov-09, Michael Paquier wrote:\n\n> I got to look at that, now that the minor releases have been tagged,\n> and the change has no conflicts down to 9.3. 9.2 needed a slight\n> tweak, though, but it seemed fine as well. (Tested the build on all\n> branches.) So done all the way down, sticking with the new no-warning\n> policy for all the buildable branches.\n\nThank you :-)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 9 Nov 2022 09:20:50 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: security_context_t marked as deprecated in libselinux 3.1"
}
]
[
{
"msg_contents": "Hi hackers,\n\nPGRES_FATAL_ERROR is the result status in most cases on the client side when the backend process raises an error: when a query fails to execute, PGRES_FATAL_ERROR is returned.\nBut PGRES_FATAL_ERROR is also returned in another case: when the network between the client and the backend breaks immediately after the backend process has committed locally. The client then gets the same result status (PGRES_FATAL_ERROR) as for a normal error (transaction failed), even though the transaction actually succeeded.\n\nWhen libpq detects an EOF, a PGresult with status PGRES_FATAL_ERROR is returned to the client.\nWe can check the error message in the PGresult to see why an error was returned, but that is unreliable and tricky.\n\nThe result status should be unknown in the above case: the server has received the request, but the client got no response, so the transaction may have succeeded or failed.\n\nRegards,\nHao Wu",
"msg_date": "Thu, 13 Aug 2020 02:12:13 +0000",
"msg_from": "Hao Wu <hawu@vmware.com>",
"msg_from_op": true,
"msg_subject": "Missing unknown status in PGresult"
}
]
[
{
"msg_contents": "Hello,\n\nI'm not sure if I should have posted this to pgsql-advocacy, but this is being developed so I posted here.\n\nDoes anyone know if this development come to open source Postgres, or only to the cloud services of Microsoft and Google?\n\n(I wonder this will become another reason that Postgres won't incorporate optimizer hint feature.)\n\nData systems that learn to be better\nhttp://news.mit.edu/2020/mit-data-systems-learn-be-better-tsunami-bao-0810\n\n\n[Quote]\n--------------------------------------------------\nAs a first step toward this vision, Kraska and colleagues developed Tsunami and Bao. Tsunami uses machine learning to automatically re-organize a dataset’s storage layout based on the types of queries that its users make. Tests show that it can run queries up to 10 times faster than state-of-the-art systems. What’s more, its datasets can be organized via a series of \"learned indexes\" that are up to 100 times smaller than the indexes used in traditional systems. \n\nBao, meanwhile, focuses on improving the efficiency of query optimization through machine learning.\n...\nTraditional query optimizers take years to build, are very hard to maintain, and, most importantly, do not learn from their mistakes. Bao is the first learning-based approach to query optimization that has been fully integrated into the popular database management system PostgreSQL. 
Lead author Ryan Marcus, a postdoc in Kraska’s group, says that Bao produces query plans that run up to 50 percent faster than those created by the PostgreSQL optimizer, meaning that it could help to significantly reduce the cost of cloud services, like Amazon’s Redshift, that are based on PostgreSQL.\n\nKraska says that in contrast to other learning-based approaches to query optimization, Bao learns much faster and can outperform open-source and commercial optimizers with as little as one hour of training time.In the future, his team aims to integrate Bao into cloud systems to improve resource utilization in environments where disk, RAM, and CPU time are scarce resources.\n...\nThe work was done as part of the Data System and AI Lab (DSAIL@CSAIL), which is sponsored by Intel, Google, Microsoft, and the U.S. National Science Foundation. \n--------------------------------------------------\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Thu, 13 Aug 2020 03:26:33 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Autonomous database is coming to Postgres?"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 03:26:33AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> Hello,\n> \n> I'm not sure if I should have posted this to pgsql-advocacy, but this is being developed so I posted here.\n> \n> Does anyone know if this development come to open source Postgres, or only to the cloud services of Microsoft and Google?\n> \n> (I wonder this will become another reason that Postgres won't incorporate optimizer hint feature.)\n> \n> Data systems that learn to be better\n> http://news.mit.edu/2020/mit-data-systems-learn-be-better-tsunami-bao-0810\n\nIt seems interesting, but I don't know anyone working on this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Fri, 14 Aug 2020 08:55:53 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Autonomous database is coming to Postgres?"
},
{
"msg_contents": "> On Fri, Aug 14, 2020 at 08:55:53AM -0400, Bruce Momjian wrote:\n> > On Thu, Aug 13, 2020 at 03:26:33AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> > Hello,\n> >\n> > I'm not sure if I should have posted this to pgsql-advocacy, but this is being developed so I posted here.\n> > Does anyone know if this development come to open source Postgres, or only to the cloud services of Microsoft and Google?\n> > (I wonder this will become another reason that Postgres won't incorporate optimizer hint feature.)\n> >\n> > Data systems that learn to be better\n> > http://news.mit.edu/2020/mit-data-systems-learn-be-better-tsunami-bao-0810\n>\n> It seems interesting, but I don't know anyone working on this.\n\nTim Kraska mentioned in twitter plans about releasing BAO as an open\nsource project (PostgreSQL extension I guess?), but there seems to be no\ninteraction with the community.\n\n\n",
"msg_date": "Fri, 14 Aug 2020 15:56:32 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autonomous database is coming to Postgres?"
}
]
[
{
"msg_contents": "Hello!\n\nAccording to the docs[1], one may use DEFAULT keyword while inserting\ninto generated columns (stored and identity). However, currently it\nworks only for a single VALUES sublist with DEFAULT for a generated column\nbut not for the case when VALUES RTE is used. This is not being tested\nand it is broken.\n\nI am attaching two patches. One for tests and another one with the\nproposed changes based on ideas from Andrew on IRC. So if all good there\ngoes the credit where credit is due. If patch is no good, then it is\nlikely my misunderstanding how to put words into code :-)\n\nThis is my only second patch to PostgreSQL (the first one was rejected)\nso don't be too harsh :-) It may not be perfect but I am open for a\nfeedback and this is just to get the ball rolling and to let the\ncommunity know about this issue.\n\nBefore you ask why would I want to insert DEFAULTs ... well, there are\nORMs[2] that still need to be patched and current situation contradicts\ndocumentation[1].\n\nFootnotes:\n[1] https://www.postgresql.org/docs/devel/ddl-generated-columns.html\n\n[2] https://github.com/rails/rails/pull/39368#issuecomment-670351379\n\n--\nMikhail",
"msg_date": "Wed, 12 Aug 2020 23:30:50 -0500",
"msg_from": "Mikhail Titov <mlt@gmx.us>",
"msg_from_op": true,
"msg_subject": "[bug+patch] Inserting DEFAULT into generated columns from VALUES RTE"
},
{
"msg_contents": "Hi\n\nčt 13. 8. 2020 v 6:31 odesílatel Mikhail Titov <mlt@gmx.us> napsal:\n\n> Hello!\n>\n> According to the docs[1], one may use DEFAULT keyword while inserting\n> into generated columns (stored and identity). However, currently it\n> works only for a single VALUES sublist with DEFAULT for a generated column\n> but not for the case when VALUES RTE is used. This is not being tested\n> and it is broken.\n>\n> I am attaching two patches. One for tests and another one with the\n> proposed changes based on ideas from Andrew on IRC. So if all good there\n> goes the credit where credit is due. If patch is no good, then it is\n> likely my misunderstanding how to put words into code :-)\n>\n> This is my only second patch to PostgreSQL (the first one was rejected)\n> so don't be too harsh :-) It may not be perfect but I am open for a\n> feedback and this is just to get the ball rolling and to let the\n> community know about this issue.\n>\n> Before you ask why would I want to insert DEFAULTs ... well, there are\n> ORMs[2] that still need to be patched and current situation contradicts\n> documentation[1].\n>\n\nplease, assign your patch to commitfest application\n\nhttps://commitfest.postgresql.org/29/\n\nRegards\n\nPavel\n\n\n> Footnotes:\n> [1] https://www.postgresql.org/docs/devel/ddl-generated-columns.html\n>\n> [2] https://github.com/rails/rails/pull/39368#issuecomment-670351379\n>\n> --\n> Mikhail\n>",
"msg_date": "Thu, 13 Aug 2020 07:11:08 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug+patch] Inserting DEFAULT into generated columns from VALUES\n RTE"
},
{
"msg_contents": "The previously submitted patch somehow got its trailing spaces mangled on the\nway out. This is an attempt to use the application/octet-stream MIME type instead\nof text/x-patch to preserve them for the regression tests.\n\nOn Thu, Aug 13, 2020 at 12:11 AM, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> please, assign your patch to commitfest application\n\nHere is the backlink https://commitfest.postgresql.org/29/2681/\n\n--\nMikhail",
"msg_date": "Fri, 14 Aug 2020 14:57:38 -0500",
"msg_from": "Mikhail Titov <mlt@gmx.us>",
"msg_from_op": true,
"msg_subject": "Re: [bug+patch] Inserting DEFAULT into generated columns from VALUES\n RTE"
},
{
"msg_contents": "Mikhail Titov <mlt@gmx.us> writes:\n> Previously submitted patch got somehow trailing spaces mangled on the\n> way out. This is an attempt to use application/octet-stream MIME instead\n> of text/x-patch to preserve those for regression tests.\n\nI took a quick look through this. I agree with the general idea of\ndetecting cases where all of the entries in a VALUES column are DEFAULT,\nbut the implementation needs work.\n\nThe cfbot reports that it doesn't compile [1]:\n\nparse_relation.c: In function ‘expandNSItemVars’:\nparse_relation.c:2992:34: error: ‘T_Node’ undeclared (first use in this function)\n std = list_nth_node(Node, row, colindex);\n ^\n\nI suspect this indicates that you did not use --enable-cassert in your own\ntesting, which is usually a bad idea; that enables a lot of checks that\nyou really want to have active for development purposes.\n\nHacking expandNSItemVars() for this purpose is an extremely bad idea.\nThe API spec for that is\n *\t Produce a list of Vars, and optionally a list of column names,\n *\t for the non-dropped columns of the nsitem.\nThis patch breaks that specification, and in turn breaks callers that\nexpect it to be adhered to. I see at least one caller that will suffer\nassertion failures because of that, which reinforces my suspicion that\nyou did not test with assertions on.\n\nI think you'd be better off to make transformInsertStmt(), specifically\nits multi-VALUES-rows code path, check for all-DEFAULT columns and adjust\nthe tlist itself. Doing it there might be a good bit less inefficient\nfor very long VALUES lists, too, which is a case that we do worry about.\nSince that's already iterating through the sub-lists, you could track\nwhether all rows seen so far contain SetToDefault in each column position,\nand avoid extra scans of the sublists. 
(A BitmapSet might be a convenient\nrepresentation of that, though you could also use a bool array I suppose.)\n\nI do not care for what you did in rewriteValuesRTE() either: just removing\na sanity check isn't OK, unless you've done something to make the sanity\ncheck unnecessary which you surely have not. Perhaps you could extend\nthe initial scan of the tlist (lines 1294-1310) to notice SetToDefault\nnodes as well as Var nodes and keep track of which columns have those.\nThen you could cross-check that one or the other case applies whenever\nyou see a SetToDefault in the VALUES lists.\n\nBTW, another thing that needs checking is whether a rule containing\nan INSERT like this will reverse-list sanely. The whole idea of\nreplacing some of the Vars might not work so far as ruleutils.c is\nconcerned. In that case I think we might have to implement this\nby having transformInsertStmt restructure the VALUES lists to\nphysically remove the all-DEFAULT column, and adjust the target column\nlist accordingly --- that is, make a parse-time transformation of\n\tINSERT INTO gtest0 VALUES (1, DEFAULT), (2, DEFAULT);\ninto\n\tINSERT INTO gtest0(a) VALUES (1), (2);\nThat'd have the advantage that you'd not have to hack up the\nrewriter at all.\n\nAlso, in the case where the user has failed to ensure that all the\ncolumn entries are DEFAULT, I suppose that we'll still get the same\nerror as now:\n\n\tregression=# INSERT INTO gtest0 VALUES (1, DEFAULT), (2, 42);\n\tERROR: cannot insert into column \"b\"\n\tDETAIL: Column \"b\" is a generated column.\n\nThis seems fairly confusing and unhelpful. Perhaps it's not this\npatch's job to improve it, but it'd be nice if we could do better.\nOne easy change would be to make the error message more specific:\n\n\tERROR: cannot insert a non-DEFAULT value into column \"b\"\n\n(I think this wording is accurate, but I might be wrong.) 
It'd be\neven better if we could emit an error cursor pointing at (one of)\nthe entries that are not DEFAULT, since in a command with a long\nVALUES list it might not be that obvious where you screwed up.\n\nFWIW, I would not bother splitting a patch like this into two parts.\nThat increases your effort level, and it increases the reviewer's\neffort to apply it too, and this patch isn't big enough to justify it.\n\n\t\t\tregards, tom lane\n\n[1] https://travis-ci.org/github/postgresql-cfbot/postgresql/builds/724543451\n\n\n",
"msg_date": "Sun, 06 Sep 2020 17:42:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [bug+patch] Inserting DEFAULT into generated columns from VALUES\n RTE"
},
{
"msg_contents": "On Sun, 6 Sept 2020 at 22:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I think you'd be better off to make transformInsertStmt(), specifically\n> its multi-VALUES-rows code path, check for all-DEFAULT columns and adjust\n> the tlist itself. Doing it there might be a good bit less inefficient\n> for very long VALUES lists, too, which is a case that we do worry about.\n> Since that's already iterating through the sub-lists, you could track\n> whether all rows seen so far contain SetToDefault in each column position,\n> and avoid extra scans of the sublists. (A BitmapSet might be a convenient\n> representation of that, though you could also use a bool array I suppose.)\n>\n> I do not care for what you did in rewriteValuesRTE() either: just removing\n> a sanity check isn't OK, unless you've done something to make the sanity\n> check unnecessary which you surely have not. Perhaps you could extend\n> the initial scan of the tlist (lines 1294-1310) to notice SetToDefault\n> nodes as well as Var nodes and keep track of which columns have those.\n> Then you could cross-check that one or the other case applies whenever\n> you see a SetToDefault in the VALUES lists.\n\nThat's not quite right because by the time rewriteValuesRTE() sees the\ntlist, it will contain already-rewritten generated column expressions,\nnot SetToDefault nodes. If we're going to keep that sanity check (and\nI think that we should), I think that the way to do it is to have\nrewriteTargetListIU() record which columns it has expanded defaults\nfor, and pass that information to rewriteValuesRTE(). Those columns of\nthe VALUES RTE are no longer used in the query, so the sanity check\ncan be amended to ignore them while continuing to check the other\ncolumns.\n\nIncidentally, there is another way of causing that sanity check to\nfail -- an INSERT ... 
OVERRIDING USER VALUE query with some DEFAULTS\nin the VALUES RTE (but not necessarily all DEFAULTs) will trigger it.\nSo even if the parser completely removed any all-DEFAULT columns from\nthe VALUES RTE, some work in the rewriter would still be necessary.\n\n\n> BTW, another thing that needs checking is whether a rule containing\n> an INSERT like this will reverse-list sanely. The whole idea of\n> replacing some of the Vars might not work so far as ruleutils.c is\n> concerned. In that case I think we might have to implement this\n> by having transformInsertStmt restructure the VALUES lists to\n> physically remove the all-DEFAULT column, and adjust the target column\n> list accordingly --- that is, make a parse-time transformation of\n> INSERT INTO gtest0 VALUES (1, DEFAULT), (2, DEFAULT);\n> into\n> INSERT INTO gtest0(a) VALUES (1), (2);\n> That'd have the advantage that you'd not have to hack up the\n> rewriter at all.\n\nI think it's actually easier to just do it all in the rewriter -- at\nthe point where we see that we're about to insert potentially illegal\nvalues from a VALUES RTE into a generated column, scan it to see if\nall the values in that column are DEFAULTs, and if so trigger the\nexisting code to replace the Var in the tlist with the generated\ncolumn expression. That way we only do extra work in the case for\nwhich we're currently throwing an error, rather than for every query.\nAlso, I think that scanning the VALUES lists in this way is likely to\nbe cheaper than rebuilding them to remove a column.\n\nAttached is a patch doing it that way, along with additional\nregression tests that trigger both the original error and the\nsanity-check error triggered by INSERT ... OVERRIDING USER VALUES. I\nalso added a few additional comments where I found the existing code a\nlittle non-obvious.\n\nI haven't touched the existing error messages. I think that's a\nsubject for a separate patch.\n\nRegards,\nDean",
"msg_date": "Fri, 20 Nov 2020 14:30:25 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug+patch] Inserting DEFAULT into generated columns from VALUES\n RTE"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> I think it's actually easier to just do it all in the rewriter -- at\n> the point where we see that we're about to insert potentially illegal\n> values from a VALUES RTE into a generated column, scan it to see if\n> all the values in that column are DEFAULTs, and if so trigger the\n> existing code to replace the Var in the tlist with the generated\n> column expression. That way we only do extra work in the case for\n> which we're currently throwing an error, rather than for every query.\n\nThat makes sense, and it leads to a nicely small patch. I reviewed\nthis and pushed it. I found only one nitpicky bug: in\nfindDefaultOnlyColumns, the test must be bms_is_empty(default_only_cols)\nnot just default_only_cols == NULL, or it will fail to fall out early\nas intended when the first row contains some DEFAULTs but later rows\ndon't. I did tweak some of the commentary, too.\n\n> I haven't touched the existing error messages. I think that's a\n> subject for a separate patch.\n\nFair. After looking around a bit, I think that getting an error\ncursor as I'd speculated about is more trouble than it's worth.\nFor starters, we'd have to pass down the query string into this\ncode, and there might be some ticklish issues about whether a given\nchunk of parsetree came from that string or from some rule or view.\nHowever, I think that just adjusting the error string would be\nhelpful, as attached.\n\n(I'm also wondering why the second case is generic ERRCODE_SYNTAX_ERROR\nand not ERRCODE_GENERATED_ALWAYS. Didn't change it here, though.)\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 22 Nov 2020 15:58:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [bug+patch] Inserting DEFAULT into generated columns from VALUES\n RTE"
},
{
"msg_contents": "On Sun, 22 Nov 2020 at 20:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I found only one nitpicky bug: in\n> findDefaultOnlyColumns, the test must be bms_is_empty(default_only_cols)\n> not just default_only_cols == NULL, or it will fail to fall out early\n> as intended when the first row contains some DEFAULTs but later rows\n> don't.\n\nAh, good point. Thanks for fixing that.\n\n> > I haven't touched the existing error messages. I think that's a\n> > subject for a separate patch.\n>\n> Fair. After looking around a bit, I think that getting an error\n> cursor as I'd speculated about is more trouble than it's worth.\n> For starters, we'd have to pass down the query string into this\n> code, and there might be some ticklish issues about whether a given\n> chunk of parsetree came from that string or from some rule or view.\n> However, I think that just adjusting the error string would be\n> helpful, as attached.\n\n+1\n\n> (I'm also wondering why the second case is generic ERRCODE_SYNTAX_ERROR\n> and not ERRCODE_GENERATED_ALWAYS. Didn't change it here, though.)\n\nI can't see any reason for it to be different, and\nERRCODE_GENERATED_ALWAYS seems like the right code to use for both\ncases.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 23 Nov 2020 14:45:18 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [bug+patch] Inserting DEFAULT into generated columns from VALUES\n RTE"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Sun, 22 Nov 2020 at 20:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> However, I think that just adjusting the error string would be\n>> helpful, as attached.\n\n> +1\n\n>> (I'm also wondering why the second case is generic ERRCODE_SYNTAX_ERROR\n>> and not ERRCODE_GENERATED_ALWAYS. Didn't change it here, though.)\n\n> I can't see any reason for it to be different, and\n> ERRCODE_GENERATED_ALWAYS seems like the right code to use for both\n> cases.\n\nSounds good to me; pushed that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Nov 2020 11:16:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [bug+patch] Inserting DEFAULT into generated columns from VALUES\n RTE"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile working on support for REINDEX for partitioned relations, I have\nnoticed an old bug in the logic of ReindexMultipleTables(): the list\nof relations to process is built in a first transaction, and then each\ntable is done in an independent transaction, but we don't actually\ncheck that the relation still exists when doing the real work. I\nthink that a complete fix involves two things:\n1) Addition of one SearchSysCacheExists1() at the beginning of the\nloop processing each item in the list in ReindexMultipleTables().\nThis would protect from a relation dropped, but that would not be\nenough if ReindexMultipleTables() is looking at a relation dropped\nbefore we lock it in reindex_table() or ReindexRelationConcurrently().\nStill that's simple, cheaper, and would protect from most problems.\n2) Be completely water-proof and adopt a logic close to what we do for\nVACUUM where we try to open a relation, but leave if it does not\nexist. This can be achieved with just some try_relation_open() calls\nwith the correct lock used, and we also need to have a new\nREINDEXOPT_* flag to control this behavior conditionally.\n\n2) and 1) are complementary, but 2) is invasive, so based on the lack\nof complaints we have seen that does not seem really worth\nbackpatching to me, and I think that we could also just have 1) on\nstable branches as a minimal fix, to take care of most of the\nproblems that could show up to users.\n\nAttached is a patch to fix all that, with a cheap isolation test that\nfails on HEAD with a cache lookup failure. I am adding that to the\nnext CF for now, and I would rather fix this issue before moving on\nwith REINDEX for partitioned relations as fixing this issue reduces\nthe use of session locks for partition trees. \n\nAny thoughts? \n--\nMichael",
"msg_date": "Thu, 13 Aug 2020 13:38:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "REINDEX SCHEMA/DATABASE/SYSTEM weak with dropped relations"
},
{
    "msg_contents": "On 13.08.2020 07:38, Michael Paquier wrote:\n> Hi all,\n>\n> While working on support for REINDEX for partitioned relations, I have\n> noticed an old bug in the logic of ReindexMultipleTables(): the list\n> of relations to process is built in a first transaction, and then each\n> table is done in an independent transaction, but we don't actually\n> check that the relation still exists when doing the real work. I\n> think that a complete fix involves two things:\n> 1) Addition of one SearchSysCacheExists1() at the beginning of the\n> loop processing each item in the list in ReindexMultipleTables().\n> This would protect from a relation dropped, but that would not be\n> enough if ReindexMultipleTables() is looking at a relation dropped\n> before we lock it in reindex_table() or ReindexRelationConcurrently().\n> Still that's simple, cheaper, and would protect from most problems.\n> 2) Be completely water-proof and adopt a logic close to what we do for\n> VACUUM where we try to open a relation, but leave if it does not\n> exist. This can be achieved with just some try_relation_open() calls\n> with the correct lock used, and we also need to have a new\n> REINDEXOPT_* flag to control this behavior conditionally.\n>\n> 2) and 1) are complementary, but 2) is invasive, so based on the lack\n> of complaints we have seen that does not seem really worth\n> backpatching to me, and I think that we could also just have 1) on\n> stable branches as a minimal fix, to take care of most of the\n> problems that could show up to users.\n>\n> Attached is a patch to fix all that, with a cheap isolation test that\n> fails on HEAD with a cache lookup failure. I am adding that to the\n> next CF for now, and I would rather fix this issue before moving on\n> with REINDEX for partitioned relations as fixing this issue reduces\n> the use of session locks for partition trees.\n>\n> Any thoughts?\n> --\n> Michael\n\nHi,\nI reviewed the patch. It does work and the code is clean and sane. It \nimplements a logic that we already had in CLUSTER, so I think it was \nsimply an oversight. Thank you for fixing this.\n\nI noticed that REINDEXOPT_MISSING_OK can be passed to the TOAST table \nreindex. I think it would be better to reset the flag in this \nreindex_relation() call, as we don't expect a concurrent DROP here.\n\n    /*\n     * If the relation has a secondary toast rel, reindex that too while we\n     * still hold the lock on the main table.\n     */\n    if ((flags & REINDEX_REL_PROCESS_TOAST) && OidIsValid(toast_relid))\n        result |= reindex_relation(toast_relid, flags, options);\n\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 31 Aug 2020 18:10:46 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX SCHEMA/DATABASE/SYSTEM weak with dropped relations"
},
{
"msg_contents": "On Mon, Aug 31, 2020 at 06:10:46PM +0300, Anastasia Lubennikova wrote:\n> I reviewed the patch. It does work and the code is clean and sane. It\n> implements a logic that we already had in CLUSTER, so I think it was simply\n> an oversight. Thank you for fixing this.\n\nThanks Anastasia for the review.\n\n> I noticed that REINDEXOPT_MISSING_OK can be passed to the TOAST table\n> reindex. I think it would be better to reset the flag in this\n> reindex_relation() call, as we don't expect a concurrent DROP here.\n\nGood point, and fixed by resetting the flag here if it is set.\n\nI have added some extra comments. There is one in\nReindexRelationConcurrently() to mention that there should be no extra\nuse of MISSING_OK once the list of indexes is built as session locks\nare taken where needed.\n\nDoes the version attached look fine to you? I have done one round of\nindentation while on it.\n--\nMichael",
"msg_date": "Tue, 1 Sep 2020 10:38:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX SCHEMA/DATABASE/SYSTEM weak with dropped relations"
},
{
"msg_contents": "On 01.09.2020 04:38, Michael Paquier wrote:\n> I have added some extra comments. There is one in\n> ReindexRelationConcurrently() to mention that there should be no extra\n> use of MISSING_OK once the list of indexes is built as session locks\n> are taken where needed.\nGreat, it took me a moment to understand the logic around index list \ncheck at first pass.\n> Does the version attached look fine to you? I have done one round of\n> indentation while on it.\n\nYes, this version is good.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 1 Sep 2020 13:25:27 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX SCHEMA/DATABASE/SYSTEM weak with dropped relations"
},
{
"msg_contents": "On Tue, Sep 01, 2020 at 01:25:27PM +0300, Anastasia Lubennikova wrote:\n> Yes, this version is good.\n\nThanks. I have added an extra comment for the case of RELKIND_INDEX\nwith REINDEXOPT_MISSING_OK while on it, as it was not really obvious\nwhy the parent relation needs to be locked (at least attempted to) at\nthis stage. And applied the change. Thanks for the review,\nAnastasia.\n--\nMichael",
"msg_date": "Wed, 2 Sep 2020 09:20:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX SCHEMA/DATABASE/SYSTEM weak with dropped relations"
},
{
"msg_contents": "> diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h\n> index 47d4c07306..23840bb8e6 100644\n> --- a/src/include/nodes/parsenodes.h\n> +++ b/src/include/nodes/parsenodes.h\n> @@ -3352,6 +3352,7 @@ typedef struct ConstraintsSetStmt\n> /* Reindex options */\n> #define REINDEXOPT_VERBOSE (1 << 0) /* print progress info */\n> #define REINDEXOPT_REPORT_PROGRESS (1 << 1) /* report pgstat progress */\n> +#define REINDEXOPT_MISSING_OK (2 << 1)\t/* skip missing relations */\n\nI think you probably intended to write: 1<<2\n\nEven though it's the same, someone is likely to be confused if they try to use\n3<<1 vs 1<<3.\n\nI noticed while resolving merge conflict.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 1 Sep 2020 21:41:48 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX SCHEMA/DATABASE/SYSTEM weak with dropped relations"
},
{
"msg_contents": "On Tue, Sep 01, 2020 at 09:41:48PM -0500, Justin Pryzby wrote:\n> I think you probably intended to write: 1<<2\n\nThanks, fixed.\n--\nMichael",
"msg_date": "Wed, 2 Sep 2020 14:58:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX SCHEMA/DATABASE/SYSTEM weak with dropped relations"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nRight now jsonb functions are treated as non-shippable by postgres_fdw \nand so predicates with them are not pushed down to foreign server:\n\ncreate table jt(content jsonb);\ncreate extension postgres_fdw;\ncreate server pg_fdw FOREIGN DATA WRAPPER postgres_fdw options(host \n'127.0.0.1', dbname 'postgres');\ncreate user mapping for current_user server pg_fdw options (user \n'postgres');\ncreate foreign table fjt(content jsonb) server pg_fdw options \n(table_name 'jt');\npostgres=# explain select * from fjt where jsonb_exists(content, 'some');\n QUERY PLAN\n--------------------------------------------------------------\n Foreign Scan on fjt (cost=100.00..157.50 rows=487 width=32)\n Filter: jsonb_exists(content, 'some'::text)\n\nIt is because of the following check in postgres_fdw:\n\n /*\n * If function's input collation is not derived from a \nforeign\n * Var, it can't be sent to remote.\n */\n if (fe->inputcollid == InvalidOid)\n /* OK, inputs are all noncollatable */ ;\n else if (inner_cxt.state != FDW_COLLATE_SAFE ||\n fe->inputcollid != inner_cxt.collation)\n return false;\n\nIn my case\n(gdb) p fe->inputcollid\n$1 = 100\n(gdb) p inner_cxt.collation\n$3 = 0\n(gdb) p inner_cxt.state\n$4 = FDW_COLLATE_NONE\n\n\nI wonder if there is some way of making postgres_fdw to push this this \nfunction to foreign server?\nMay be this check should be changed to:\n\n if (fe->inputcollid == InvalidOid || inner_cxt.state == \nFDW_COLLATE_NONE)\n /* OK, inputs are all noncollatable */ ;\n\n\n\n",
"msg_date": "Thu, 13 Aug 2020 18:24:37 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "jsonb, collection & postgres_fdw"
},
{
"msg_contents": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n> Right now jsonb functions are treated as non-shippable by postgres_fdw \n> and so predicates with them are not pushed down to foreign server:\n\nYeah, that's kind of annoying, but breaking the collation check\nis not an acceptable fix. And what you're proposing *does* break it.\nThe issue here is that the function's input collation is coming from\nthe default collation applied to the text constant, and we can't assume\nthat that will be the same on the remote side.\n\nIn reality, of course, jsonb_exists doesn't care about its input collation\n--- but postgres_fdw has no way to know that. I don't see any easy way\naround that.\n\nOne idea that would probably work in a lot of postgres_fdw usage scenarios\nis to have a foreign-server-level flag that says \"all the collations on\nthat server behave the same as the local ones, and the default collation\nis the same too\", and then we just skip the collation checking altogether.\nBut I'm a bit worried that if someone mistakenly sets that flag, the\nmisbehavior will be very hard to detect.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Aug 2020 13:00:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: jsonb, collection & postgres_fdw"
},
{
    "msg_contents": "\n\nOn 13.08.2020 20:00, Tom Lane wrote:\n> Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n>> Right now jsonb functions are treated as non-shippable by postgres_fdw\n>> and so predicates with them are not pushed down to foreign server:\n> Yeah, that's kind of annoying, but breaking the collation check\n> is not an acceptable fix. And what you're proposing *does* break it.\n> The issue here is that the function's input collation is coming from\n> the default collation applied to the text constant, and we can't assume\n> that that will be the same on the remote side.\n>\n> In reality, of course, jsonb_exists doesn't care about its input collation\n> --- but postgres_fdw has no way to know that. I don't see any easy way\n> around that.\n>\n> One idea that would probably work in a lot of postgres_fdw usage scenarios\n> is to have a foreign-server-level flag that says \"all the collations on\n> that server behave the same as the local ones, and the default collation\n> is the same too\", and then we just skip the collation checking altogether.\n> But I'm a bit worried that if someone mistakenly sets that flag, the\n> misbehavior will be very hard to detect.\n>\n> \t\t\tregards, tom lane\nThank you for clarification.\nAnd sorry for mistyping in topic (there should be \"collation\" instead of \n\"collection\").\nActually I do not know much about handling collations in Postgres and \nparticularly in postgres_fdw.\nCan you (or somebody else) provide more information about this fragment \nof code:\n /*\n * If function's input collation is not derived from a \nforeign\n * Var, it can't be sent to remote.\n */\n if (fe->inputcollid == InvalidOid)\n /* OK, inputs are all noncollatable */ ;\n else if (inner_cxt.state != FDW_COLLATE_SAFE ||\n fe->inputcollid != inner_cxt.collation)\n return false;\n\nSo we have function call expression which arguments have associated \ncollation,\nbut function itself is collation-neutral: funccollid = 0\nWhy it is not safe to push this function call to the remote server?\nWhy it breaks collation check?\nIf there are some unsafe operations with collations during argument \nevaluation, then\nwe detect it while recursive processing of arguments.\n\nI agree that my proposed fix is not correct.\nBut what about this check:\n\n if (fe->inputcollid == InvalidOid)\n /* OK, inputs are all noncollatable */ ;\n else if (fe->funccollid == InvalidOid)\n /* OK, function is noncollatable */ ;\n\nOr funccollid=0 doesn't mean that collations of function arguments do \nnot affect function behavior?\n\n\n\n",
"msg_date": "Thu, 13 Aug 2020 20:46:31 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: jsonb, collection & postgres_fdw"
},
{
"msg_contents": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n> Or funccollid=0 doesn't mean that collations of function arguments do \n> not affect function behavior?\n\nNo, it does not. As I said already, there is no way to tell from outside\na function whether it pays attention to collation or not. funccollid\nis the collation to ascribe to the function's *output*, but that's always\nzero for a non-collatable output type such as boolean. An example\nis text_lt(), which returns boolean but surely does depend on the input\ncollation. We don't really have any way to distinguish between that and\njsonb_exists().\n\nIn hindsight, it was probably a bad idea not to have a way to mark whether\nfunctions care about collation. I don't know if it'd be practical to\nretrofit such a marker now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Aug 2020 14:04:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: jsonb, collection & postgres_fdw"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 8:54 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n>\n> Right now jsonb functions are treated as non-shippable by postgres_fdw\n> and so predicates with them are not pushed down to foreign server:\n>\n> I wonder if there is some way of making postgres_fdw to push this this\n> function to foreign server?\n> May be this check should be changed to:\n>\n> if (fe->inputcollid == InvalidOid || inner_cxt.state ==\n> FDW_COLLATE_NONE)\n> /* OK, inputs are all noncollatable */ ;\n>\n\nI think, in general, we may want to push the some of the local\nfunctions that may filter out tuples/rows to remote backend to reduce\nthe data transfer(assuming collation and other settings are similar to\nthat of the local backend), but definitely, not this way. One possible\nissue could be that, what if these functions are supported/installed\non the local server, but not on the remote? May be because the remote\npostgres server version is different than that of the local? Is there\na version check between local and remote servers in postgres_fdw?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 14 Aug 2020 12:10:11 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb, collection & postgres_fdw"
},
{
    "msg_contents": "\n\nOn 14.08.2020 09:40, Bharath Rupireddy wrote:\n> On Thu, Aug 13, 2020 at 8:54 PM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> Right now jsonb functions are treated as non-shippable by postgres_fdw\n>> and so predicates with them are not pushed down to foreign server:\n>>\n>> I wonder if there is some way of making postgres_fdw to push this this\n>> function to foreign server?\n>> May be this check should be changed to:\n>>\n>> if (fe->inputcollid == InvalidOid || inner_cxt.state ==\n>> FDW_COLLATE_NONE)\n>> /* OK, inputs are all noncollatable */ ;\n>>\n> I think, in general, we may want to push the some of the local\n> functions that may filter out tuples/rows to remote backend to reduce\n> the data transfer(assuming collation and other settings are similar to\n> that of the local backend), but definitely, not this way. One possible\n> issue could be that, what if these functions are supported/installed\n> on the local server, but not on the remote? May be because the remote\n> postgres server version is different than that of the local? Is there\n> a version check between local and remote servers in postgres_fdw?\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n\nRight now postgres_fdw treat as shippable only builtin functions or \nfunctions from extensions explicitly specified as shippable extensions \nin parameters of this FDW server. So I do no see a problem here. Yes, \nforeign server may have different version of Postgres which doesn't have\nthis built-in function or its profile is different. It can happen if \npostgres_fdw is used to connect two different servers which are \nmaintained independently. But in most cases I think, postgres_fdw is \nused to organize some kind of cluster. In this case all nodes are \nidentical (hardware, OS, postgres version) and performance is very \ncritical (because scalability - of one of the goal of replacing single \nnode with cluster).\nThis is why push down of predicates is very critical in this case.\n\nI still do not completely understand current criteria of shippable \nfunctions.\nI understood Tom's explanation, but:\n\npostgres=# create table t1(t text collate \"C\");\nCREATE TABLE\npostgres=# create foreign table ft1(t text collate \"ru_RU\") server \npg_fdw options (table_name 't1');\nCREATE FOREIGN TABLE\npostgres=# explain select * from ft1 where lower(t)='some';\n QUERY PLAN\n------------------------------------------------------------\n Foreign Scan on ft1 (cost=100.00..132.07 rows=7 width=32)\n(1 row)\n\nlower(t) is pushed to remote server despite to the fact that \"t\" has \ndifferent collations at local and remote servers.\nAlso when initialize postgres database, you can specify default collation.\nI have not found any place in postgres_fdw which tries to check if \ndefault collation of remote and local servers are the same\nor specify collation explicitly when them are different.\n\n From my point of view, it will be nice to have flag in postgres_fdw \nserver indicating that foreign and remote servers are identical\nand treat all functions as shippable in this case (not only built-in \nones are belonging to explicitly specified shippable extensions).\nIt will simplify using postres_fdw in clusters and makes it more efficient.\n\n\n\n\n",
"msg_date": "Fri, 14 Aug 2020 10:16:31 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: jsonb, collection & postgres_fdw"
},
{
"msg_contents": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n> I still do not completely understand current criteria of shippable \n> functions.\n> I understood Tom's explanation, but:\n\n> postgres=# create table t1(t text collate \"C\");\n> CREATE TABLE\n> postgres=# create foreign table ft1(t text collate \"ru_RU\") server \n> pg_fdw options (table_name 't1');\n> CREATE FOREIGN TABLE\n> postgres=# explain select * from ft1 where lower(t)='some';\n> QUERY PLAN\n> ------------------------------------------------------------\n> Foreign Scan on ft1 (cost=100.00..132.07 rows=7 width=32)\n> (1 row)\n\n> lower(t) is pushed to remote server despite to the fact that \"t\" has \n> different collations at local and remote servers.\n\nWell, that's the case because you lied while creating the foreign\ntable. We have no practical way to cross-check whether the foreign\ntable's declaration is an accurate representation of the remote table,\nso we just take it on faith that it is.\n\nThe problem that the collation check is trying to solve is that we\ncan't safely push COLLATE clauses to the remote server, because it\nmay not have the same set of collation names as the local server.\nSo we can only push clauses whose collation is entirely derivable\nfrom the table column(s) they use. And then, per the above, we rely on\nthe user to make sure that the local and remote columns have equivalent\ncollations. (Which conceivably would have different names.)\n\n> From my point of view, it will be nice to have flag in postgres_fdw \n> server indicating that foreign and remote servers are identical\n> and treat all functions as shippable in this case (not only built-in \n> ones are belonging to explicitly specified shippable extensions).\n\nPerhaps, but not everyone has that use-case. I'd even argue that it's\na minority use-case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Aug 2020 10:54:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: jsonb, collection & postgres_fdw"
},
{
    "msg_contents": "On Fri, Aug 14, 2020 at 12:46 PM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n>\n> Right now postgres_fdw treat as shippable only builtin functions or\n> functions from extensions explicitly specified as shippable extensions\n> in parameters of this FDW server. So I do no see a problem here. Yes,\n> foreign server may have different version of Postgres which doesn't have\n> this built-in function or its profile is different. It can happen if\n> postgres_fdw is used to connect two different servers which are\n> maintained independently. But in most cases I think, postgres_fdw is\n> used to organize some kind of cluster. In this case all nodes are\n> identical (hardware, OS, postgres version) and performance is very\n> critical (because scalability - of one of the goal of replacing single\n> node with cluster).\n> This is why push down of predicates is very critical in this case.\n>\n\nAgree, push down of predicates(with functions) to the remote backend helps\na lot. But, is it safe to push all the functions? For instance, functions\nthat deal with time/time zones, volatile functions etc. I'm not exactly\nsure whether we will have some issues here. Since postgres_fdw can also be\nused for independently maintained postgres servers(may be with different\nversions), we must have a mechanism to know the compatibility.\n\n>\n> From my point of view, it will be nice to have flag in postgres_fdw\n> server indicating that foreign and remote servers are identical\n> and treat all functions as shippable in this case (not only built-in\n> ones are belonging to explicitly specified shippable extensions).\n> It will simplify using postres_fdw in clusters and makes it more\nefficient.\n>\n\nI think it's better not to have a flag for this. As we have to deal with\nthe compatibility not only at the server version level, but also at each\nfunction level. We could have something like a configuration file which\nallows the user to specify the list of functions that are safely pushable\nto remote in his/her own postgres_fdw setup, and let the postgres_fdw refer\nthis configuration file, while checking the pushability of the functions to\nremote. This way, the user has some control over what's pushed and what's\nnot. Of course, this pushability check can only happen after the mandatory\nchecks happening currently such as remote backend configuration settings\nsuch as collations etc.\n\nFeel free to correct me.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 17 Aug 2020 19:32:04 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb, collection & postgres_fdw"
},
{
    "msg_contents": "On Mon, Aug 17, 2020 at 7:32 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Aug 14, 2020 at 12:46 PM Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\n> >\n> > Right now postgres_fdw treat as shippable only builtin functions or\n> > functions from extensions explicitly specified as shippable extensions\n> > in parameters of this FDW server. So I do no see a problem here. Yes,\n> > foreign server may have different version of Postgres which doesn't have\n> > this built-in function or its profile is different. It can happen if\n> > postgres_fdw is used to connect two different servers which are\n> > maintained independently. But in most cases I think, postgres_fdw is\n> > used to organize some kind of cluster. In this case all nodes are\n> > identical (hardware, OS, postgres version) and performance is very\n> > critical (because scalability - of one of the goal of replacing single\n> > node with cluster).\n> > This is why push down of predicates is very critical in this case.\n> >\n>\n> Agree, push down of predicates(with functions) to the remote backend helps a lot. But, is it safe to push all the functions? For instance, functions that deal with time/time zones, volatile functions etc. I'm not exactly sure whether we will have some issues here. Since postgres_fdw can also be used for independently maintained postgres servers(may be with different versions), we must have a mechanism to know the compatibility.\n>\n> >\n> > From my point of view, it will be nice to have flag in postgres_fdw\n> > server indicating that foreign and remote servers are identical\n> > and treat all functions as shippable in this case (not only built-in\n> > ones are belonging to explicitly specified shippable extensions).\n> > It will simplify using postres_fdw in clusters and makes it more\nefficient.\n> >\n>\n> I think it's better not to have a flag for this. As we have to deal with the compatibility not only at the server version level, but also at each function level. We could have something like a configuration file which allows the user to specify the list of functions that are safely pushable to remote in his/her own postgres_fdw setup, and let the postgres_fdw refer this configuration file, while checking the pushability of the functions to remote. This way, the user has some control over what's pushed and what's not. Of course, this pushability check can only happen after the mandatory checks happening currently such as remote backend configuration settings such as collations etc.\n\nI agree with most of this. We need a way for a user to tell us which\nfunction is safe to be executed on the foreign server (not just\npostgres_fdw, but other kinds of FDWs as well). But maintaining that\nas a configurable file and associating safety with an FDW isn't\nsufficient. We should maintain that as a catalog. A function may be\nsafe to push down based on the FDW (a given function always behaves in\nthe same way on any of the servers of an FDW as its peer locally), or\nmay be associated with a server (a function is available and behaves\nsame as its local peer on certain server/s but not all). Going further\na local function may map to a function with a different name on the\nremote server/fdw, so that same catalog may maintain the function\nmapping. An FDW may decide to cache relevant information, update the\ncatalog using IMPORT FOREIGN SCHEMA(or ROUTINE), or add some defaults\nwhen installing the extension.\n\nMore details are required to be worked out but here my initial thoughts on this.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 18 Aug 2020 17:36:35 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb, collection & postgres_fdw"
},
{
"msg_contents": "On Tue, 18 Aug 2020 at 17:36, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Mon, Aug 17, 2020 at 7:32 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Fri, Aug 14, 2020 at 12:46 PM Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> wrote:\n> > >\n> > > Right now postgres_fdw treat as shippable only builtin functions or\n> > > functions from extensions explicitly specified as shippable extensions\n> > > in parameters of this FDW server. So I do no see a problem here. Yes,\n> > > foreign server may have different version of Postgres which doesn't\n> have\n> > > this built-in function or its profile is different. It can happen if\n> > > postgres_fdw is used to connect two different servers which are\n> > > maintained independently. But in most cases I think, postgres_fdw is\n> > > used to organize some kind of cluster. In this case all nodes are\n> > > identical (hardware, OS, postgres version) and performance is very\n> > > critical (because scalability - of one of the goal of replacing single\n> > > node with cluster).\n> > > This is why push down of predicates is very critical in this case.\n> > >\n> >\n> > Agree, push down of predicates(with functions) to the remote backend\n> helps a lot. But, is it safe to push all the functions? For instance,\n> functions that deal with time/time zones, volatile functions etc. I'm not\n> exactly sure whether we will have some issues here. 
Since postgres_fdw can\n> also be used for independently maintained postgres servers(may be with\n> different versions), we must have a mechanism to know the compatibility.\n> >\n> > >\n> > > From my point of view, it will be nice to have flag in postgres_fdw\n> > > server indicating that foreign and remote servers are identical\n> > > and treat all functions as shippable in this case (not only built-in\n> > > ones are belonging to explicitly specified shippable extensions).\n> > > It will simplify using postres_fdw in clusters and makes it more\n> efficient.\n> > >\n> >\n> > I think it's better not to have a flag for this. As we have to deal with\n> the compatibility not only at the server version level, but also at each\n> function level. We could have something like a configuration file which\n> allows the user to specify the list of functions that are safely pushable\n> to remote in his/her own postgres_fdw setup, and let the postgres_fdw refer\n> this configuration file, while checking the pushability of the functions to\n> remote. This way, the user has some control over what's pushed and what's\n> not. Of course, this pushability check can only happen after the mandatory\n> checks happening currently such as remote backend configuration settings\n> such as collations etc.\n>\nI agree with most of this. We need a way for a user to tell us which\n> function is safe to be executed on the foreign server (not just\n> postgres_fdw, but other kinds of FDWs as well). But maintaining that\n> as a configurable file and associating safety with an FDW isn't\n> sufficient. We should maintain that as a catalog. A function may be\n> safe to push down based on the FDW (a given function always behaves in\n> the same way on any of the servers of an FDW as its peer locally), or\n> may be associated with a server (a function is available and behaves\n> same as its local peer on certain server/s but not all). 
Going further\n> a local function may map to a function with a different name on the\n> remote server/fdw, so that same catalog may maintain the function\n> mapping. An FDW may decide to cache relevant information, update the\n> catalog using IMPORT FOREIGN SCHEMA(or ROUTINE), or add some defaults\n> when installing the extension.\n>\n\nWhile looking at something else in postgres_fdw, I came across an old\nfeature which I had completely forgotten about. We allow extensions to be\nadded to server options. Any object belonging to these extensions,\nincluding functions, can be shipped to the foreign server. See\npostres_fdw/sql/postgres_fdw.sql for examples. This is an awkward way since\nthere is no way to control individual functions and a UDF has to be part of\nan extension to be shippable. It doesn't provide flexibility to map a local\nfunction to a remote one if their names differ. But we have something. May\nbe we could dig past conversations to understand why it was done this way.\n\n-- \nBest Wishes,\nAshutosh",
"msg_date": "Mon, 24 Aug 2020 18:43:36 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb, collection & postgres_fdw"
}
] |
[
{
"msg_contents": "I ran into this while running pg_upgrade from beta2 to beta3:\n$ time sudo -u postgres sh -c 'cd /var/lib/pgsql; /usr/pgsql-13/bin/pg_upgrade -b /usr/pgsql-13b2/bin/ -d ./13b2/data -D ./13/data --link'\n\treal 94m18.335s\n\nThis instances has many table partitions, and the production instance uses\ntablespaces. Some of our tables are wide. This VM is not idle, but does not\naccount for being 20x slower.\n\npg_dump -v --section=pre-data ts |wc\n\t1846659 4697507 59575253\n\treal 39m8.524s\n\nCompare v12 and v13:\n\n|$ /usr/pgsql-12/bin/initdb -D 12\n|$ /usr/pgsql-12/bin/postgres -D 12 -c shared_buffers=256MB -c max_locks_per_transaction=128 -c port=5678 -c unix_socket_directories=/tmp&\n|$ psql -h /tmp -p 5678 postgres </srv/cdrperfbackup/ts/2020-08-10/pg_dumpall-g \n|$ time pg_restore /srv/cdrperfbackup/ts/2020-08-10/pg_dump-section\\=pre-data -d postgres -h /tmp -p 5678 --no-tablespaces # --clean --if-exist\n|\treal 4m56.627s\n|$ time pg_dump --section=pre-data postgres -h /tmp -p 5678 |wc\n|\t1823612 4504584 58379810\n|\treal 1m4.452s\n\n|/usr/pgsql-13/bin/initdb -D 13\n|/usr/pgsql-13/bin/postgres -D 13 -c shared_buffers=256MB -c max_locks_per_transaction=128 -c port=5678 -c unix_socket_directories=/tmp&\n|psql -h /tmp -p 5678 postgres </srv/cdrperfbackup/ts/2020-08-10/pg_dumpall-g \n|time pg_restore /srv/cdrperfbackup/ts/2020-08-10/pg_dump-section\\=pre-data -d postgres -h /tmp -p 5678 --no-tablespaces # --clean --if-exist \n|\treal 6m49.964s\n|$ time pg_dump --section=pre-data postgres -h /tmp -p 5678 |wc\n|\t1823612 4504584 58379813\n|\treal 19m42.918s\n\nI'm trying to narrow this down, but I'd be very happy for suggestions.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 13 Aug 2020 17:48:23 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg_dump from v13 is slow"
},
{
"msg_contents": "On 2020-Aug-13, Justin Pryzby wrote:\n\n> I'm trying to narrow this down, but I'd be very happy for suggestions.\n\nMaybe you can time \"pg_dump --binary-upgrade\" to see if the slowness\ncomes from there.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 13 Aug 2020 19:28:32 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump from v13 is slow"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-13 17:48:23 -0500, Justin Pryzby wrote:\n> I'm trying to narrow this down, but I'd be very happy for suggestions.\n\nWould be worth knowing how much of the time pgbench is 100% CPU\nutilized, and how much of the time it is basically waiting for server\nside queries and largely idle.\n\nIf it's close to 100% busy for a significant part of that time, it'd be\nuseful to get a perf profile. If it's largely queries to the server that\nare the issue, logging those would be relevant.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 13 Aug 2020 16:30:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump from v13 is slow"
},
{
"msg_contents": "I can reproduce the issue with generated data:\n\npryzbyj=# SELECT format('create table t%s(i int)', i) FROM generate_series(1,9999)i;\n\\set ECHO errors\n\\set QUIET\n\\gexec\n\n$ time pg_dump --section=pre-data -d pryzbyj -h /tmp -p 5613 |wc \n 110015 240049 1577087\nreal 0m50.445s\n\n$ time pg_dump --section=pre-data -d pryzbyj -h /tmp -p 5612 |wc\n 110015 240049 1577084\nreal 0m11.203s\n\nOn Thu, Aug 13, 2020 at 04:30:14PM -0700, Andres Freund wrote:\n> Would be worth knowing how much of the time pgbench is 100% CPU\n> utilized, and how much of the time it is basically waiting for server\n> side queries and largely idle.\n\nGood question - I guess you mean pg_dump.\n\n$ command time -v pg_dump --section=pre-data -d pryzbyj -h /tmp -p 5612 |wc \n Command being timed: \"pg_dump --section=pre-data -d pryzbyj -h /tmp -p 5612\"\n User time (seconds): 0.65\n System time (seconds): 0.52\n Percent of CPU this job got: 9%\n Elapsed (wall clock) time (h:mm:ss or m:ss): 0:11.85\n\n$ command time -v pg_dump --section=pre-data -d pryzbyj -h /tmp -p 5613 |wc\n Command being timed: \"pg_dump --section=pre-data -d pryzbyj -h /tmp -p 5613\"\n User time (seconds): 0.79\n System time (seconds): 0.49\n Percent of CPU this job got: 2%\n Elapsed (wall clock) time (h:mm:ss or m:ss): 0:48.51\n\nSo v13 was 4.5x slower and it seems to be all on the server side.\n\nI looked queries like this:\ntime strace -ts999 -e sendto pg_dump --section=pre-data -d pryzbyj -h /tmp -p 5613 2>strace-13-3 |wc\ncut -c1-66 strace-13-3 |sort |uniq |less\n\nMost of the time is spent on these three queries:\n\n|12:58:11 sendto(3, \"Q\\0\\0\\3\\215SELECT\\na.attnum,\\na.attname,\\na.atttypmod,\\na.attstattarget,\\na.attstorage,\\nt.typstorage,\\na.attn\n|...\n|12:58:30 sendto(3, \"Q\\0\\0\\3\\215SELECT\\na.attnum,\\na.attname,\\na.atttypmod,\\na.attstattarget,\\na.attstorage,\\nt.typstorage,\\na.attn\n\n|12:58:32 sendto(3, \"Q\\0\\0\\1\\314SELECT oid, tableoid, pol.polname, pol.polcmd, 
pol.polpermissive, CASE WHEN pol.polroles = '{0}' TH\n|...\n|12:58:47 sendto(3, \"Q\\0\\0\\1\\314SELECT oid, tableoid, pol.polname, pol.polcmd, pol.polpermissive, CASE WHEN pol.polroles = '{0}' TH\n\n|12:58:49 sendto(3, \"Q\\0\\0\\0\\213SELECT pr.tableoid, pr.oid, p.pubname FROM pg_publication_rel pr, pg_publication p WHERE pr.prrelid\n|...\n|12:59:01 sendto(3, \"Q\\0\\0\\0\\213SELECT pr.tableoid, pr.oid, p.pubname FROM pg_publication_rel pr, pg_publication p WHERE pr.prrelid\n\nCompare with v12:\n\n|12:57:58 sendto(3, \"Q\\0\\0\\3\\215SELECT\\na.attnum,\\na.attname,\\na.atttypmod,\\na.attstattarget,\\na.attstorage,\\nt.typstorage,\\na.attn\n|...\n|12:58:03 sendto(3, \"Q\\0\\0\\3\\215SELECT\\na.attnum,\\na.attname,\\na.atttypmod,\\na.attstattarget,\\na.attstorage,\\nt.typstorage,\\na.attn\n\n|12:58:05 sendto(3, \"Q\\0\\0\\1\\314SELECT oid, tableoid, pol.polname, pol.polcmd, pol.polpermissive, CASE WHEN pol.polroles = '{0}' TH\n|...\n|12:58:07 sendto(3, \"Q\\0\\0\\1\\314SELECT oid, tableoid, pol.polname, pol.polcmd, pol.polpermissive, CASE WHEN pol.polroles = '{0}' TH\n\n|12:58:09 sendto(3, \"Q\\0\\0\\0\\213SELECT pr.tableoid, pr.oid, p.pubname FROM pg_publication_rel pr, pg_publication p WHERE pr.prrelid\n|...\n|12:58:11 sendto(3, \"Q\\0\\0\\0\\213SELECT pr.tableoid, pr.oid, p.pubname FROM pg_publication_rel pr, pg_publication p WHERE pr.prrelid\n\nThe first query regressed the worst.\n\n$ psql -h /tmp -Ap 5612 pryzbyj\npsql (13beta3, server 12.4)\npryzbyj=# explain analyze SELECT a.attnum,a.attname,a.atttypmod,a.attstattarget,a.attstorage,t.typstorage,a.attnotnull,a.atthasdef,a.attisdropped,a.attlen,a.attalign,a.attislocal,pg_catalog.format_type(t.oid, a.atttypmod) AS atttypname,a.attgenerated,CASE WHEN a.atthasmissing AND NOT a.attisdropped THEN a.attmissingval ELSE null END AS attmissingval,a.attidentity,pg_catalog.array_to_string(ARRAY(SELECT pg_catalog.quote_ident(option_name) || ' ' || pg_catalog.quote_literal(option_value) FROM 
pg_catalog.pg_options_to_table(attfdwoptions) ORDER BY option_name), E', ') AS attfdwoptions,CASE WHEN a.attcollation <> t.typcollation THEN a.attcollation ELSE 0 END AS attcollation,array_to_string(a.attoptions, ', ') AS attoptions FROM pg_catalog.pg_attribute a LEFT JOIN pg_catalog.pg_type t ON a.atttypid = t.oid WHERE a.attrelid = '191444'::pg_catalog.oid AND a.attnum > 0::pg_catalog.int2 ORDER BY a.attnum;\nQUERY PLAN\nNested Loop Left Join (cost=0.58..16.72 rows=1 width=217) (actual time=0.205..0.209 rows=1 loops=1)\n -> Index Scan using pg_attribute_relid_attnum_index on pg_attribute a (cost=0.29..8.31 rows=1 width=189) (actual time=0.030..0.032 rows=1 loops=1)\n Index Cond: ((attrelid = '191444'::oid) AND (attnum > '0'::smallint))\n -> Index Scan using pg_type_oid_index on pg_type t (cost=0.29..8.30 rows=1 width=9) (actual time=0.011..0.011 rows=1 loops=1)\n Index Cond: (oid = a.atttypid)\n SubPlan 1\n -> Sort (cost=0.09..0.09 rows=3 width=64) (actual time=0.119..0.119 rows=0 loops=1)\n Sort Key: pg_options_to_table.option_name\n Sort Method: quicksort Memory: 25kB\n -> Function Scan on pg_options_to_table (cost=0.00..0.06 rows=3 width=64) (actual time=0.010..0.010 rows=0 loops=1)\nPlanning Time: 1.702 ms\nExecution Time: 0.422 ms\n\n$ psql -h /tmp -Ap 5613 pryzbyj\npsql (13beta3)\npryzbyj=# explain analyze SELECT a.attnum,a.attname,a.atttypmod,a.attstattarget,a.attstorage,t.typstorage,a.attnotnull,a.atthasdef,a.attisdropped,a.attlen,a.attalign,a.attislocal,pg_catalog.format_type(t.oid, a.atttypmod) AS atttypname,a.attgenerated,CASE WHEN a.atthasmissing AND NOT a.attisdropped THEN a.attmissingval ELSE null END AS attmissingval,a.attidentity,pg_catalog.array_to_string(ARRAY(SELECT pg_catalog.quote_ident(option_name) || ' ' || pg_catalog.quote_literal(option_value) FROM pg_catalog.pg_options_to_table(attfdwoptions) ORDER BY option_name), E', ') AS attfdwoptions,CASE WHEN a.attcollation <> t.typcollation THEN a.attcollation ELSE 0 END AS 
attcollation,array_to_string(a.attoptions, ', ') AS attoptions FROM pg_catalog.pg_attribute a LEFT JOIN pg_catalog.pg_type t ON a.atttypid = t.oid WHERE a.attrelid = '164518'::pg_catalog.oid AND a.attnum > 0::pg_catalog.int2 ORDER BY a.attnum;\nQUERY PLAN\nNested Loop Left Join (cost=0.58..16.72 rows=1 width=217) (actual time=0.134..0.139 rows=1 loops=1)\n -> Index Scan using pg_attribute_relid_attnum_index on pg_attribute a (cost=0.29..8.31 rows=1 width=189) (actual time=0.028..0.030 rows=1 loops=1)\n Index Cond: ((attrelid = '164518'::oid) AND (attnum > '0'::smallint))\n -> Index Scan using pg_type_oid_index on pg_type t (cost=0.29..8.30 rows=1 width=9) (actual time=0.008..0.008 rows=1 loops=1)\n Index Cond: (oid = a.atttypid)\n SubPlan 1\n -> Sort (cost=0.09..0.09 rows=3 width=64) (actual time=0.065..0.065 rows=0 loops=1)\n Sort Key: pg_options_to_table.option_name\n Sort Method: quicksort Memory: 25kB\n -> Function Scan on pg_options_to_table (cost=0.00..0.06 rows=3 width=64) (actual time=0.005..0.005 rows=0 loops=1)\nPlanning Time: 1.457 ms\nExecution Time: 0.431 ms\n\nI don't know if it's any issue, but I found that pg12 can process \"null\nstatements\" almost 2x as fast:\n\n$ time for a in `seq 1 9999`; do echo ';'; done |psql -h /tmp -p 5613 postgres \n\treal 0m0.745s\n$ time for a in `seq 1 9999`; do echo ';'; done |psql -h /tmp -p 5612 postgres \n\treal 0m0.444s\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 13 Aug 2020 19:47:10 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump from v13 is slow"
},
{
"msg_contents": "Hmm, I wonder if you're comparing an assert-enabled pg13 build to a\nnon-assert-enabled pg12 build, or something like that.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 13 Aug 2020 20:53:46 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump from v13 is slow"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 08:53:46PM -0400, Alvaro Herrera wrote:\n> Hmm, I wonder if you're comparing an assert-enabled pg13 build to a\n> non-assert-enabled pg12 build, or something like that.\n\nGreat question - I thought of it myself but then forgot to look..\n\n$ rpm -q postgresql1{2,3}-server\npostgresql12-server-12.4-1PGDG.rhel7.x86_64\npostgresql13-server-13-beta3_1PGDG.rhel7.x86_64\n\n$ /usr/pgsql-12/bin/pg_config |grep -o cassert || echo not found\nnot found\n$ /usr/pgsql-13/bin/pg_config |grep -o cassert || echo not found\ncassert\n\nIt looks like the beta packages are compiled with cassert, which makes sense.\n\nThanks and sorry for noise.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 13 Aug 2020 19:59:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump from v13 is slow"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I can reproduce the issue with generated data:\n> pryzbyj=# SELECT format('create table t%s(i int)', i) FROM generate_series(1,9999)i;\n\nHm, I tried this case and didn't really detect much runtime difference\nbetween v12 and HEAD.\n\n> I don't know if it's any issue, but I found that pg12 can process \"null\n> statements\" almost 2x as fast:\n\nNow I'm suspicious that you're comparing an assert-enabled v13 build\nto a non-assert-enabled v12 build. Check the output of\n\"pg_config --configure\" from each installation to see if they're\nconfigured alike.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Aug 2020 21:04:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump from v13 is slow"
}
] |
[
{
"msg_contents": "Hi,\n\nThe following sentence in high-availability.sgml is not true:\n\n The background writer is active during recovery and will perform\n restartpoints (similar to checkpoints on the primary) and normal block\n cleaning activities.\n\nI think this is an oversight of the commit 806a2ae in 2011; the\ncheckpointer process started to be responsible for creating\ncheckpoints.\n\nI've attached the patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 14 Aug 2020 16:53:54 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Fix an old description in high-availability.sgml"
},
{
"msg_contents": "On Fri, Aug 14, 2020 at 04:53:54PM +0900, Masahiko Sawada wrote:\n> The following sentence in high-availability.sgml is not true:\n> \n> The background writer is active during recovery and will perform\n> restartpoints (similar to checkpoints on the primary) and normal block\n> cleaning activities.\n> \n> I think this is an oversight of the commit 806a2ae in 2011; the\n> checkpointer process started to be responsible for creating\n> checkpoints.\n\nGood catch it is. Your phrasing looks good to me.\n--\nMichael",
"msg_date": "Fri, 14 Aug 2020 17:15:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix an old description in high-availability.sgml"
},
{
"msg_contents": "On Fri, Aug 14, 2020 at 05:15:20PM +0900, Michael Paquier wrote:\n> Good catch it is. Your phrasing looks good to me.\n\nFixed as b4f1639. Thanks.\n--\nMichael",
"msg_date": "Mon, 17 Aug 2020 10:32:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix an old description in high-availability.sgml"
},
{
"msg_contents": "On Mon, 17 Aug 2020 at 10:32, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Aug 14, 2020 at 05:15:20PM +0900, Michael Paquier wrote:\n> > Good catch it is. Your phrasing looks good to me.\n>\n> Fixed as b4f1639. Thanks.\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 17 Aug 2020 14:29:03 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix an old description in high-availability.sgml"
}
] |
[
{
"msg_contents": "While hacking on pg_rewind, this in pg_rewind's main() function caught \nmy eye:\n\n progress_report(true);\n printf(\"\\n\");\n\nIt is peculiar, because progress_report() uses fprintf(stderr, ...) for \nall its printing, and in fact the only other use of printf() in \npg_rewind is in printing the \"pg_rewind --help\" text.\n\nI think the idea here was to move to the next line, after \nprogress_report() has updated the progress line for the last time. It \nprobably also should not be printed, when \"--progress\" is not used.\n\nAttached is a patch to fix this, as well as a similar issue in \npg_checksums. pg_basebackup and pgbench also print progres reports like \nthis, but they seem correct to me.\n\n- Heikki",
"msg_date": "Fri, 14 Aug 2020 10:57:10 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Newline after --progress report"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> While hacking on pg_rewind, this in pg_rewind's main() function caught \n> my eye:\n\nGood catch.\n\n> Attached is a patch to fix this, as well as a similar issue in \n> pg_checksums. pg_basebackup and pgbench also print progres reports like \n> this, but they seem correct to me.\n\nI wonder whether it'd be better to push the responsibility for this\ninto progress_report(), by adding an additional parameter \"bool last\"\nor the like. Then the callers would not need such an unseemly amount\nof knowledge about what progress_report() is doing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Aug 2020 09:51:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Newline after --progress report"
},
{
"msg_contents": "On 14/08/2020 16:51, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> Attached is a patch to fix this, as well as a similar issue in\n>> pg_checksums. pg_basebackup and pgbench also print progres reports like\n>> this, but they seem correct to me.\n> \n> I wonder whether it'd be better to push the responsibility for this\n> into progress_report(), by adding an additional parameter \"bool last\"\n> or the like. Then the callers would not need such an unseemly amount\n> of knowledge about what progress_report() is doing.\n\nGood point. Pushed a patch along those lines.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 17 Aug 2020 10:14:38 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Newline after --progress report"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> Good point. Pushed a patch along those lines.\n\nUh ... you patched v12 but not v13?\n\nAlso, I'd recommend that you NOT do this:\n\n+ fprintf(stderr, (!finished && isatty(fileno(stderr))) ? \"\\r\" : \"\\n\");\n\nas it breaks printf format verification in many/most compilers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Aug 2020 09:59:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Newline after --progress report"
},
{
"msg_contents": "On 17/08/2020 16:59, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> Good point. Pushed a patch along those lines.\n> \n> Uh ... you patched v12 but not v13?\n\nDarn, I forgot it exists.\n\n> Also, I'd recommend that you NOT do this:\n> \n> + fprintf(stderr, (!finished && isatty(fileno(stderr))) ? \"\\r\" : \"\\n\");\n> \n> as it breaks printf format verification in many/most compilers.\n\nOk. I pushed the same commit to v12 as to other branches now, to keep \nthem in sync. I'll go fix that as a separate commit. Thanks!\n\n- Heikki\n\n\n",
"msg_date": "Mon, 17 Aug 2020 17:20:46 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Newline after --progress report"
}
] |
[
{
"msg_contents": "While hacking on pg_rewind, I noticed that commit and abort WAL records \nare never marked with the XLR_SPECIAL_REL_UPDATE flag. But if the record \ncontains \"dropped relfilenodes\", surely it should be?\n\nIt's harmless as far as the backend and all the programs in PostgreSQL \nrepository are concerned, but the point of XLR_SPECIAL_REL_UPDATE is to \naid external tools that try to track which files are modified. Attached \nis a patch to fix it.\n\nIt's always been like that, but I am not going backport, for fear of \nbreaking existing applications. If a program reads the WAL, and would \nactually need to do something with commit records dropping relations, \nthat seems like such a common scenario that the author should've thought \nabout it and handled it even without the flag reminding about it. Fixing \nit in master ought to be enough.\n\nThoughts?\n\n- Heikki",
"msg_date": "Fri, 14 Aug 2020 11:47:24 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Commit/abort WAL records with dropped rels missing\n XLR_SPECIAL_REL_UPDATE"
},
{
"msg_contents": "On Fri, Aug 14, 2020 at 2:17 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> While hacking on pg_rewind, I noticed that commit and abort WAL records\n> are never marked with the XLR_SPECIAL_REL_UPDATE flag. But if the record\n> contains \"dropped relfilenodes\", surely it should be?\n>\n\nRight.\n\n> It's harmless as far as the backend and all the programs in PostgreSQL\n> repository are concerned, but the point of XLR_SPECIAL_REL_UPDATE is to\n> aid external tools that try to track which files are modified. Attached\n> is a patch to fix it.\n>\n> It's always been like that, but I am not going backport, for fear of\n> breaking existing applications. If a program reads the WAL, and would\n> actually need to do something with commit records dropping relations,\n> that seems like such a common scenario that the author should've thought\n> about it and handled it even without the flag reminding about it. Fixing\n> it in master ought to be enough.\n>\n\n+1 for doing it in master only. Even if someone comes up with such a\nscenario for back-branches, we can revisit our decision to backport\nthis but like you, I also don't see any pressing need to do it now.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 15 Aug 2020 11:05:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit/abort WAL records with dropped rels missing\n XLR_SPECIAL_REL_UPDATE"
},
{
"msg_contents": "On Sat, Aug 15, 2020 at 11:05:43AM +0530, Amit Kapila wrote:\n> On Fri, Aug 14, 2020 at 2:17 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> It's always been like that, but I am not going backport, for fear of\n>> breaking existing applications. If a program reads the WAL, and would\n>> actually need to do something with commit records dropping relations,\n>> that seems like such a common scenario that the author should've thought\n>> about it and handled it even without the flag reminding about it. Fixing\n>> it in master ought to be enough.\n> \n> +1 for doing it in master only. Even if someone comes up with such a\n> scenario for back-branches, we can revisit our decision to backport\n> this but like you, I also don't see any pressing need to do it now.\n\n+1.\n--\nMichael",
"msg_date": "Mon, 17 Aug 2020 16:00:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Commit/abort WAL records with dropped rels missing\n XLR_SPECIAL_REL_UPDATE"
},
{
"msg_contents": "On 17/08/2020 10:00, Michael Paquier wrote:\n> On Sat, Aug 15, 2020 at 11:05:43AM +0530, Amit Kapila wrote:\n>> On Fri, Aug 14, 2020 at 2:17 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>> It's always been like that, but I am not going backport, for fear of\n>>> breaking existing applications. If a program reads the WAL, and would\n>>> actually need to do something with commit records dropping relations,\n>>> that seems like such a common scenario that the author should've thought\n>>> about it and handled it even without the flag reminding about it. Fixing\n>>> it in master ought to be enough.\n>>\n>> +1 for doing it in master only. Even if someone comes up with such a\n>> scenario for back-branches, we can revisit our decision to backport\n>> this but like you, I also don't see any pressing need to do it now.\n> \n> +1.\n\nPushed, thanks!\n\n- Heikki\n\n\n",
"msg_date": "Mon, 17 Aug 2020 10:55:13 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Commit/abort WAL records with dropped rels missing\n XLR_SPECIAL_REL_UPDATE"
}
] |
[
{
"msg_contents": "Hello\n\nI would like to implement a new data type next to char, number, varchar... for example a special \"Money\" type, but\nI don't want to use extensions and the Create type command. I want to implement it directly inside source code,\nbecause I want to implement my new type at lower level, in order to perform some more sophisticated functions after.\nJust as an example, help the optimizer in its decisions.\nHow should I proceed ? Is it an easy task ?\n\nThanks\nMohand",
"msg_date": "Fri, 14 Aug 2020 15:48:50 +0000",
"msg_from": "mohand oubelkacem makhoukhene <mohand-oubelkacem@outlook.com>",
"msg_from_op": true,
"msg_subject": "Implement a new data type"
},
{
"msg_contents": "mohand oubelkacem makhoukhene <mohand-oubelkacem@outlook.com> writes:\n> I would like to implement a new data type next to char, number, varchar... for example a special \"Money\" type, but\n> I don't want to use extensions and the Create type command. I want to implement it directly inside source code,\n> because I want to implement my new type at lower level, in order to perform some more sophisticated functions after.\n\nWhy, and exactly what do you think you'd accomplish?\n\nPostgres is meant to be an extensible system, which in general means that\nanything that can be done in core code could be done in an extension.\nAdmittedly there are lots of ways that we fall short of that goal, but\nnew data types tend not to be one of them. The only big difference\nbetween an extension datatype and a core one is that for a core type\nyou have to construct all the catalog entries \"by hand\" by making\nadditions to the include/catalog/*.dat files, which is tedious and\nerror-prone.\n\n> Just as an example, help the optimizer in its decisions.\n\nThe core optimizer is pretty darn data-type-agnostic, and should\nremain so IMO. There are callbacks, such as selectivity estimators\nand planner support functions, that contain specific knowledge of\nparticular functions and data types; and those facilities are available\nto extensions too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Aug 2020 12:09:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Implement a new data type"
}
] |
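The extension route Tom describes — adding a type without touching core — can be sketched with standard DDL. The names here are illustrative only (a real module would supply the C I/O functions referenced by `MODULE_PATHNAME`):

```sql
-- Shell type first, so the I/O functions can reference it.
CREATE TYPE money2;

CREATE FUNCTION money2_in(cstring) RETURNS money2
    AS 'MODULE_PATHNAME', 'money2_in' LANGUAGE C IMMUTABLE STRICT;
CREATE FUNCTION money2_out(money2) RETURNS cstring
    AS 'MODULE_PATHNAME', 'money2_out' LANGUAGE C IMMUTABLE STRICT;

-- Complete the type; a fixed-length pass-by-value representation
-- keeps this sketch simple.
CREATE TYPE money2 (
    INPUT = money2_in,
    OUTPUT = money2_out,
    INTERNALLENGTH = 8,
    PASSEDBYVALUE,
    ALIGNMENT = double
);
```

The optimizer knowledge Tom mentions is attached separately — e.g. a `SUPPORT` function on the type's functions, or `RESTRICT`/`JOIN` selectivity estimators on its operators — and those hooks are equally available to extensions.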
[
{
"msg_contents": "Yesterday's releases included some fixes meant to make it harder\nfor a malicious user to sabotage an extension installation/update\nscript. There are some things remaining to be done in the area,\nthough:\n\n1. We don't have a way to make things adequately secure for extensions\nthat depend on other extensions. As an example, suppose that Alice\ninstalls the \"cube\" extension into schema1 and then installs\n\"earthdistance\" into schema2. The earthdistance install script will\nrun with search_path \"schema2, schema1\". If schema2 is writable by\nBob, then Bob can create a type \"schema2.cube\" ahead of time, and\nthat will capture all the references to cube in the earthdistance\nscript. Bob can similarly capture the references to cube functions\nin earthdistance's domain constraints. A partial solution to this\nmight be to extend the @extschema@ notation to allow specification\nof a depended-on extension's schema. Inventing freely, we could\nimagine writing earthdistance's domain creation command like\n\nCREATE DOMAIN earth AS @extschema(cube)@.cube\n CONSTRAINT not_point check(@extschema(cube)@.cube_is_point(value))\n CONSTRAINT not_3d check(@extschema(cube)@.cube_dim(value) <= 3)\n CONSTRAINT on_surface check(abs(@extschema(cube)@.cube_distance(value,\n '(0)'::@extschema(cube)@.cube) /\n earth() - '1'::float8) < '10e-7'::float8);\n\nThis is pretty tedious and error-prone; but as long as one is careful\nto write only exact matches of function and operator argument types,\nit's safe against CVE-2018-1058-style attacks, even if both extensions\nare in publicly writable schemas.\n\nHowever, in itself this can only fix references that are resolved during\nexecution of the extension script. I don't see a good way to use the\nidea to make earthdistance's SQL functions fully secure. 
It won't do\nto write, say,\n\nCREATE FUNCTION ll_to_earth(float8, float8)\n...\nAS 'SELECT @extschema(cube)@.cube(...)';\n\nbecause this will not survive somebody doing \"ALTER EXTENSION cube SET\nSCHEMA schema3\". I don't have a proposal for what to do about that.\nAdmittedly, we already disclaim security if you run queries with a\nsearch_path that contains any untrusted schemas ... but it would be\nnice if extensions could be written that (in themselves) are safe\nregardless. Peter E's proposal for parsing SQL function bodies at\ncreation time could perhaps fix this for SQL functions, but we still\nhave the issue for other PLs.\n\n2. For the most part, the sorts of DDL commands you might use in an\nextension script are not very subject to CVE-2018-1058-style attacks\nbecause they specify relevant functions and operators exactly. As long\nas the objects you want are at the front of the search_path, there's no\nway for an attacker to inject another, better match. I did find one\nexception that is not fixed as of today: lookup_agg_function() allows\ninexact argument type matches for an aggregate's support functions,\nso it could be possible to capture a reference if the intended support\nfunction doesn't exactly match the aggregate's declared input and\ntransition data types. This doesn't occur in any contrib modules\nfortunately. I do not see a way to make this better without breaking\nthe ability to use polymorphic support functions in a non-polymorphic\naggregate, since those are inherently not exact matches.\n\n3. As Christoph Berg noted, the fixes in some extension update scripts\nmean that plpgsql has to be installed while those scripts run. How\nmuch do we care, and if we do, what should we do about it? I suggested\na band-aid fix of updating the base install scripts so that users don't\ntypically need to run the update scripts, but that's just a band-aid.\nMaybe we could extend SQL enough so that plpgsql isn't needed to do\nwhat those scripts have to do. 
I'd initially thought of doing the\nsearch path save-and-restore via\n\n\tSAVEPOINT s1;\n\tSET LOCAL search_path = pg_catalog, pg_temp;\n\t... protected code here ...\n\tRELEASE SAVEPOINT s1;\n\nbut this does not work because SET LOCAL persists to the end of the\nouter transaction. Maybe we could invent a variant that only lasts\nfor the current subtransaction.\n\n4. I noticed while testing that hstore--1.0--1.1.sql is completely\nuseless nowadays, so it might as well get dropped. It fails with a\nsyntax error in every still-supported server version, since \"=>\" is\nno longer a legal operator name. There's no way to load hstore 1.0\ninto a modern server because of that, either.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Aug 2020 14:50:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Loose ends after CVE-2020-14350 (extension installation hazards)"
},
{
"msg_contents": "On 08/14/20 14:50, Tom Lane wrote:\n> \tSAVEPOINT s1;\n> \tSET LOCAL search_path = pg_catalog, pg_temp;\n> \t... protected code here ...\n> \tRELEASE SAVEPOINT s1;\n> \n> but this does not work because SET LOCAL persists to the end of the\n> outer transaction. Maybe we could invent a variant that only lasts\n> for the current subtransaction.\n\nThis reminds me of the way the SQL standard overloads WITH to supply\nlexically-scoped settings of things, as well as CTEs, mentioned a while\nback. [1]\n\nWould this provide additional incentive to implement that syntax,\ngeneralized to support arbitrary GUCs and not just the handful of\nspecific settings the standard uses it for?\n\nRegards,\n-Chap\n\n\n\n[1] https://www.postgresql.org/message-id/5AAEAE0F.20006%40anastigmatix.net\n\n\n",
"msg_date": "Fri, 14 Aug 2020 15:07:51 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Loose ends after CVE-2020-14350 (extension installation hazards)"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 08/14/20 14:50, Tom Lane wrote:\n>> SAVEPOINT s1;\n>> SET LOCAL search_path = pg_catalog, pg_temp;\n>> ... protected code here ...\n>> RELEASE SAVEPOINT s1;\n>> but this does not work because SET LOCAL persists to the end of the\n>> outer transaction. Maybe we could invent a variant that only lasts\n>> for the current subtransaction.\n\n> This reminds me of the way the SQL standard overloads WITH to supply\n> lexically-scoped settings of things, as well as CTEs, mentioned a while\n> back. [1]\n> Would this provide additional incentive to implement that syntax,\n> generalized to support arbitrary GUCs and not just the handful of\n> specific settings the standard uses it for?\n\nHmm. I see a few things not to like about that:\n\n(1) It's hard to see how the WITH approach could work for GUCs\nthat need to take effect during raw parsing, such as the much-detested\nstandard_conforming_strings. Ideally we'd not have any such GUCs, for\nthe reasons explained at the top of gram.y, but I dunno that we'll ever\nget there.\n\n(2) We only have WITH for DML (SELECT/INSERT/UPDATE/DELETE), not utility\ncommands. Maybe that's enough for the cases at hand. Or maybe we'd be\nwilling to do whatever's needful to handle WITH attached to a utility\ncommand, but that could be a pretty large addition of work.\n\n(3) If the SQL syntax is really just \"WITH variable value [, ...]\"\nthen I'm afraid we're going to have a lot of parse-ambiguity problems\nwith wedging full SET syntax into that. The ability for the righthand\nside to be a comma-separated list is certainly going to go out the\nwindow, and we have various other special cases like \"SET TIME ZONE\"\nthat aren't going to work. Again, maybe we don't need a full solution,\nbut it seems like it's gonna be a kluge.\n\n(4) You'd need to repeat the WITH for each SQL command, IIUC. 
Could\nget tedious.\n\nSo maybe this is worth doing just for more standards compliance, but\nit doesn't really seem like a nicer solution than subtransaction-\nscoped SET.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Aug 2020 15:38:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Loose ends after CVE-2020-14350 (extension installation hazards)"
},
{
"msg_contents": "On 08/14/20 15:38, Tom Lane wrote:\n\n> (3) If the SQL syntax is really just \"WITH variable value [, ...]\"\n> then I'm afraid we're going to have a lot of parse-ambiguity problems\n> with wedging full SET syntax into that. The ability for the righthand\n\nThere is precedent in the SET command for having one general syntax\nusable for any GUC, and specialized ones for a few 'special' GUCs\n(search_path, client_encoding, timezone).\n\nMaybe WITH could be done the same way, inventing some less thorny syntax\nfor the general case\n\n WITH (foo = bar, baz), (quux = 42), XMLBINARY BASE64, a AS (SELECT...)\n\nand treating just the few like XMLBINARY that appear in the standard\nas having equivalent specialized productions?\n\nThe only examples of the syntax in the standard that are coming to mind\nright now are those I've seen in the SQL/XML part, but I feel like I have\nseen others, as if the committee kind of likes their WITH local-setting-\nof-something syntax, and additional examples may continue to appear.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 14 Aug 2020 16:19:18 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Loose ends after CVE-2020-14350 (extension installation hazards)"
},
{
"msg_contents": "On Fri, Aug 14, 2020 at 02:50:32PM -0400, Tom Lane wrote:\n> However, in itself this can only fix references that are resolved during\n> execution of the extension script. I don't see a good way to use the\n> idea to make earthdistance's SQL functions fully secure. It won't do\n> to write, say,\n> \n> CREATE FUNCTION ll_to_earth(float8, float8)\n> ...\n> AS 'SELECT @extschema(cube)@.cube(...)';\n> \n> because this will not survive somebody doing \"ALTER EXTENSION cube SET\n> SCHEMA schema3\". I don't have a proposal for what to do about that.\n\nAnother challenge is verifying that the body qualified everything. Via a\nsimple matter of programming, CREATE EXTENSION could verify that CREATE-time\ncode observes schema qualification rules. That would not extend to function\nbodies.\n\n> Admittedly, we already disclaim security if you run queries with a\n> search_path that contains any untrusted schemas ... but it would be\n> nice if extensions could be written that (in themselves) are safe\n> regardless.\n\nYes. Even when safety is not a concern, it's a quality problem for the\nfunctions to error out when search_path lacks some schema. As you know, we\nget recurring reports about that, e.g.\nhttps://www.postgresql.org/message-id/flat/16534-69f25077c45f34a5%40postgresql.org\n\n> Peter E's proposal for parsing SQL function bodies at\n> creation time could perhaps fix this for SQL functions, but we still\n> have the issue for other PLs.\n\nYes. The SQL-specific feature could do enough to let a future version of\nearthdistance be trusted.\n\n> 2. [...] lookup_agg_function() allows\n> inexact argument type matches for an aggregate's support functions,\n> so it could be possible to capture a reference if the intended support\n> function doesn't exactly match the aggregate's declared input and\n> transition data types.\n\nShould CREATE AGGREGATE support \"FINALFUNC = foo(sometype)\" input to constrain\nthe lookup? 
(It does accept the syntax, but \"sometype\" is unused and need not\neven denote an extant type.)\n\n> 3. As Christoph Berg noted, the fixes in some extension update scripts\n> mean that plpgsql has to be installed while those scripts run. How\n> much do we care, and if we do, what should we do about it?\n\nI propose not caring at all. Since we have dump/reload of \"REVOKE USAGE ON\nLANGUAGE plpgsql FROM PUBLIC\", extensions requiring plpgsql are fine. (It\ncould be a problem in a \"superuser = false\" extension, but core isn't doing\nthose.) Even saddling plpgsql with a pin dependency would be fine.\n\n> 4. I noticed while testing that hstore--1.0--1.1.sql is completely\n> useless nowadays, so it might as well get dropped. It fails with a\n> syntax error in every still-supported server version, since \"=>\" is\n> no longer a legal operator name. There's no way to load hstore 1.0\n> into a modern server because of that, either.\n\nThe chance of this getting reported from the field has been dropping for\nseveral years. It's negligible now.\n\nThanks,\nnm\n\n\n",
"msg_date": "Wed, 14 Oct 2020 01:46:15 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Loose ends after CVE-2020-14350 (extension installation hazards)"
}
] |
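For context on point 3 above, the search-path save-and-restore that currently forces plpgsql to be installed looks roughly like this (a sketch of the pattern, not the exact script text shipped in the fixed update scripts):

```sql
DO $$
DECLARE
    saved text := pg_catalog.current_setting('search_path');
BEGIN
    -- Pin the path while running statements that must not be captured
    -- by objects in attacker-writable schemas.
    PERFORM pg_catalog.set_config('search_path', 'pg_catalog, pg_temp', true);
    -- ... protected DDL, issued via EXECUTE ...
    PERFORM pg_catalog.set_config('search_path', saved, true);
END
$$;
```

A subtransaction-scoped `SET`, as proposed in the thread, would let plain SQL express the same pattern without a procedural language.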
[
{
"msg_contents": "Hi,\n\nI wonder what caused this[1] one-off failure to see tuples in clustered order:\n\ndiff -U3 /home/pgbfarm/buildroot/REL_13_STABLE/pgsql.build/src/test/regress/expected/cluster.out\n/home/pgbfarm/buildroot/REL_13_STABLE/pgsql.build/src/test/regress/results/cluster.out\n--- /home/pgbfarm/buildroot/REL_13_STABLE/pgsql.build/src/test/regress/expected/cluster.out\n2020-06-11 07:58:23.738084255 +0300\n+++ /home/pgbfarm/buildroot/REL_13_STABLE/pgsql.build/src/test/regress/results/cluster.out\n2020-07-05 02:35:06.396023210 +0300\n@@ -462,7 +462,8 @@\n where row(hundred, thousand, tenthous) <= row(lhundred, lthousand, ltenthous);\n hundred | lhundred | thousand | lthousand | tenthous | ltenthous\n ---------+----------+----------+-----------+----------+-----------\n-(0 rows)\n+ 0 | 99 | 0 | 999 | 0 | 9999\n+(1 row)\n\nI guess a synchronised scan could cause that, but I wouldn't expect one here.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2020-07-04%2023:10:22\n\n\n",
"msg_date": "Mon, 17 Aug 2020 13:03:59 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "One-off failure in \"cluster\" test"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I wonder what caused this[1] one-off failure to see tuples in clustered order:\n> ...\n> I guess a synchronised scan could cause that, but I wouldn't expect one here.\n\nLooking at its configuration, chipmunk uses\n\n 'extra_config' => {\n ...\n 'shared_buffers = 10MB',\n\nwhich I think means that clstr_4 would be large enough to trigger a\nsyncscan. Ordinarily that's not a problem since no other session would\nbe touching clstr_4 ... but I wonder whether (a) autovacuum had decided\nto look at clstr_4 and (b) syncscan can trigger on vacuum-driven scans.\n(a) would explain the non-reproducibility.\n\nI kinda think that (b), if true, is a bad idea and should be suppressed.\nautovacuum would typically fail to keep up with other syncscans thanks\nto vacuum delay settings, so letting it participate seems unhelpful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Aug 2020 21:20:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: One-off failure in \"cluster\" test"
},
{
"msg_contents": "On Mon, Aug 17, 2020 at 1:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I wonder what caused this[1] one-off failure to see tuples in clustered order:\n> > ...\n> > I guess a synchronised scan could cause that, but I wouldn't expect one here.\n>\n> Looking at its configuration, chipmunk uses\n>\n> 'extra_config' => {\n> ...\n> 'shared_buffers = 10MB',\n>\n> which I think means that clstr_4 would be large enough to trigger a\n> syncscan. Ordinarily that's not a problem since no other session would\n> be touching clstr_4 ... but I wonder whether (a) autovacuum had decided\n> to look at clstr_4 and (b) syncscan can trigger on vacuum-driven scans.\n> (a) would explain the non-reproducibility.\n>\n> I kinda think that (b), if true, is a bad idea and should be suppressed.\n> autovacuum would typically fail to keep up with other syncscans thanks\n> to vacuum delay settings, so letting it participate seems unhelpful.\n\nYeah, I wondered that as well and found my way to historical\ndiscussions concluding that autovacuum should not participate in sync\nscans. Now I'm wondering if either table AM refactoring or parallel\nvacuum refactoring might have inadvertently caused that to become a\npossibility in REL_13_STABLE.\n\n\n",
"msg_date": "Mon, 17 Aug 2020 13:27:57 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: One-off failure in \"cluster\" test"
},
{
"msg_contents": "On Mon, Aug 17, 2020 at 1:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Mon, Aug 17, 2020 at 1:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > I wonder what caused this[1] one-off failure to see tuples in clustered order:\n> > > ...\n> > > I guess a synchronised scan could cause that, but I wouldn't expect one here.\n> >\n> > Looking at its configuration, chipmunk uses\n> >\n> > 'extra_config' => {\n> > ...\n> > 'shared_buffers = 10MB',\n\nAhh, I see what's happening. You don't need a concurrent process\nscanning *your* table for scan order to be nondeterministic. The\npreceding CLUSTER command can leave the start block anywhere if its\ncall to ss_report_location() fails to acquire SyncScanLock\nconditionally. So I think we just need to disable that for this test,\nlike in the attached.",
"msg_date": "Mon, 17 Aug 2020 14:51:56 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: One-off failure in \"cluster\" test"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Ahh, I see what's happening. You don't need a concurrent process\n> scanning *your* table for scan order to be nondeterministic. The\n> preceding CLUSTER command can leave the start block anywhere if its\n> call to ss_report_location() fails to acquire SyncScanLock\n> conditionally. So I think we just need to disable that for this test,\n> like in the attached.\n\nHmm. I'm not terribly thrilled about band-aiding one unstable test\ncase at a time.\n\nheapgettup makes a point of ensuring that its scan end position\ngets reported:\n\n page++;\n if (page >= scan->rs_nblocks)\n page = 0;\n finished = (page == scan->rs_startblock) ||\n (scan->rs_numblocks != InvalidBlockNumber ? --scan->rs_numblocks == 0 : false);\n\n /*\n * Report our new scan position for synchronization purposes. We\n * don't do that when moving backwards, however. That would just\n * mess up any other forward-moving scanners.\n *\n * Note: we do this before checking for end of scan so that the\n * final state of the position hint is back at the start of the\n * rel. That's not strictly necessary, but otherwise when you run\n * the same query multiple times the starting position would shift\n * a little bit backwards on every invocation, which is confusing.\n * We don't guarantee any specific ordering in general, though.\n */\n if (scan->rs_base.rs_flags & SO_ALLOW_SYNC)\n ss_report_location(scan->rs_base.rs_rd, page);\n\nSeems like the conditional LWLockAcquire is pissing away that attempt\nat stability. Maybe we should adjust things so that the final\nlocation report isn't done conditionally.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Aug 2020 10:50:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: One-off failure in \"cluster\" test"
}
] |
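The band-aid patch Thomas attached amounts to pinning the scan start inside the test itself; something along these lines (illustrative reconstruction, not necessarily the committed wording):

```sql
-- Synchronized scans are what make the start block nondeterministic,
-- so switch them off around the ordering-sensitive check.
SET synchronize_seqscans TO off;
CLUSTER clstr_4 USING cluster_sort;
SELECT * FROM
  (SELECT hundred, lag(hundred) OVER () AS lhundred,
          thousand, lag(thousand) OVER () AS lthousand,
          tenthous, lag(tenthous) OVER () AS ltenthous
   FROM clstr_4) ss
WHERE row(hundred, thousand, tenthous) <= row(lhundred, lthousand, ltenthous);
RESET synchronize_seqscans;
```

Tom's counter-proposal instead makes heapgettup's final `ss_report_location()` call unconditional, fixing the instability for every caller rather than one test.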
[
{
"msg_contents": "Hi\n\nI am working on tracing support to plpgsql_check\n\nhttps://github.com/okbob/plpgsql_check\n\nI would like to print content of variables - and now, I have to go some\ndeeper than I would like. I need to separate between scalar, row, and\nrecord variables. PLpgSQL has code for it - but it is private.\n\nNow plpgsql debug API has an API for expression evaluation - and it is\nworking fine, but there is a need to know the necessary namespace.\nUnfortunately, the plpgsql variables have not assigned any info about\nrelated namespaces. It increases the necessary work for implementing\nconditional breakpoints or just printing all variables (and maintaining a\nlot of plpgsql code outside plpgsql core).\n\nSo my proposals:\n\n1. enhancing debug api about method\n\nchar *get_cstring_valule(PLpgSQL_variable *var, bool *isnull)\n\n2. enhancing PLpgSQL_var structure about related namespace \"struct\nPLpgSQL_nsitem *ns\",\nPLpgSQL_stmt *scope statement (statement that limits scope of variable's\nvisibility). For usage in debuggers, tracers can be nice to have a info\nabout kind of variable (function argument, local variable, automatic custom\nvariable (FORC), automatic internal variable (SQLERRM, FOUND, TG_OP, ...).\n\nComments, notes?\n\nRegards\n\nPavel",
"msg_date": "Mon, 17 Aug 2020 08:40:26 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "po 17. 8. 2020 v 8:40 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> I am working on tracing support to plpgsql_check\n>\n> https://github.com/okbob/plpgsql_check\n>\n> I would like to print content of variables - and now, I have to go some\n> deeper than I would like. I need to separate between scalar, row, and\n> record variables. PLpgSQL has code for it - but it is private.\n>\n> Now plpgsql debug API has an API for expression evaluation - and it is\n> working fine, but there is a need to know the necessary namespace.\n> Unfortunately, the plpgsql variables have not assigned any info about\n> related namespaces. It increases the necessary work for implementing\n> conditional breakpoints or just printing all variables (and maintaining a\n> lot of plpgsql code outside plpgsql core).\n>\n> So my proposals:\n>\n> 1. enhancing debug api about method\n>\n> char *get_cstring_valule(PLpgSQL_variable *var, bool *isnull)\n>\n> 2. enhancing PLpgSQL_var structure about related namespace \"struct\n> PLpgSQL_nsitem *ns\",\n> PLpgSQL_stmt *scope statement (statement that limits scope of variable's\n> visibility). For usage in debuggers, tracers can be nice to have a info\n> about kind of variable (function argument, local variable, automatic custom\n> variable (FORC), automatic internal variable (SQLERRM, FOUND, TG_OP, ...).\n>\n> Comments, notes?\n>\n\nThere are two patches\n\nThe first patch enhances dbg api by two functions - eval_datum and\ncast_value - it is an interface for functions exec_eval_datum and\ndo_cast_value. 
With this API it is easy to take a value of any PLpgSQL\nvariable (without the necessity to duplicate a lot of plpgsql's code), and\nit easy to transform this value to any expected type - usually it should\nprovide the cast to the text type.\n\nSecond patch injects pointer to related namespace to any plpgsql statement.\nReference to namespace is required for building custom expressions that can\nbe evaluated by assign_expr function. I would like to use it for\nconditional breakpoints or conditional tracing. Without this patch it is\ndifficult to detect the correct namespace and ensure the correct variable's\nvisibility.\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>",
"msg_date": "Tue, 18 Aug 2020 20:04:24 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "Hi\n\nrebase\n\nRegards\n\nPavel",
"msg_date": "Fri, 8 Jan 2021 10:11:24 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel",
"msg_date": "Sun, 7 Feb 2021 19:09:48 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "ne 7. 2. 2021 v 19:09 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> fresh rebase\n>\n\nonly rebase\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>",
"msg_date": "Mon, 31 May 2021 20:50:57 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "Hi Pavel,\n\n> I would like to print content of variables - and now, I have to go some\n> deeper than I would like. I need to separate between scalar, row, and\n> record variables. PLpgSQL has code for it - but it is private.\n> [...]\n\nThe patch seems OK, but I wonder - would it be possible to write a test on it?\n\n--\nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 16 Jul 2021 16:05:44 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "Hi\n\npá 16. 7. 2021 v 15:05 odesílatel Aleksander Alekseev <\naleksander@timescale.com> napsal:\n\n> Hi Pavel,\n>\n> > I would like to print content of variables - and now, I have to go some\n> > deeper than I would like. I need to separate between scalar, row, and\n> > record variables. PLpgSQL has code for it - but it is private.\n> > [...]\n>\n> The patch seems OK, but I wonder - would it be possible to write a test on\n> it?\n>\n\n Sure, it is possible - unfortunately - the size of this test will be\nsignificantly bigger than patch self.\n\nI'll try to write it some simply tracer, where this API can be used\n\n\n\n> --\n> Best regards,\n> Aleksander Alekseev\n>",
"msg_date": "Fri, 16 Jul 2021 18:40:38 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "Hi\n\npá 16. 7. 2021 v 18:40 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> pá 16. 7. 2021 v 15:05 odesílatel Aleksander Alekseev <\n> aleksander@timescale.com> napsal:\n>\n>> Hi Pavel,\n>>\n>> > I would like to print content of variables - and now, I have to go some\n>> > deeper than I would like. I need to separate between scalar, row, and\n>> > record variables. PLpgSQL has code for it - but it is private.\n>> > [...]\n>>\n>> The patch seems OK, but I wonder - would it be possible to write a test\n>> on it?\n>>\n>\n> Sure, it is possible - unfortunately - the size of this test will be\n> significantly bigger than patch self.\n>\n> I'll try to write it some simply tracer, where this API can be used\n>\n\nI am sending an enhanced patch about the regress test for plpgsql's debug\nAPI.\n\nRegards\n\nPavel\n\n\n>\n>\n>> --\n>> Best regards,\n>> Aleksander Alekseev\n>>\n>",
"msg_date": "Wed, 21 Jul 2021 22:23:29 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "st 21. 7. 2021 v 22:23 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> pá 16. 7. 2021 v 18:40 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>> Hi\n>>\n>> pá 16. 7. 2021 v 15:05 odesílatel Aleksander Alekseev <\n>> aleksander@timescale.com> napsal:\n>>\n>>> Hi Pavel,\n>>>\n>>> > I would like to print content of variables - and now, I have to go some\n>>> > deeper than I would like. I need to separate between scalar, row, and\n>>> > record variables. PLpgSQL has code for it - but it is private.\n>>> > [...]\n>>>\n>>> The patch seems OK, but I wonder - would it be possible to write a test\n>>> on it?\n>>>\n>>\n>> Sure, it is possible - unfortunately - the size of this test will be\n>> significantly bigger than patch self.\n>>\n>> I'll try to write it some simply tracer, where this API can be used\n>>\n>\n> I am sending an enhanced patch about the regress test for plpgsql's debug\n> API.\n>\n\nwith modified Makefile to force use option\n-I$(top_srcdir)/src/pl/plpgsql/src\n\noverride CPPFLAGS := $(CPPFLAGS) -I$(top_srcdir)/src/pl/plpgsql/src\n\n\n\n> Regards\n>\n> Pavel\n>\n>\n>>\n>>\n>>> --\n>>> Best regards,\n>>> Aleksander Alekseev\n>>>\n>>",
"msg_date": "Thu, 22 Jul 2021 06:12:57 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "Hi Pavel,\n\n>> I am sending an enhanced patch about the regress test for plpgsql's\ndebug API.\n\nThanks for the test! I noticed some little issues with formatting and\ntypos. The corrected patch is attached.\n\n> override CPPFLAGS := $(CPPFLAGS) -I$(top_srcdir)/src/pl/plpgsql/src\n\nYou probably already noticed, but for the record - AppVeyor doesn't seem to\nbe happy still [1]:\n\n```\nsrc/test/modules/test_dbgapi/test_dbgapi.c(17): fatal error C1083: Cannot\nopen include file: 'plpgsql.h': No such file or directory\n[C:\\projects\\postgresql\\test_dbgapi.vcxproj]\n```\n\n[1]:\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.141500\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 22 Jul 2021 15:54:08 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "čt 22. 7. 2021 v 14:54 odesílatel Aleksander Alekseev <\naleksander@timescale.com> napsal:\n\n> Hi Pavel,\n>\n> >> I am sending an enhanced patch about the regress test for plpgsql's\n> debug API.\n>\n> Thanks for the test! I noticed some little issues with formatting and\n> typos. The corrected patch is attached.\n>\n> > override CPPFLAGS := $(CPPFLAGS) -I$(top_srcdir)/src/pl/plpgsql/src\n>\n> You probably already noticed, but for the record - AppVeyor doesn't seem\n> to be happy still [1]:\n>\n> ```\n> src/test/modules/test_dbgapi/test_dbgapi.c(17): fatal error C1083: Cannot\n> open include file: 'plpgsql.h': No such file or directory\n> [C:\\projects\\postgresql\\test_dbgapi.vcxproj]\n> ```\n>\n> [1]:\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.141500\n>\n\nI know it. The attached patch tries to fix this issue.\n\nI merged your patch (thank you)\n\nRegards\n\nPavel\n\n\n\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>",
"msg_date": "Thu, 22 Jul 2021 18:17:01 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "Hi Pavel,\n\n> I know it. Attached patch try to fix this issue\n>\n> I merged you patch (thank you)\n\nThanks! I did some more minor changes, mostly in the comments. See the\nattached patch. Other than that it looks OK. I think it's Ready for\nCommitter now.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 23 Jul 2021 11:30:23 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "pá 23. 7. 2021 v 10:30 odesílatel Aleksander Alekseev <\naleksander@timescale.com> napsal:\n\n> Hi Pavel,\n>\n> > I know it. Attached patch try to fix this issue\n> >\n> > I merged you patch (thank you)\n>\n> Thanks! I did some more minor changes, mostly in the comments. See the\n> attached patch. Other than that it looks OK. I think it's Ready for\n> Committer now.\n>\n\nlooks well,\n\nthank you very much\n\nPavel\n\n\n> --\n> Best regards,\n> Aleksander Alekseev\n>",
"msg_date": "Fri, 23 Jul 2021 10:47:17 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "pá 23. 7. 2021 v 10:47 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> pá 23. 7. 2021 v 10:30 odesílatel Aleksander Alekseev <\n> aleksander@timescale.com> napsal:\n>\n>> Hi Pavel,\n>>\n>> > I know it. Attached patch try to fix this issue\n>> >\n>> > I merged you patch (thank you)\n>>\n>> Thanks! I did some more minor changes, mostly in the comments. See the\n>> attached patch. Other than that it looks OK. I think it's Ready for\n>> Committer now.\n>>\n>\n> looks well,\n>\n> thank you very much\n>\n> Pavel\n>\n\nrebase\n\nPavel\n\n\n>\n>> --\n>> Best regards,\n>> Aleksander Alekseev\n>>\n>",
"msg_date": "Wed, 28 Jul 2021 11:01:47 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "Hi\n\nst 28. 7. 2021 v 11:01 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> pá 23. 7. 2021 v 10:47 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>>\n>>\n>> pá 23. 7. 2021 v 10:30 odesílatel Aleksander Alekseev <\n>> aleksander@timescale.com> napsal:\n>>\n>>> Hi Pavel,\n>>>\n>>> > I know it. Attached patch try to fix this issue\n>>> >\n>>> > I merged you patch (thank you)\n>>>\n>>> Thanks! I did some more minor changes, mostly in the comments. See the\n>>> attached patch. Other than that it looks OK. I think it's Ready for\n>>> Committer now.\n>>>\n>>\n>> looks well,\n>>\n>> thank you very much\n>>\n>> Pavel\n>>\n>\n> rebase\n>\n\nunfortunately, previous patch that I sent was broken, so I am sending fixed\npatch and fresh rebase\n\nRegards\n\nPavel\n\n\n> Pavel\n>\n>\n>>\n>>> --\n>>> Best regards,\n>>> Aleksander Alekseev\n>>>\n>>",
"msg_date": "Sun, 22 Aug 2021 19:38:39 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "ne 22. 8. 2021 v 19:38 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> st 28. 7. 2021 v 11:01 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>>\n>>\n>> pá 23. 7. 2021 v 10:47 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n>> napsal:\n>>\n>>>\n>>>\n>>> pá 23. 7. 2021 v 10:30 odesílatel Aleksander Alekseev <\n>>> aleksander@timescale.com> napsal:\n>>>\n>>>> Hi Pavel,\n>>>>\n>>>> > I know it. Attached patch try to fix this issue\n>>>> >\n>>>> > I merged you patch (thank you)\n>>>>\n>>>> Thanks! I did some more minor changes, mostly in the comments. See the\n>>>> attached patch. Other than that it looks OK. I think it's Ready for\n>>>> Committer now.\n>>>>\n>>>\n>>> looks well,\n>>>\n>>> thank you very much\n>>>\n>>> Pavel\n>>>\n>>\n>> rebase\n>>\n>\n> unfortunately, previous patch that I sent was broken, so I am sending\n> fixed patch and fresh rebase\n>\n\nThis version set $contrib_extraincludes to fix windows build\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>> Pavel\n>>\n>>\n>>>\n>>>> --\n>>>> Best regards,\n>>>> Aleksander Alekseev\n>>>>\n>>>",
"msg_date": "Mon, 23 Aug 2021 07:15:07 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "It looks like this is -- like a lot of plpgsql patches -- having\ndifficulty catching the attention of reviewers and committers.\nAleksander asked for a test and Pavel put quite a bit of work into\nadding a good test case. I actually like that there's a test because\nit shows the API can be used effectively.\n\n From my quick skim of this code it does indeed look like it's ready to\ncommit. It's mostly pretty mechanical code to expose a couple fields\nso that a debugger can see them.\n\nPavel, are you planning to add a debugger to contrib using this? The\ntest example code looks like it would already be kind of useful even\nin this form.\n\n\n",
"msg_date": "Thu, 17 Mar 2022 15:58:10 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "Hi\n\nčt 17. 3. 2022 v 20:58 odesílatel Greg Stark <stark@mit.edu> napsal:\n\n> It looks like this is -- like a lot of plpgsql patches -- having\n> difficulty catching the attention of reviewers and committers.\n> Aleksander asked for a test and Pavel put quite a bit of work into\n> adding a good test case. I actually like that there's a test because\n> it shows the API can be used effectively.\n>\n> From my quick skim of this code it does indeed look like it's ready to\n> commit. It's mostly pretty mechanical code to expose a couple fields\n> so that a debugger can see them.\n>\n> Pavel, are you planning to add a debugger to contrib using this? The\n> test example code looks like it would already be kind of useful even\n> in this form.\n>\n\nI had a plan to use the new API in plpgsql_check\nhttps://github.com/okbob/plpgsql_check for an integrated tracer.\n\nThere is the pldebugger https://github.com/EnterpriseDB/pldebugger and I\nthink this extension can use this API well too. You can see - the PL\ndebugger has about 6000 lines, and more, there are some extensions for EDB.\nSo, unfortunately, it doesn't look like a good candidate for contrib.\nWriting a new debugger from scratch doesn't look like effective work. I am\nopen to any discussion about it. Maybe some form of more sophisticated\ntracer could be devised, but I think the better option is enhancing the\nwidely used pldebugger extension.\n\nRegards\n\nPavel",
"msg_date": "Thu, 17 Mar 2022 21:33:41 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> It looks like this is -- like a lot of plpgsql patches -- having\n> difficulty catching the attention of reviewers and committers.\n\nI was hoping that someone with more familiarity with pldebugger\nwould comment on the suitableness of this patch for their desires.\nBut nobody's stepped up, so I took a look through this. It looks\nlike there are several different things mashed into this patch:\n\n1. Expose exec_eval_datum() to plugins. OK; I see that pldebugger\nhas code that duplicates that functionality (and not terribly well).\n\n2. Expose do_cast_value() to plugins. Mostly OK, but shouldn't we\nexpose exec_cast_value() instead? Otherwise it's on the caller\nto make sure it doesn't ask for a no-op cast, which seems like a\nbad idea; not least because the example usage in get_string_value()\nfails to do so.\n\n3. Store relevant PLpgSQL_nsitem chain link in each PLpgSQL_stmt.\nThis makes me itch, for a number of reasons:\n* I was a bit astonished that it even works; I'd thought that the\nnsitem data structure is transient data thrown away when we finish\ncompiling. I see now that that's not so, but do we really want to\nnail down that that can't ever be improved?\n* This ties us forevermore to the present, very inefficient, nsitem\nlist data structure. Sooner or later somebody is going to want to\nimprove that linear search, and what then?\n* The space overhead seems nontrivial; many PLpgSQL_stmt nodes are\nnot very big.\n* The code implications are way more subtle than you would think\nfrom inspecting this totally-comment-free patch implementation.\nIn particular, the fact that the nsitem chain pointed to by a\nplpgsql_block is the right thing depends heavily on exactly where\nin the parse sequence we capture the value of plpgsql_ns_top().\nThat could be improved with a comment, perhaps.\n\nI think that using the PLpgSQL_nsitem chains to look up variables\nin a debugger is just the wrong thing. 
The right thing is to\ncrawl up the statement tree, and when you see a PLpgSQL_stmt_block\nor loop construct, examine the associated datums. I'll concede\nthat crawling *up* the tree is hard, as we only store down-links.\nNow a plugin could fix that by itself, by recursively traversing the\nstatement tree one time and recording parent relationships in its own\ndata structure (say, an array of parent-statement pointers indexed by\nstmtid). Or we could add parent links in the statement tree, though\nI remain concerned about the space cost. On the whole I prefer the\nfirst way, because (a) we don't pay the overhead when it's not needed,\nand (b) a plugin could use it even in existing release branches.\n\nBTW, crawling up the statement tree would also be a far better answer\nthan what's shown in the patch for locating surrounding for-loops.\n\nSo my inclination is to accept the additional function pointers\n(modulo pointing to exec_cast_value) but reject the nsitem additions.\n\nNot sure what to do with test_dbgapi. There's some value in exercising\nthe find_rendezvous_variable mechanism, but I'm dubious that that\njustifies a whole test module.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Mar 2022 15:09:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "Hi\n\nst 30. 3. 2022 v 21:09 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Greg Stark <stark@mit.edu> writes:\n> > It looks like this is -- like a lot of plpgsql patches -- having\n> > difficulty catching the attention of reviewers and committers.\n>\n> I was hoping that someone with more familiarity with pldebugger\n> would comment on the suitableness of this patch for their desires.\n> But nobody's stepped up, so I took a look through this. It looks\n> like there are several different things mashed into this patch:\n>\n> 1. Expose exec_eval_datum() to plugins. OK; I see that pldebugger\n> has code that duplicates that functionality (and not terribly well).\n>\n> 2. Expose do_cast_value() to plugins. Mostly OK, but shouldn't we\n> expose exec_cast_value() instead? Otherwise it's on the caller\n> to make sure it doesn't ask for a no-op cast, which seems like a\n> bad idea; not least because the example usage in get_string_value()\n> fails to do so.\n>\n\ngood idea, changed\n\n\n>\n> 3. Store relevant PLpgSQL_nsitem chain link in each PLpgSQL_stmt.\n> This makes me itch, for a number of reasons:\n> * I was a bit astonished that it even works; I'd thought that the\n> nsitem data structure is transient data thrown away when we finish\n> compiling. I see now that that's not so, but do we really want to\n> nail down that that can't ever be improved?\n> * This ties us forevermore to the present, very inefficient, nsitem\n> list data structure. 
Sooner or later somebody is going to want to\n> improve that linear search, and what then?\n> * The space overhead seems nontrivial; many PLpgSQL_stmt nodes are\n> not very big.\n> * The code implications are way more subtle than you would think\n> from inspecting this totally-comment-free patch implementation.\n> In particular, the fact that the nsitem chain pointed to by a\n> plpgsql_block is the right thing depends heavily on exactly where\n> in the parse sequence we capture the value of plpgsql_ns_top().\n> That could be improved with a comment, perhaps.\n>\n>\nI think that using the PLpgSQL_nsitem chains to look up variables\n> in a debugger is just the wrong thing. The right thing is to\n> crawl up the statement tree, and when you see a PLpgSQL_stmt_block\n> or loop construct, examine the associated datums. I'll concede\n> that crawling *up* the tree is hard, as we only store down-links.\n> Now a plugin could fix that by itself, by recursively traversing the\n> statement tree one time and recording parent relationships in its own\n> data structure (say, an array of parent-statement pointers indexed by\n> stmtid). Or we could add parent links in the statement tree, though\n> I remain concerned about the space cost. On the whole I prefer the\n> first way, because (a) we don't pay the overhead when it's not needed,\n> and (b) a plugin could use it even in existing release branches.\n>\n\nI removed this part\n\n\n>\n> BTW, crawling up the statement tree would also be a far better answer\n> than what's shown in the patch for locating surrounding for-loops.\n>\n> So my inclination is to accept the additional function pointers\n> (modulo pointing to exec_cast_value) but reject the nsitem additions.\n>\n> Not sure what to do with test_dbgapi. 
There's some value in exercising\n> the find_rendezvous_variable mechanism, but I'm dubious that that\n> justifies a whole test module.\n>\n\nI removed this test\n\nI am sending updated patch\n\nRegards\n\nPavel\n\n\n>\n> regards, tom lane\n>",
"msg_date": "Thu, 31 Mar 2022 10:01:02 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I am sending updated patch\n\nAfter studying the list of exposed functions for awhile, it seemed\nto me that we should also expose exec_assign_value. The new pointers\nallow a plugin to compute a value in Datum+isnull format, but then it\ncan't do much of anything with it: exec_assign_expr is a completely\ninconvenient API if what you want to do is put a specific Datum\nvalue into a variable. Adding exec_assign_value provides \"store\"\nand \"fetch\" APIs that are more or less inverses, which should be\neasier to work with.\n\nSo I did that and pushed it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 31 Mar 2022 17:12:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
},
{
"msg_contents": "čt 31. 3. 2022 v 23:12 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I am sending updated patch\n>\n> After studying the list of exposed functions for awhile, it seemed\n> to me that we should also expose exec_assign_value. The new pointers\n> allow a plugin to compute a value in Datum+isnull format, but then it\n> can't do much of anything with it: exec_assign_expr is a completely\n> inconvenient API if what you want to do is put a specific Datum\n> value into a variable. Adding exec_assign_value provides \"store\"\n> and \"fetch\" APIs that are more or less inverses, which should be\n> easier to work with.\n>\n> So I did that and pushed it.\n>\n\ngreat\n\nThank you\n\nPavel\n\n\n>\n> regards, tom lane\n>",
"msg_date": "Fri, 1 Apr 2022 06:51:16 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal: enhancing plpgsql debug API - returns text value of\n variable content"
}
] |
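The parent-pointer approach Tom Lane suggests in the review above — a plugin recording parent relationships in its own array indexed by stmtid, instead of the core adding nsitem or parent links to every PLpgSQL_stmt — can be sketched with a simplified mock of the statement tree. The struct and field names below are illustrative stand-ins, not the real plpgsql.h definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for a PLpgSQL statement node (not the real struct). */
typedef struct MockStmt
{
    int              stmtid;       /* 1-based id, as plpgsql assigns */
    int              nchildren;    /* number of nested statements */
    struct MockStmt *children[4];  /* down-links only, as in the core */
} MockStmt;

/*
 * One recursive pass over the tree records each statement's parent in an
 * array indexed by stmtid.  A debugger can then crawl *up* from any
 * statement to find enclosing blocks and loops, without the core paying
 * the space cost of a parent pointer in every node.
 */
static void
record_parents(MockStmt *stmt, MockStmt *parent, MockStmt **parents)
{
    parents[stmt->stmtid] = parent;
    for (int i = 0; i < stmt->nchildren; i++)
        record_parents(stmt->children[i], stmt, parents);
}
```

A plugin could build this array once per function (for example in its func_setup callback) and reuse it in every statement callback, so the overhead is paid only while a debugger is actually attached — which is point (a) of Tom's argument for doing it plugin-side.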
[
{
"msg_contents": "Hi\n\nplpgsql_check extension is almost complete now. This extension is available\non all environments and for all supported Postgres releases. It is probably\ntoo big to be part of contrib, but I think it can be referenced in the\nhttps://www.postgresql.org/docs/current/plpgsql-development-tips.html\nchapter.\n\nWhat do you think about it?\n\nRegards\n\nPavel",
"msg_date": "Mon, 17 Aug 2020 08:46:39 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "proposal - reference to plpgsql_check from plpgsql doc"
},
{
"msg_contents": "On Mon, Aug 17, 2020 at 8:47 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> plpgsql_check extension is almost complete now. This extension is\n> available on all environments and for all supported Postgres releases. It\n> is probably too big to be part of contrib, but I think so it can be\n> referenced in\n> https://www.postgresql.org/docs/current/plpgsql-development-tips.html\n> chapter.\n>\n> What do you think about it?\n>\n>\nWithout making any valuation on this particular tool, I think we should be\nvery very careful and restrictive about putting such links in the main\ndocumentation.\n\nThe appropriate location for such references are in the product catalog on\nthe website and on the wiki. (I'd be happy to have a link from the docs to\na generic \"pl/pgsql tips\" page on the wiki, though, if people would think\nthat helpful -- because that would be linking to a destination that we can\neasily update/fix should it go stale)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 17 Aug 2020 10:36:52 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: proposal - reference to plpgsql_check from plpgsql doc"
},
{
"msg_contents": "po 17. 8. 2020 v 10:37 odesílatel Magnus Hagander <magnus@hagander.net>\nnapsal:\n\n>\n>\n> On Mon, Aug 17, 2020 at 8:47 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> plpgsql_check extension is almost complete now. This extension is\n>> available on all environments and for all supported Postgres releases. It\n>> is probably too big to be part of contrib, but I think so it can be\n>> referenced in\n>> https://www.postgresql.org/docs/current/plpgsql-development-tips.html\n>> chapter.\n>>\n>> What do you think about it?\n>>\n>>\n> Without making any valuation on this particular tool, I think we should be\n> very very careful and restrictive about putting such links in the main\n> documentation.\n>\n> The appropriate location for such references are in the product catalog on\n> the website and on the wiki. (I'd be happy to have a link from the docs to\n> a generic \"pl/pgsql tips\" page on the wiki, though, if people would think\n> that helpful -- because that would be linking to a destination that we can\n> easily update/fix should it go stale)\n>\n\ngood idea\n\nPavel\n\n\n> --\n> Magnus Hagander\n> Me: https://www.hagander.net/ <http://www.hagander.net/>\n> Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n>",
"msg_date": "Mon, 17 Aug 2020 11:03:22 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal - reference to plpgsql_check from plpgsql doc"
},
{
"msg_contents": "po 17. 8. 2020 v 11:03 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> po 17. 8. 2020 v 10:37 odesílatel Magnus Hagander <magnus@hagander.net>\n> napsal:\n>\n>>\n>>\n>> On Mon, Aug 17, 2020 at 8:47 AM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>>> Hi\n>>>\n>>> plpgsql_check extension is almost complete now. This extension is\n>>> available on all environments and for all supported Postgres releases. It\n>>> is probably too big to be part of contrib, but I think so it can be\n>>> referenced in\n>>> https://www.postgresql.org/docs/current/plpgsql-development-tips.html\n>>> chapter.\n>>>\n>>> What do you think about it?\n>>>\n>>>\n>> Without making any valuation on this particular tool, I think we should\n>> be very very careful and restrictive about putting such links in the main\n>> documentation.\n>>\n>> The appropriate location for such references are in the product catalog\n>> on the website and on the wiki. (I'd be happy to have a link from the docs\n>> to a generic \"pl/pgsql tips\" page on the wiki, though, if people would\n>> think that helpful -- because that would be linking to a destination that\n>> we can easily update/fix should it go stale)\n>>\n>\n> good idea\n>\n\nI created this page\n\nhttps://wiki.postgresql.org/wiki/Tools_and_tips_for_develpment_in_PL/pgSQL_language\n\nNow, there is just a list of available tools. Please, can somebody check it\nand clean and fix my Czechisms in text?\n\nRegards\n\nPavel\n\n\n> Pavel\n>\n>\n>> --\n>> Magnus Hagander\n>> Me: https://www.hagander.net/ <http://www.hagander.net/>\n>> Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n>>\n>",
"msg_date": "Tue, 8 Sep 2020 17:21:00 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal - reference to plpgsql_check from plpgsql doc"
}
] |
[
{
"msg_contents": "Dear all\n\nIn MobilityDB\nhttps://github.com/MobilityDB/MobilityDB\nwe use range types extensively.\n\nIs there any possibility to make the function range_union_internal available\nfor use by other extensions? Otherwise we need to copy/paste it verbatim. For\nexample lines 114-153 in\nhttps://github.com/MobilityDB/MobilityDB/blob/develop/src/rangetypes_ext.c\n\nRegards\n\nEsteban",
"msg_date": "Mon, 17 Aug 2020 10:14:34 +0200",
"msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>",
"msg_from_op": true,
"msg_subject": "Making the function range_union_internal available to other\n extensions"
}
] |
[
{
"msg_contents": "Hi,\n\nWith [1] applied so that you can get crash recovery to be CPU bound\nwith a pgbench workload, we spend an awful lot of time in qsort(),\ncalled from compactify_tuples(). I tried replacing that with a\nspecialised sort, and I got my test crash recovery time from ~55.5s\ndown to ~49.5s quite consistently.\n\nI've attached a draft patch. The sort_utils.h thing (which I've\nproposed before in another context where it didn't turn out to be\nneeded) probably needs better naming, and a few more parameterisations\nso that it could entirely replace the existing copies of the algorithm\nrather than adding yet one more. The header also contains some more\nrelated algorithms that don't have a user right now; maybe I should\nremove them.\n\nWhile writing this email, I checked the archives and discovered that a\ncouple of other people have complained about this hot spot before and\nproposed faster sorts already[2][3], and then there was a wide ranging\ndiscussion of various options which ultimately seemed to conclude that\nwe should do what I'm now proposing ... and then it stalled. The\npresent work is independent; I wrote this for some other sorting\nproblem, and then tried it out here when perf told me that it was the\nnext thing to fix to make recovery go faster. So I guess what I'm\nreally providing here is the compelling workload and numbers that were\nperhaps missing from that earlier thread, but I'm open to other\nsolutions too.\n\n[1] https://commitfest.postgresql.org/29/2669/\n[2] https://www.postgresql.org/message-id/flat/3c6ff1d3a2ff429ee80d0031e6c69cb7%40postgrespro.ru\n[3] https://www.postgresql.org/message-id/flat/546B89DE.7030906%40vmware.com",
"msg_date": "Mon, 17 Aug 2020 23:00:34 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Optimising compactify_tuples()"
},
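The specialised-sort idea in the message above can be sketched as follows. This is a minimal illustration, not code from the patch: `itemIdCompactData` and the field names are assumptions, and a trivial insertion sort stands in for the templated quicksort in `sort_utils.h`. The point it demonstrates is that a type-specialised sort avoids the comparator-function-pointer call that generic `qsort()` pays on every comparison.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the patch's per-tuple metadata; the real
 * struct and field names in bufpage.c may differ. */
typedef struct
{
    uint16_t offsetindex;   /* position in the page's line pointer array */
    int16_t  itemoff;       /* byte offset of the tuple within the page */
} itemIdCompactData;

/*
 * Sort by itemoff descending, so tuples can be shifted towards the end
 * of the page without clobbering tuples that have not yet been moved.
 * The comparison is inlined into the loop rather than reached through a
 * function pointer.  Insertion sort keeps the sketch short.
 */
static void
sort_itemoff_desc(itemIdCompactData *a, size_t n)
{
    for (size_t i = 1; i < n; i++)
    {
        itemIdCompactData key = a[i];
        size_t j = i;

        while (j > 0 && a[j - 1].itemoff < key.itemoff)
        {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}
```

A template header (as in the draft patch) would instead instantiate a full quicksort per element type, giving the same inlining benefit with O(n log n) behaviour.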
{
"msg_contents": "On Mon, Aug 17, 2020 at 4:01 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> While writing this email, I checked the archives and discovered that a\n> couple of other people have complained about this hot spot before and\n> proposed faster sorts already[2][3], and then there was a wide ranging\n> discussion of various options which ultimately seemed to conclude that\n> we should do what I'm now proposing ... and then it stalled.\n\nI saw compactify_tuples() feature prominently in profiles when testing\nthe deduplication patch. We changed the relevant nbtdedup.c logic to\nuse a temp page rather than incrementally rewriting the authoritative\npage in shared memory, which sidestepped the problem.\n\nI definitely think that we should have something like this, though.\nIt's a relatively easy win. There are plenty of workloads that spend\nlots of time on pruning.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 17 Aug 2020 11:52:49 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Tue, Aug 18, 2020 at 6:53 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I definitely think that we should have something like this, though.\n> It's a relatively easy win. There are plenty of workloads that spend\n> lots of time on pruning.\n\nAlright then, here's an attempt to flesh the idea out a bit more, and\nreplace the three other copies of qsort() while I'm at it.",
"msg_date": "Wed, 19 Aug 2020 23:41:41 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Wed, Aug 19, 2020 at 11:41 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Aug 18, 2020 at 6:53 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I definitely think that we should have something like this, though.\n> > It's a relatively easy win. There are plenty of workloads that spend\n> > lots of time on pruning.\n>\n> Alright then, here's an attempt to flesh the idea out a bit more, and\n> replace the three other copies of qsort() while I'm at it.\n\nI fixed up the copyright messages, and removed some stray bits of\nbuild scripting relating to the Perl-generated file. Added to\ncommitfest.",
"msg_date": "Thu, 20 Aug 2020 11:27:28 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Thu, 20 Aug 2020 at 11:28, Thomas Munro <thomas.munro@gmail.com> wrote:\n> I fixed up the copyright messages, and removed some stray bits of\n> build scripting relating to the Perl-generated file. Added to\n> commitfest.\n\nI'm starting to look at this. So far I've only just done a quick\nperformance test on it. With the workload I ran, using 0001+0002.\n\nThe test replayed ~2.2 GB of WAL. master took 148.581 seconds and\nmaster+0001+0002 took 115.588 seconds. That's about 28% faster. Pretty\nnice!\n\nI found running a lower heap fillfactor will cause quite a few more\nheap cleanups to occur. Perhaps that's one of the reasons the speedup\nI got was more than the 12% you reported.\n\nMore details of the test:\n\nSetup:\n\ndrowley@amd3990x:~$ cat recoverbench.sh\n#!/bin/bash\n\npg_ctl stop -D pgdata -m smart\npg_ctl start -D pgdata -l pg.log -w\npsql -c \"drop table if exists t1;\" postgres > /dev/null\npsql -c \"create table t1 (a int primary key, b int not null) with\n(fillfactor = 85);\" postgres > /dev/null\npsql -c \"insert into t1 select x,0 from generate_series(1,10000000)\nx;\" postgres > /dev/null\npsql -c \"drop table if exists log_wal;\" postgres > /dev/null\npsql -c \"create table log_wal (lsn pg_lsn not null);\" postgres > /dev/null\npsql -c \"insert into log_wal values(pg_current_wal_lsn());\" postgres > /dev/null\npgbench -n -f update.sql -t 60000 -c 200 -j 200 -M prepared postgres > /dev/null\npsql -c \"select 'Used ' ||\npg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), lsn)) || ' of\nWAL' from log_wal limit 1;\" postgres\npg_ctl stop -D pgdata -m immediate -w\necho Starting Postgres...\npg_ctl start -D pgdata -l pg.log\n\ndrowley@amd3990x:~$ cat update.sql\n\\set i random(1,10000000)\nupdate t1 set b = b+1 where a = :i;\n\nResults:\n\nmaster\n\nRecovery times are indicated in the postgresql log:\n\n2020-09-06 22:38:58.992 NZST [6487] LOG: redo starts at 3/16E4A988\n2020-09-06 22:41:27.570 NZST [6487] LOG: invalid record length 
at\n3/F67F8B48: wanted 24, got 0\n2020-09-06 22:41:27.573 NZST [6487] LOG: redo done at 3/F67F8B20\n\nrecovery duration = 00:02:28.581\n\ndrowley@amd3990x:~$ ./recoverbench.sh\nwaiting for server to shut down.... done\nserver stopped\nwaiting for server to start.... done\nserver started\n ?column?\n---------------------\n Used 2333 MB of WAL\n(1 row)\n\nwaiting for server to shut down.... done\nserver stopped\nStarting Postgres...\n\nrecovery profile:\n 28.79% postgres postgres [.] pg_qsort\n 13.58% postgres postgres [.] itemoffcompare\n 12.27% postgres postgres [.] PageRepairFragmentation\n 8.26% postgres libc-2.31.so [.] 0x000000000018e48f\n 5.90% postgres postgres [.] swapfunc\n 4.86% postgres postgres [.] hash_search_with_hash_value\n 2.95% postgres postgres [.] XLogReadBufferExtended\n 1.83% postgres postgres [.] PinBuffer\n 1.80% postgres postgres [.] compactify_tuples\n 1.71% postgres postgres [.] med3\n 0.99% postgres postgres [.] hash_bytes\n 0.90% postgres libc-2.31.so [.] 0x000000000018e470\n 0.89% postgres postgres [.] StartupXLOG\n 0.84% postgres postgres [.] XLogReadRecord\n 0.72% postgres postgres [.] LWLockRelease\n 0.71% postgres postgres [.] PageGetHeapFreeSpace\n 0.61% postgres libc-2.31.so [.] 0x000000000018e499\n 0.50% postgres postgres [.] heap_xlog_update\n 0.50% postgres postgres [.] DecodeXLogRecord\n 0.50% postgres postgres [.] pg_comp_crc32c_sse42\n 0.45% postgres postgres [.] LWLockAttemptLock\n 0.40% postgres postgres [.] ReadBuffer_common\n 0.40% postgres [kernel.kallsyms] [k] copy_user_generic_string\n 0.36% postgres libc-2.31.so [.] 0x000000000018e49f\n 0.33% postgres postgres [.] SlruSelectLRUPage\n 0.32% postgres postgres [.] PageAddItemExtended\n 0.31% postgres postgres [.] 
ReadPageInternal\n\nPatched v2-0001 + v2-0002:\n\nRecovery times are indicated in the postgresql log:\n\n2020-09-06 22:54:25.532 NZST [13252] LOG: redo starts at 3/F67F8C70\n2020-09-06 22:56:21.120 NZST [13252] LOG: invalid record length at\n4/D633FCD0: wanted 24, got 0\n2020-09-06 22:56:21.120 NZST [13252] LOG: redo done at 4/D633FCA8\n\nrecovery duration = 00:01:55.588\n\n\ndrowley@amd3990x:~$ ./recoverbench.sh\nwaiting for server to shut down.... done\nserver stopped\nwaiting for server to start.... done\nserver started\n ?column?\n---------------------\n Used 2335 MB of WAL\n(1 row)\n\nwaiting for server to shut down.... done\nserver stopped\nStarting Postgres...\n\nrecovery profile:\n 32.29% postgres postgres [.] qsort_itemoff\n 17.73% postgres postgres [.] PageRepairFragmentation\n 10.98% postgres libc-2.31.so [.] 0x000000000018e48f\n 5.54% postgres postgres [.] hash_search_with_hash_value\n 3.60% postgres postgres [.] XLogReadBufferExtended\n 2.32% postgres postgres [.] compactify_tuples\n 2.14% postgres postgres [.] PinBuffer\n 1.39% postgres postgres [.] PageGetHeapFreeSpace\n 1.38% postgres postgres [.] hash_bytes\n 1.36% postgres postgres [.] qsort_itemoff_med3\n 0.94% postgres libc-2.31.so [.] 0x000000000018e499\n 0.89% postgres postgres [.] XLogReadRecord\n 0.74% postgres postgres [.] LWLockRelease\n 0.74% postgres postgres [.] DecodeXLogRecord\n 0.73% postgres postgres [.] heap_xlog_update\n 0.66% postgres postgres [.] LWLockAttemptLock\n 0.65% postgres libc-2.31.so [.] 0x000000000018e470\n 0.64% postgres postgres [.] pg_comp_crc32c_sse42\n 0.63% postgres postgres [.] StartupXLOG\n 0.61% postgres [kernel.kallsyms] [k] copy_user_generic_string\n 0.60% postgres postgres [.] PageAddItemExtended\n 0.60% postgres libc-2.31.so [.] 0x000000000018e49f\n 0.56% postgres libc-2.31.so [.] 0x000000000018e495\n 0.54% postgres postgres [.] 
ReadBuffer_common\n\n\nSettings:\nshared_buffers = 10GB\ncheckpoint_timeout = 1 hour\nmax_wal_size = 100GB\n\nHardware:\n\nAMD 3990x\nSamsung 970 EVO SSD\n64GB DDR4 3600MHz\n\nI'll spend some time looking at the code soon.\n\nDavid\n\n\n",
"msg_date": "Sun, 6 Sep 2020 23:37:40 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Sun, 6 Sep 2020 at 23:37, David Rowley <dgrowleyml@gmail.com> wrote:\n> The test replayed ~2.2 GB of WAL. master took 148.581 seconds and\n> master+0001+0002 took 115.588 seconds. That's about 28% faster. Pretty\n> nice!\n\nI was wondering today if we could just get rid of the sort in\ncompactify_tuples() completely. It seems to me that the existing sort\nis there just so that the memmove() is done in order of tuple at the\nend of the page first. We seem to be just shunting all the tuples to\nthe end of the page so we need to sort the line items in reverse\noffset so as not to overwrite memory for other tuples during the copy.\n\nI wondered if we could get around that just by having another buffer\nsomewhere and memcpy the tuples into that first then copy the tuples\nout that buffer back into the page. No need to worry about the order\nwe do that in as there's no chance to overwrite memory belonging to\nother tuples.\n\nDoing that gives me 79.166 seconds in the same recovery test. Or about\n46% faster, instead of 22% (I mistakenly wrote 28% yesterday)\n\nThe top of perf report says:\n\n 24.19% postgres postgres [.] PageRepairFragmentation\n 8.37% postgres postgres [.] hash_search_with_hash_value\n 7.40% postgres libc-2.31.so [.] 0x000000000018e74b\n 5.59% postgres libc-2.31.so [.] 0x000000000018e741\n 5.49% postgres postgres [.] XLogReadBufferExtended\n 4.05% postgres postgres [.] compactify_tuples\n 3.27% postgres postgres [.] PinBuffer\n 2.88% postgres libc-2.31.so [.] 0x000000000018e470\n 2.02% postgres postgres [.] hash_bytes\n\n(I'll need to figure out why libc's debug symbols are not working)\n\nI was thinking that there might be a crossover point to where this\nmethod becomes faster than the sort method. 
e.g sorting 1 tuple is\npretty cheap, but copying the memory for the entire tuple space might\nbe expensive as that includes the tuples we might be getting rid of.\nSo if we did go down that path we might need some heuristics to decide\nwhich method is likely best. Maybe that's based on the number of\ntuples, I'm not really sure. I've not made any attempt to try to give\nit a worst-case workload to see if there is a crossover point that's\nworth worrying about.\n\nThe attached patch is what I used to test this. It kinda goes and\nsticks a page-sized variable on the stack, which is not exactly ideal.\nI think we'd likely want to figure some other way to do that, but I\njust don't know what that would look like yet. I just put the attached\ntogether quickly to test out the idea.\n\n(I don't want to derail the sort improvements here. I happen to think\nthose are quite important improvements, so I'll continue to review\nthat patch still. Longer term, we might just end up with something\nslightly different for compactify_tuples)\n\nDavid",
"msg_date": "Mon, 7 Sep 2020 19:47:59 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
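The sort-free scheme described above can be sketched like this. It is an illustration only: `compact_item`, `PAGE_SIZE`, and the function are hypothetical names, not the actual bufpage.c code, and real pages would use `PGAlignedBlock` rather than a raw char array.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 8192              /* stand-in for BLCKSZ */

/* Illustrative item descriptor; not the exact struct from bufpage.c. */
typedef struct
{
    uint16_t itemoff;               /* current tuple offset in the page */
    uint16_t alignedlen;            /* MAXALIGN'd tuple length */
} compact_item;

/*
 * Compact the tuple area without sorting: copy each live tuple into a
 * scratch buffer at its old offset, then write the tuples back packed
 * against pd_special.  Because the second copy reads from the scratch
 * buffer, no copy can overwrite a tuple that still needs moving, so no
 * particular order is required.  Returns the new pd_upper.
 */
static uint16_t
compact_with_scratch(char *page, compact_item *items, int nitems,
                     uint16_t pd_special)
{
    char     scratch[PAGE_SIZE];
    uint16_t upper = pd_special;
    int      i;

    for (i = 0; i < nitems; i++)
        memcpy(scratch + items[i].itemoff,
               page + items[i].itemoff,
               items[i].alignedlen);

    for (i = 0; i < nitems; i++)
    {
        upper -= items[i].alignedlen;
        memcpy(page + upper,
               scratch + items[i].itemoff,
               items[i].alignedlen);
        items[i].itemoff = upper;   /* repoint the line item */
    }
    return upper;
}
```

The trade-off discussed in the thread is visible here: the per-tuple copies replace the O(n log n) sort with two O(n) passes over the live tuples.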
{
"msg_contents": "On Mon, Sep 7, 2020 at 7:48 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I was wondering today if we could just get rid of the sort in\n> compactify_tuples() completely. It seems to me that the existing sort\n> is there just so that the memmove() is done in order of tuple at the\n> end of the page first. We seem to be just shunting all the tuples to\n> the end of the page so we need to sort the line items in reverse\n> offset so as not to overwrite memory for other tuples during the copy.\n>\n> I wondered if we could get around that just by having another buffer\n> somewhere and memcpy the tuples into that first then copy the tuples\n> out that buffer back into the page. No need to worry about the order\n> we do that in as there's no chance to overwrite memory belonging to\n> other tuples.\n>\n> Doing that gives me 79.166 seconds in the same recovery test. Or about\n> 46% faster, instead of 22% (I mistakenly wrote 28% yesterday)\n\nWow.\n\nOne thought is that if we're going to copy everything out and back in\nagain, we might want to consider doing it in a\nmemory-prefetcher-friendly order. Would it be a good idea to\nrearrange the tuples to match line pointer order, so that the copying\nwork and also later sequential scans are in a forward direction? The\ncopying could also perhaps be done with single memcpy() for ranges of\nadjacent tuples. Another thought is that it might be possible to\nidentify some easy cases that it can handle with an alternative\nin-place shifting algorithm without having to go to the\ncopy-out-and-back-in path. For example, when the offset order already\nmatches line pointer order but some dead tuples just need to be\nsqueezed out by shifting ranges of adjacent tuples, and maybe some\nslightly more complicated cases, but nothing requiring hard work like\nsorting.\n\n> (I don't want to derail the sort improvements here. 
I happen to think\n> those are quite important improvements, so I'll continue to review\n> that patch still. Longer term, we might just end up with something\n> slightly different for compactify_tuples)\n\nYeah. Perhaps qsort specialisation needs to come back in a new thread\nwith a new use case.\n\n\n",
"msg_date": "Tue, 8 Sep 2020 12:07:51 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Tue, 8 Sep 2020 at 12:08, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> One thought is that if we're going to copy everything out and back in\n> again, we might want to consider doing it in a\n> memory-prefetcher-friendly order. Would it be a good idea to\n> rearrange the tuples to match line pointer order, so that the copying\n> work and also later sequential scans are in a forward direction?\n\nThat's an interesting idea but wouldn't that require both the copy to\nthe separate buffer *and* a qsort? That's the worst of both\nimplementations. We'd need some other data structure too in order to\nget the index of the sorted array by reverse lineitem point, which\nmight require an additional array and an additional sort.\n\n> The\n> copying could also perhaps be done with single memcpy() for ranges of\n> adjacent tuples.\n\nI wonder if the additional code required to check for that would be\ncheaper than the additional function call. If it was then it might be\nworth trying, but since the tuples can be in any random order then\nit's perhaps not likely to pay off that often. I'm not really sure\nhow often adjacent line items will also be neighbouring tuples for\npages we call compactify_tuples() for. It's certainly going to be\ncommon with INSERT only tables, but if we're calling\ncompactify_tuples() then it's not read-only.\n\n> Another thought is that it might be possible to\n> identify some easy cases that it can handle with an alternative\n> in-place shifting algorithm without having to go to the\n> copy-out-and-back-in path. For example, when the offset order already\n> matches line pointer order but some dead tuples just need to be\n> squeezed out by shifting ranges of adjacent tuples, and maybe some\n> slightly more complicated cases, but nothing requiring hard work like\n> sorting.\n\nIt's likely worth experimenting. 
The only thing is that the workload\nI'm using seems to end up with the tuples with line items not in the\nsame order as the tuple offset. So adding a precheck to check the\nordering will regress the test I'm doing. We'd need to see if there is\nany other workload that would keep the tuples more in order then\ndetermine how likely that is to occur in the real world.\n\n> > (I don't want to derail the sort improvements here. I happen to think\n> > those are quite important improvements, so I'll continue to review\n> > that patch still. Longer term, we might just end up with something\n> > slightly different for compactify_tuples)\n>\n> Yeah. Perhaps qsort specialisation needs to come back in a new thread\n> with a new use case.\n\nhmm, yeah, perhaps that's a better way given the subject here is about\ncompactify_tuples()\n\nDavid\n\n\n",
"msg_date": "Wed, 9 Sep 2020 03:47:05 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Mon, 7 Sep 2020 at 19:47, David Rowley <dgrowleyml@gmail.com> wrote:\n> I wondered if we could get around that just by having another buffer\n> somewhere and memcpy the tuples into that first then copy the tuples\n> out that buffer back into the page. No need to worry about the order\n> we do that in as there's no chance to overwrite memory belonging to\n> other tuples.\n>\n> Doing that gives me 79.166 seconds in the same recovery test. Or about\n> 46% faster, instead of 22% (I mistakenly wrote 28% yesterday)\n\nI did some more thinking about this and considered if there's a way to\njust get rid of the sorting version of compactify_tuples() completely.\nIn the version from yesterday, I fell back on the sort version for\nwhen more than half the tuples from the page were being pruned. I'd\nthought that in this case copying out *all* of the page from pd_upper\nup to the pd_special (the tuple portion of the page) would maybe be\nmore costly since that would include (needlessly) copying all the\npruned tuples too. The sort also becomes cheaper in that case since\nthe number of items to sort is less, hence I thought it was a good\nidea to keep the old version for some cases. However, I now think we\ncan just fix this by conditionally copying all tuples when in 1 big\nmemcpy when not many tuples have been pruned and when more tuples are\npruned we can just do individual memcpys into the separate buffer.\n\nI wrote a little .c program to try to figure out of there's some good\ncut off point to where one method becomes better than the other and I\nfind that generally if we're pruning away about 75% of tuples then\ndoing a memcpy() per non-pruned tuple is faster, otherwise, it seems\nbetter just to copy the entire tuple area of the page. 
See attached\ncompact_test.c\n\nI ran this and charted the cut off at (nitems < totaltups / 4) and\n(nitems < totaltups / 2), and nitems < 16)\n./compact_test 32 192\n./compact_test 64 96\n./compact_test 128 48\n./compact_test 256 24\n./compact_test 512 12\n\nThe / 4 one gives me the graphs with the smallest step when the method\nswitches. See attached 48_tuples_per_page.png for comparison.\n\nI've so far come up with the attached\ncompactify_tuples_dgr_v2.patch.txt. Thomas pointed out to me off-list\nthat using PGAlignedBlock is the general way to allocate a page-sized\ndata on the stack. I'm still classing this patch as PoC grade. I'll\nneed to look a bit harder at the correctness of it all.\n\nI did spend quite a bit of time trying to find a case where this is\nslower than master's version. I can't find a case where there's any\nnoticeable slowdown. Using the same script from [1] I tried a few\nvariations of the t1 table by adding an additional column to pad out\nthe tuple to make it wider. Obviously a wider tuple means fewer\ntuples on the page so less tuples for master's qsort to sort during\ncompactify_tuples(). I did manage to squeeze a bit more performance\nout of the test cases. Yesterday I got 79.166 seconds. This version\ngets me 76.623 seconds.\n\nHere are the results of the various tuple widths:\n\nnarrow width row test: insert into t1 select x,0 from\ngenerate_series(1,10000000) x; (32 byte tuples)\n\npatched: 76.623\nmaster: 137.593\n\nmedium width row test: insert into t1 select x,0,md5(x::text) ||\nmd5((x+1)::Text) from generate_series(1,10000000) x; (about 64 byte\ntuples)\n\npatched: 64.411\nmaster: 95.576\n\nwide row test: insert into t1 select x,0,(select\nstring_agg(md5(y::text),'') from generate_Series(x,x+30) y) from\ngenerate_series(1,1000000)x; (1024 byte tuples)\n\npatched: 88.653\nmaster: 90.077\n\nChanging the test so instead of having 10 million rows in the table\nand updating a random row 12 million times. 
I put just 10 rows in the\ntable and updated them 12 million times. This results in\ncompactify_tuples() pruning all but 1 row (since autovac can't keep up\nwith this, each row does end up on a page by itself). I wanted to\nensure I didn't regress a workload that master's qsort() version would\nhave done very well at. qsorting 1 element is pretty fast.\n\n10-row narrow test:\n\npatched: 10.633 <--- small regression\nmaster: 10.366\n\nI could special case this and do a memmove without copying the tuple\nto another buffer, but I don't think the slowdown is enough to warrant\nhaving such a special case.\n\nAnother thing I tried was to instead of compacting the page in\ncompactify_tuples(), I just get rid of that function and did the\ncompacting in the existing loop in PageRepairFragmentation(). This\ndoes require changing the ERROR check to a PANIC since we may have\nalready started shuffling tuples around when we find the corrupted\nline pointer. However, I was just unable to make this faster than the\nattached version. I'm still surprised at this as I can completely get\nrid of the itemidbase array. The best run-time I got with this method\nout the original test was 86 seconds, so 10 seconds slower than what\nthe attached can do. So I threw that idea away.\n\nDavid\n\n\n[1] https://www.postgresql.org/message-id/CAApHDvoKwqAzhiuxEt8jSquPJKDpH8DNUZDFUSX9P7DXrJdc3Q@mail.gmail.com",
"msg_date": "Wed, 9 Sep 2020 04:30:15 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
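The copy-out strategy with the measured ~75% cutoff described above can be sketched as follows. This is an assumed illustration (the `compact_item` struct, function name, and parameters are not from the patch): when few tuples survive pruning, copy only the survivors; otherwise one big `memcpy()` of the whole tuple area is cheaper even though it drags the pruned tuples along.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative item descriptor; not the exact struct from bufpage.c. */
typedef struct
{
    uint16_t itemoff;
    uint16_t alignedlen;
} compact_item;

/*
 * Copy the tuple data out to the scratch buffer.  The "/ 4" constant is
 * the cutoff that gave the smallest step in the crossover graphs: below
 * it, per-tuple memcpy() wins; at or above it, copying the entire
 * pd_upper..pd_special range in one call wins.
 */
static void
copy_out_tuples(char *scratch, const char *page, const compact_item *items,
                int nitems, int total_tuples,
                uint16_t pd_upper, uint16_t pd_special)
{
    if (nitems < total_tuples / 4)
    {
        /* few survivors: copy only the live tuples */
        for (int i = 0; i < nitems; i++)
            memcpy(scratch + items[i].itemoff,
                   page + items[i].itemoff,
                   items[i].alignedlen);
    }
    else
    {
        /* most survive: one big copy of the whole tuple area */
        memcpy(scratch + pd_upper, page + pd_upper,
               (size_t) (pd_special - pd_upper));
    }
}
```

Either way the second pass (packing the tuples back into the page) reads only from `scratch`, so it needs no ordering.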
{
"msg_contents": "On Wed, Sep 9, 2020 at 3:47 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Tue, 8 Sep 2020 at 12:08, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > One thought is that if we're going to copy everything out and back in\n> > again, we might want to consider doing it in a\n> > memory-prefetcher-friendly order. Would it be a good idea to\n> > rearrange the tuples to match line pointer order, so that the copying\n> > work and also later sequential scans are in a forward direction?\n>\n> That's an interesting idea but wouldn't that require both the copy to\n> the separate buffer *and* a qsort? That's the worst of both\n> implementations. We'd need some other data structure too in order to\n> get the index of the sorted array by reverse lineitem point, which\n> might require an additional array and an additional sort.\n\nWell I may not have had enough coffee yet but I thought you'd just\nhave to spin though the item IDs twice. Once to compute sum(lp_len)\nso you can compute the new pd_upper, and the second time to copy the\ntuples from their random locations on the temporary page to new\nsequential locations, so that afterwards item ID order matches offset\norder.\n\n\n",
"msg_date": "Wed, 9 Sep 2020 05:37:54 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising compactify_tuples()"
},
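The two-pass idea above can be sketched like this. It is a hypothetical rendering, not the patch's code: pass 1 sums the lengths to find the new `pd_upper`, pass 2 lays the tuples out sequentially from there in item ID order, reading the old images from a scratch copy of the page, so no sort is needed to end with offset order matching line pointer order.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative item descriptor; not the exact struct from bufpage.c. */
typedef struct
{
    uint16_t itemoff;
    uint16_t alignedlen;
} compact_item;

/*
 * Pass 1: sum(lp_len) gives the new pd_upper.
 * Pass 2: copy tuples from their old (random) offsets in the scratch
 * copy to sequential offsets in the page, in item ID order, so that
 * afterwards item ID order matches offset order (correlation = 1) and
 * the writes run in a memory-forwards, prefetch-friendly direction.
 */
static uint16_t
compact_in_item_order(char *page, compact_item *items, int nitems,
                      const char *scratch, uint16_t pd_special)
{
    uint32_t total = 0;
    uint16_t new_upper;
    uint16_t off;

    for (int i = 0; i < nitems; i++)
        total += items[i].alignedlen;

    new_upper = (uint16_t) (pd_special - total);
    off = new_upper;

    for (int i = 0; i < nitems; i++)
    {
        memcpy(page + off, scratch + items[i].itemoff,
               items[i].alignedlen);
        items[i].itemoff = off;
        off += items[i].alignedlen;
    }
    return new_upper;
}
```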
{
"msg_contents": "On Wed, 9 Sep 2020 at 05:38, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Sep 9, 2020 at 3:47 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > On Tue, 8 Sep 2020 at 12:08, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > One thought is that if we're going to copy everything out and back in\n> > > again, we might want to consider doing it in a\n> > > memory-prefetcher-friendly order. Would it be a good idea to\n> > > rearrange the tuples to match line pointer order, so that the copying\n> > > work and also later sequential scans are in a forward direction?\n> >\n> > That's an interesting idea but wouldn't that require both the copy to\n> > the separate buffer *and* a qsort? That's the worst of both\n> > implementations. We'd need some other data structure too in order to\n> > get the index of the sorted array by reverse lineitem point, which\n> > might require an additional array and an additional sort.\n>\n> Well I may not have had enough coffee yet but I thought you'd just\n> have to spin though the item IDs twice. Once to compute sum(lp_len)\n> so you can compute the new pd_upper, and the second time to copy the\n> tuples from their random locations on the temporary page to new\n> sequential locations, so that afterwards item ID order matches offset\n> order.\n\nI think you were adequately caffeinated. You're right that this is\nfairly simple to do, but it looks even more simple than looping twice\nof the array. I think it's just a matter of looping over the\nitemidbase backwards and putting the higher itemid tuples at the end\nof the page. I've done it this way in the attached patch.\n\nI also added a presorted path which falls back on doing memmoves\nwithout the temp buffer when the itemidbase array indicates that\nhigher lineitems all have higher offsets. 
I'm doing the presort check\nin the calling function since that loops over the lineitems already.\nWe can just memmove the tuples in reverse order without overwriting\nany yet to be moved tuples when these are in order.\n\nAlso, I added code to collapse the memcpy and memmoves for adjacent\ntuples so that we perform the minimal number of calls to those\nfunctions. Once we've previously compacted a page it seems that the\ncode is able to reduce the number of calls significantly. I added\nsome logging and reviewed at after a run of the benchmark and saw that\nfor about 192 tuples we're mostly just doing 3-4 memcpys in the\nnon-presorted path and just 2 memmoves, for the presorted code path.\nI also found that in my test the presorted path was only taken 12.39%\nof the time. Trying with 120 million UPDATEs instead of 12 million in\nthe test ended up reducing this to just 10.89%. It seems that it'll\njust be 1 or 2 tuples spoiling it since new tuples will still be added\nearlier in the page after we free up space to add more.\n\nI also experimented seeing what would happen if I also tried to\ncollapse the memcpys for copying to the temp buffer. The performance\ngot a little worse from doing that. So I left that code #ifdef'd out\n\nWith the attached v3, performance is better. The test now runs\nrecovery 65.6 seconds, vs master's 148.5 seconds. So about 2.2x\nfaster.\n\nWe should probably consider what else can be done to try to write\npages with tuples for earlier lineitems earlier in the page. VACUUM\nFULLs and friends will switch back to the opposite order when\nrewriting the heap.\n\nAlso fixed my missing libc debug symbols:\n\n 24.90% postgres postgres [.] PageRepairFragmentation\n 15.26% postgres libc-2.31.so [.] __memmove_avx_unaligned_erms\n 9.61% postgres postgres [.] hash_search_with_hash_value\n 8.03% postgres postgres [.] compactify_tuples\n 6.25% postgres postgres [.] XLogReadBufferExtended\n 3.74% postgres postgres [.] 
PinBuffer\n 2.25% postgres postgres [.] hash_bytes\n 1.79% postgres postgres [.] heap_xlog_update\n 1.47% postgres postgres [.] LWLockRelease\n 1.44% postgres postgres [.] XLogReadRecord\n 1.33% postgres postgres [.] PageGetHeapFreeSpace\n 1.16% postgres postgres [.] DecodeXLogRecord\n 1.13% postgres postgres [.] pg_comp_crc32c_sse42\n 1.12% postgres postgres [.] LWLockAttemptLock\n 1.09% postgres postgres [.] StartupXLOG\n 0.90% postgres postgres [.] ReadBuffer_common\n 0.84% postgres postgres [.] SlruSelectLRUPage\n 0.74% postgres libc-2.31.so [.] __memcmp_avx2_movbe\n 0.68% postgres [kernel.kallsyms] [k] copy_user_generic_string\n 0.66% postgres postgres [.] PageAddItemExtended\n 0.66% postgres postgres [.] PageIndexTupleOverwrite\n 0.62% postgres postgres [.] smgropen\n 0.60% postgres postgres [.] ReadPageInternal\n 0.57% postgres postgres [.] GetPrivateRefCountEntry\n 0.52% postgres postgres [.] heap_redo\n 0.51% postgres postgres [.] AdvanceNextFullTransactionIdPastXid\n\nDavid",
"msg_date": "Thu, 10 Sep 2020 02:33:56 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
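The presorted path described above can be sketched as follows, under assumed names (`compact_item` and both functions are illustrative, not the patch's code): when higher line items already have higher offsets, tuples can be shifted towards `pd_special` in place, last tuple first, so no `memmove()` overwrites a tuple that still needs moving.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Illustrative item descriptor; not the exact struct from bufpage.c. */
typedef struct
{
    uint16_t itemoff;
    uint16_t alignedlen;
} compact_item;

/* True when higher line items all have higher offsets. */
static bool
items_presorted(const compact_item *items, int nitems)
{
    for (int i = 1; i < nitems; i++)
        if (items[i].itemoff < items[i - 1].itemoff)
            return false;
    return true;
}

/*
 * Presorted path: no temp buffer needed.  Moving the highest-offset
 * tuple first means each destination lies at or above every source not
 * yet copied.  (The patch additionally collapses runs of adjacent
 * tuples into a single memmove(); that refinement is omitted here.)
 */
static uint16_t
compact_presorted(char *page, compact_item *items, int nitems,
                  uint16_t pd_special)
{
    uint16_t upper = pd_special;

    for (int i = nitems - 1; i >= 0; i--)
    {
        upper -= items[i].alignedlen;
        memmove(page + upper, page + items[i].itemoff,
                items[i].alignedlen);
        items[i].itemoff = upper;
    }
    return upper;
}
```

A caller would run `items_presorted()` while it loops over the line items anyway and fall back to the scratch-buffer path when it returns false.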
{
"msg_contents": "On Thu, Sep 10, 2020 at 2:34 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I think you were adequately caffeinated. You're right that this is\n> fairly simple to do, but it looks even more simple than looping twice\n> of the array. I think it's just a matter of looping over the\n> itemidbase backwards and putting the higher itemid tuples at the end\n> of the page. I've done it this way in the attached patch.\n\nYeah, I was wondering how to make as much of the algorithm work in a\nmemory-forwards direction as possible (even the item pointer access),\nbut it was just a hunch. Once you have the adjacent-tuple merging\nthing so you're down to just a couple of big memcpy calls, it's\nprobably moot anyway.\n\n> I also added a presorted path which falls back on doing memmoves\n> without the temp buffer when the itemidbase array indicates that\n> higher lineitems all have higher offsets. I'm doing the presort check\n> in the calling function since that loops over the lineitems already.\n> We can just memmove the tuples in reverse order without overwriting\n> any yet to be moved tuples when these are in order.\n\nGreat.\n\nI wonder if we could also identify a range at the high end that is\nalready correctly sorted and maximally compacted so it doesn't even\nneed to be copied out.\n\n+ * Do the tuple compactification. Collapse memmove calls for adjacent\n+ * tuples.\n\ns/memmove/memcpy/\n\n> With the attached v3, performance is better. The test now runs\n> recovery 65.6 seconds, vs master's 148.5 seconds. So about 2.2x\n> faster.\n\nOn my machine I'm seeing 57s, down from 86s on unpatched master, for\nthe simple pgbench workload from\nhttps://github.com/macdice/redo-bench/. That's not quite what you're\nreporting but it still blows the doors off the faster sorting patch,\nwhich does it in 74s.\n\n> We should probably consider what else can be done to try to write\n> pages with tuples for earlier lineitems earlier in the page. 
VACUUM\n> FULLs and friends will switch back to the opposite order when\n> rewriting the heap.\n\nYeah, and also bulk inserts/COPY. Ultimately if we flipped our page\nformat on its head that'd come for free, but that'd be a bigger\nproject with more ramifications.\n\nSo one question is whether we want to do the order-reversing as part\nof this patch, or wait for a more joined-up project to make lots of\ncode paths collude on making scan order match memory order\n(corellation = 1). Most or all of the gain from your patch would\npresumably still apply if did the exact opposite and forced offset\norder to match reverse-item ID order (correlation = -1), which also\nhappens to be the initial state when you insert tuples today; you'd\nstill tend towards a state that allows nice big memmov/memcpy calls\nduring page compaction. IIUC currently we start with correlation -1\nand then tend towards correlation = 0 after many random updates\nbecause we can't change the order, so it gets scrambled over time.\nI'm not sure what I think about that.\n\nPS You might as well post future patches with .patch endings so that\nthe cfbot tests them; it seems pretty clear now that a patch to\noptimise sorting (as useful as it may be for future work) can't beat a\npatch to skip it completely. I took the liberty of switching the\nauthor and review names in the commitfest entry to reflect this.\n\n\n",
"msg_date": "Thu, 10 Sep 2020 10:39:56 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Thu, 10 Sep 2020 at 10:40, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> I wonder if we could also identify a range at the high end that is\n> already correctly sorted and maximally compacted so it doesn't even\n> need to be copied out.\n\nI've experimented quite a bit with this patch today. I think I've\ntested every idea you've mentioned here, so there's quite a lot of\ninformation to share.\n\nI did write code to skip the copy to the separate buffer for tuples\nthat are already in the correct place and with a version of the patch\nwhich keeps tuples in their traditional insert order (later lineitem's\ntuple located earlier in the page) I see generally a very large\nnumber of tuples being skipped with this method.  See attached\nv4b_skipped_tuples.png. The vertical axis is the number of\ncompactify_tuples() calls during the benchmark where we were able to\nskip that number of tuples. The average skipped tuples over all calls\nduring recovery was 81 tuples, so we get to skip about half the tuples\nin the page doing this on this benchmark.\n\n> > With the attached v3, performance is better. The test now runs\n> > recovery 65.6 seconds, vs master's 148.5 seconds. So about 2.2x\n> > faster.\n>\n> On my machine I'm seeing 57s, down from 86s on unpatched master, for\n> the simple pgbench workload from\n> https://github.com/macdice/redo-bench/.  That's not quite what you're\n> reporting but it still blows the doors off the faster sorting patch,\n> which does it in 74s.\n\nThanks for running the numbers on that. I might be seeing a bit more\ngain as I dropped the fillfactor down to 85. That seems to cause more\ncalls to compactify_tuples().\n\n> So one question is whether we want to do the order-reversing as part\n> of this patch, or wait for a more joined-up project to make lots of\n> code paths collude on making scan order match memory order\n> (correlation = 1). 
Most or all of the gain from your patch would\n> presumably still apply if we did the exact opposite and forced offset\n> order to match reverse-item ID order (correlation = -1), which also\n> happens to be the initial state when you insert tuples today; you'd\n> still tend towards a state that allows nice big memmove/memcpy calls\n> during page compaction.  IIUC currently we start with correlation -1\n> and then tend towards correlation = 0 after many random updates\n> because we can't change the order, so it gets scrambled over time.\n> I'm not sure what I think about that.\n\nSo I did lots of benchmarking with both methods and my conclusion is\nthat I think we should stick to the traditional INSERT order with this\npatch. But we should come back and revisit that more generally one\nday. The main reason that I'm put off flipping the tuple order is that\nit significantly reduces the number of times we hit the preordered\ncase.  We go to all the trouble of reversing the order only to have it\nbroken again when we add 1 more tuple to the page.  If we keep this\nthe traditional way, then it's much more likely that we'll maintain\nthat pre-order and hit the more optimal memmove code path.\n\nTo put that into numbers, using my benchmark, I see 13.25% of calls to\ncompactify_tuples() hitting the preordered case when the tuple order\nis reversed (i.e. earlier\nlineitems earlier in the page). However, if I keep the lineitems in\ntheir proper order where earlier lineitems are at the end of the page\nthen I hit the preordered case 60.37% of the time.  Hitting the\npresorted case that much more often really speeds things\nup even further.\n\nI've attached some benchmark results as benchmark_table.txt, and\nbenchmark_chart.png.\n\nThe v4 patch implements your copy skipping idea for the prefix of\ntuples which are already in the correct location. v4b is that patch\nbut changed to keep the tuples in the traditional order. 
v5 was me\nexperimenting further by adding a precalculated end of tuple Offset to\nsave having to calculate it each time by adding itemoff and alignedlen\ntogether. It's not an improvement, but I just wanted to mention\nthat I tried it.\n\nIf you look at the benchmark results, you'll see that v4b is the winner.\nThe v4b + NOTUSED is me changing the #ifdef NOTUSED part so that we\nuse the smarter code to populate the backup buffer. Remember that I\ngot 60.37% of calls hitting the preordered case in v4b, so less than\n40% had to do the backup buffer. So the slowness of that code is more\nprominent when you compare v5 to v5 NOTUSED since the benchmark is\nhitting the non-preordered code 86.75% of the time with that version.\n\n\n> PS You might as well post future patches with .patch endings so that\n> the cfbot tests them; it seems pretty clear now that a patch to\n> optimise sorting (as useful as it may be for future work) can't beat a\n> patch to skip it completely.  I took the liberty of switching the\n> author and reviewer names in the commitfest entry to reflect this.\n\nThank you.\n\nI've attached v4b (b is for backwards since the traditional backwards\ntuple order is maintained). v4b seems to be able to run my benchmark\nin 63 seconds. I did 10 runs today of yesterday's v3 patch and got an\naverage of 72.8 seconds, so quite a big improvement from yesterday.\n\nThe profile indicates there's now bigger fish to fry:\n\n  25.25%  postgres  postgres            [.] PageRepairFragmentation\n  13.57%  postgres  libc-2.31.so        [.] __memmove_avx_unaligned_erms\n  10.87%  postgres  postgres            [.] hash_search_with_hash_value\n   7.07%  postgres  postgres            [.] XLogReadBufferExtended\n   5.57%  postgres  postgres            [.] compactify_tuples\n   4.06%  postgres  postgres            [.] PinBuffer\n   2.78%  postgres  postgres            [.] heap_xlog_update\n   2.42%  postgres  postgres            [.] hash_bytes\n   1.65%  postgres  postgres            [.] XLogReadRecord\n   1.55%  postgres  postgres            [.] LWLockRelease\n   1.42%  postgres  postgres            [.] 
SlruSelectLRUPage\n 1.38% postgres postgres [.] PageGetHeapFreeSpace\n 1.20% postgres postgres [.] DecodeXLogRecord\n 1.16% postgres postgres [.] pg_comp_crc32c_sse42\n 1.15% postgres postgres [.] StartupXLOG\n 1.14% postgres postgres [.] LWLockAttemptLock\n 0.90% postgres postgres [.] ReadBuffer_common\n 0.81% postgres libc-2.31.so [.] __memcmp_avx2_movbe\n 0.71% postgres postgres [.] smgropen\n 0.65% postgres postgres [.] PageAddItemExtended\n 0.60% postgres postgres [.] PageIndexTupleOverwrite\n 0.57% postgres postgres [.] ReadPageInternal\n 0.54% postgres postgres [.] UnpinBuffer.constprop.0\n 0.53% postgres postgres [.] AdvanceNextFullTransactionIdPastXid\n\nI'll still class v4b as POC grade. I've not thought too hard about\ncomments or done a huge amount of testing on it. We'd better decide on\nall the exact logic first.\n\nI've also attached another tiny patch that I think is pretty useful\nseparate from this. It basically changes:\n\nLOG: redo done at 0/D518FFD0\n\ninto:\n\nLOG: redo done at 0/D518FFD0 system usage: CPU: user: 58.93 s,\nsystem: 0.74 s, elapsed: 62.31 s\n\n(I was getting sick of having to calculate the time spent from the log\ntimestamps.)\n\nDavid",
"msg_date": "Fri, 11 Sep 2020 01:45:29 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Fri, 11 Sep 2020 at 01:45, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached v4b (b is for backwards since the traditional backwards\n> tuple order is maintained). v4b seems to be able to run my benchmark\n> in 63 seconds. I did 10 runs today of yesterday's v3 patch and got an\n> average of 72.8 seconds, so quite a big improvement from yesterday.\n\nAfter reading the patch back again I realised there are a few more\nthings that can be done to make it a bit faster.\n\n1. When doing the backup buffer, use code to skip over tuples that\ndon't need to be moved at the end of the page and only memcpy() tuples\nearlier than that.\n2. The position that's determined in #1 can be used to start the\nmemcpy() loop at the first tuple that needs to be moved.\n3. In the memmove() code for the preorder check, we can do a similar\nskip of the tuples at the end of the page that don't need to be moved.\n\nI also ditched the #ifdef'd out code as I'm pretty sure #1 and #2 are\na much better way of doing the backup buffer given how many tuples are\nlikely to be skipped due to maintaining the traditional tuple order.\n\nThat gets my benchmark down to 60.8 seconds, so 2.2 seconds better than v4b.\n\nI've attached v6b and an updated chart showing the results of the 10\nruns I did of it.\n\nDavid",
"msg_date": "Fri, 11 Sep 2020 03:53:22 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Fri, Sep 11, 2020 at 1:45 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Thu, 10 Sep 2020 at 10:40, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I wonder if we could also identify a range at the high end that is\n> > already correctly sorted and maximally compacted so it doesn't even\n> > need to be copied out.\n>\n> I've experimented quite a bit with this patch today. I think I've\n> tested every idea you've mentioned here, so there's quite a lot of\n> information to share.\n>\n> I did write code to skip the copy to the separate buffer for tuples\n> that are already in the correct place and with a version of the patch\n> which keeps tuples in their traditional insert order (later lineitem's\n> tuple located earlier in the page) I see generally a very large\n> number of tuples being skipped with this method.  See attached\n> v4b_skipped_tuples.png. The vertical axis is the number of\n> compactify_tuples() calls during the benchmark where we were able to\n> skip that number of tuples. The average skipped tuples over all calls\n> during recovery was 81 tuples, so we get to skip about half the tuples\n> in the page doing this on this benchmark.\n\nExcellent.\n\n> > So one question is whether we want to do the order-reversing as part\n> > of this patch, or wait for a more joined-up project to make lots of\n> > code paths collude on making scan order match memory order\n> > (correlation = 1).  Most or all of the gain from your patch would\n> > presumably still apply if we did the exact opposite and forced offset\n> > order to match reverse-item ID order (correlation = -1), which also\n> > happens to be the initial state when you insert tuples today; you'd\n> > still tend towards a state that allows nice big memmove/memcpy calls\n> > during page compaction. 
IIUC currently we start with correlation -1\n> > and then tend towards correlation = 0 after many random updates\n> > because we can't change the order, so it gets scrambled over time.\n> > I'm not sure what I think about that.\n>\n> So I did lots of benchmarking with both methods and my conclusion is\n> that I think we should stick to the traditional INSERT order with this\n> patch. But we should come back and revisit that more generally one\n> day. The main reason that I'm put off flipping the tuple order is that\n> it significantly reduces the number of times we hit the preordered\n> case. We go to all the trouble of reversing the order only to have it\n> broken again when we add 1 more tuple to the page. If we keep this\n> the traditional way, then it's much more likely that we'll maintain\n> that pre-order and hit the more optimal memmove code path.\n\nRight, that makes sense. Thanks for looking into it!\n\n> I've also attached another tiny patch that I think is pretty useful\n> separate from this. It basically changes:\n>\n> LOG: redo done at 0/D518FFD0\n>\n> into:\n>\n> LOG: redo done at 0/D518FFD0 system usage: CPU: user: 58.93 s,\n> system: 0.74 s, elapsed: 62.31 s\n\n+1\n\n\n",
"msg_date": "Fri, 11 Sep 2020 17:44:24 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Fri, Sep 11, 2020 at 3:53 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> That gets my benchmark down to 60.8 seconds, so 2.2 seconds better than v4b.\n\n. o O ( I wonder if there are opportunities to squeeze some more out\nof this with __builtin_prefetch... )\n\n> I've attached v6b and an updated chart showing the results of the 10\n> runs I did of it.\n\nOne failure seen like this while running check-world (cfbot):\n\n#2 0x000000000091f93f in ExceptionalCondition\n(conditionName=conditionName@entry=0x987284 \"nitems > 0\",\nerrorType=errorType@entry=0x97531d \"FailedAssertion\",\nfileName=fileName@entry=0xa9df0d \"bufpage.c\",\nlineNumber=lineNumber@entry=442) at assert.c:67\n\n\n",
"msg_date": "Fri, 11 Sep 2020 17:48:03 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Fri, 11 Sep 2020 at 17:48, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Sep 11, 2020 at 3:53 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > That gets my benchmark down to 60.8 seconds, so 2.2 seconds better than v4b.\n>\n> . o O ( I wonder if there are opportunities to squeeze some more out\n> of this with __builtin_prefetch... )\n\nI'd be tempted to go down that route if we had macros already defined\nfor that, but it looks like we don't.\n\n> > I've attached v6b and an updated chart showing the results of the 10\n> > runs I did of it.\n>\n> One failure seen like this while running check-world (cfbot):\n>\n> #2 0x000000000091f93f in ExceptionalCondition\n> (conditionName=conditionName@entry=0x987284 \"nitems > 0\",\n> errorType=errorType@entry=0x97531d \"FailedAssertion\",\n> fileName=fileName@entry=0xa9df0d \"bufpage.c\",\n> lineNumber=lineNumber@entry=442) at assert.c:67\n\nThanks. I neglected to check that the other call site properly checked for\nnitems > 0.  Looks like PageIndexMultiDelete() relied on\ncompactify_tuples() to set pd_upper to pd_special when nitems == 0.\nThat's not what PageRepairFragmentation() did, so I've now aligned the\ntwo so they work the same way.\n\nI've attached patches in git format-patch format. I'm proposing to\ncommit these in about 48 hours time unless there's some sort of\nobjection before then.\n\nThanks for reviewing this.\n\nDavid",
"msg_date": "Mon, 14 Sep 2020 17:27:00 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "David Rowley wrote:\n\n> I've attached patches in git format-patch format. I'm proposing to commit these in about 48 hours time unless there's some sort of objection before then.\n\nHi David, no objections at all; I've just got reaffirming results here. As per [1] (SLRU thread, but with combined results from qsort testing), I've repeated the crash-recovery tests here again:\n\nTEST0a: check-world passes\nTEST0b: brief check: DB after recovery returns correct data which was present only in the WAL stream - SELECT sum(c) from sometable\n\nTEST1: workload profile test as per standard TPC-B [2], with the majority of records in the WAL stream being Heap/HOT_UPDATE, on the same system with NVMe as described there.\n\nresults of master (62e221e1c01e3985d2b8e4b68c364f8486c327ab) @ 15/09/2020 as baseline:\n15.487, 1.013\n15.789, 1.033\n15.942, 1.118\n\nthe profile looks mostly similar:\n    17.14%  postgres  libc-2.17.so        [.] __memmove_ssse3_back\n            ---__memmove_ssse3_back\n               compactify_tuples\n               PageRepairFragmentation\n               heap2_redo\n               StartupXLOG\n     8.16%  postgres  postgres            [.] hash_search_with_hash_value\n            ---hash_search_with_hash_value\n               |--4.49%--BufTableLookup\n[..]\n                --3.67%--smgropen\n\nmaster with 2 patches by David (v8-0001-Optimize-compactify_tuples-function.patch + v8-0002-Report-resource-usage-at-the-end-of-recovery.patch): \n14.236, 1.02\n14.431, 1.083\n14.256, 1.02\n\nso 9-10% faster in this simple verification check. If I had pgbench running the result would probably be better. Profile is similar:\n\n    13.88%  postgres  libc-2.17.so      [.] __memmove_ssse3_back\n            ---__memmove_ssse3_back\n               --13.47%--compactify_tuples\n\n    10.61%  postgres  postgres          [.] 
hash_search_with_hash_value\n ---hash_search_with_hash_value\n |--5.31%--smgropen\n[..]\n --5.31%--BufTableLookup\n\n\nTEST2: update-only test, just as you performed in [3] to trigger the hotspot, with table fillfactor=85 and update.sql (100% updates, ~40% Heap/HOT_UPDATE [N], ~40-50% [record sizes]) with slightly different amount of data.\n\nresults of master as baseline:\n233.377, 0.727\n233.233, 0.72\n234.085, 0.729\n\nwith profile:\n 24.49% postgres postgres [.] pg_qsort\n 17.01% postgres postgres [.] PageRepairFragmentation\n 12.93% postgres postgres [.] itemoffcompare\n(sometimes I saw also a ~13% swapfunc)\n\nresults of master with above 2 patches, 2.3x speedup:\n101.6, 0.709\n101.837, 0.71\n102.243, 0.712\n\nwith profile (so yup the qsort is gone, hurray!):\n\n 32.65% postgres postgres [.] PageRepairFragmentation\n ---PageRepairFragmentation\n heap2_redo\n StartupXLOG\n 10.88% postgres postgres [.] compactify_tuples\n ---compactify_tuples\n 8.84% postgres postgres [.] hash_search_with_hash_value\n\nBTW: this message \"redo done at 0/9749FF70 system usage: CPU: user: 13.46 s, system: 0.78 s, elapsed: 14.25 s\" is priceless addition :) \n\n-J.\n\n[1] - https://www.postgresql.org/message-id/flat/VI1PR0701MB696023DA7815207237196DC8F6570%40VI1PR0701MB6960.eurprd07.prod.outlook.com#188ad4e772615999ec427486d1066948\n[2] - pgbench -i -s 100, pgbench -c8 -j8 -T 240, ~1.6GB DB with 2.3GB after crash in pg_wal to be replayed\n[3] - https://www.postgresql.org/message-id/CAApHDvoKwqAzhiuxEt8jSquPJKDpH8DNUZDFUSX9P7DXrJdc3Q%40mail.gmail.com , in my case: pgbench -c 16 -j 16 -T 240 -f update.sql , ~1GB DB with 4.3GB after crash in pg_wal to be replayed\n\n",
"msg_date": "Tue, 15 Sep 2020 14:10:24 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Wed, 16 Sep 2020 at 02:10, Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> BTW: this message \"redo done at 0/9749FF70 system usage: CPU: user: 13.46 s, system: 0.78 s, elapsed: 14.25 s\" is priceless addition :)\n\nThanks a lot for the detailed benchmark results and profiles. That was\nuseful. I've pushed both patches now. I did a bit of a sweep of the\ncomments on the 0001 patch before pushing it.\n\nI also did some further performance tests of something other than\nrecovery. I can also report a good performance improvement in VACUUM.\nSomething around the ~25% reduction mark\n\npsql -c \"drop table if exists t1;\" postgres > /dev/null\npsql -c \"create table t1 (a int primary key, b int not null) with\n(autovacuum_enabled = false, fillfactor = 85);\" postgres > /dev/null\npsql -c \"insert into t1 select x,0 from generate_series(1,10000000)\nx;\" postgres > /dev/null\npsql -c \"drop table if exists log_wal;\" postgres > /dev/null\npsql -c \"create table log_wal (lsn pg_lsn not null);\" postgres > /dev/null\npsql -c \"insert into log_wal values(pg_current_wal_lsn());\" postgres > /dev/null\npgbench -n -f update.sql -t 60000 -c 200 -j 200 -M prepared postgres\npsql -c \"select 'Used ' ||\npg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), lsn)) || ' of\nWAL' from log_wal limit 1;\" postgres\npsql postgres\n\n\\timing on\nVACUUM t1;\n\nFillfactor = 85\n\npatched:\n\nTime: 2917.515 ms (00:02.918)\nTime: 2944.564 ms (00:02.945)\nTime: 3004.136 ms (00:03.004)\n\nmaster:\nTime: 4050.355 ms (00:04.050)\nTime: 4104.999 ms (00:04.105)\nTime: 4158.285 ms (00:04.158)\n\nFillfactor = 100\n\nPatched:\n\nTime: 4245.676 ms (00:04.246)\nTime: 4251.485 ms (00:04.251)\nTime: 4247.802 ms (00:04.248)\n\nMaster:\nTime: 5459.433 ms (00:05.459)\nTime: 5917.356 ms (00:05.917)\nTime: 5430.986 ms (00:05.431)\n\nDavid\n\n\n",
"msg_date": "Wed, 16 Sep 2020 13:29:50 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Wed, Sep 16, 2020 at 1:30 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> Thanks a lot for the detailed benchmark results and profiles. That was\n> useful. I've pushed both patches now. I did a bit of a sweep of the\n> comments on the 0001 patch before pushing it.\n>\n> I also did some further performance tests of something other than\n> recovery. I can also report a good performance improvement in VACUUM.\n> Something around the ~25% reduction mark\n\nWonderful results.  It must be rare for such a localised patch to\nhave such a large effect on such common workloads.\n\n\n",
"msg_date": "Wed, 16 Sep 2020 14:01:21 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Tue, Sep 15, 2020 at 7:01 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Sep 16, 2020 at 1:30 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > I also did some further performance tests of something other than\n> > recovery. I can also report a good performance improvement in VACUUM.\n> > Something around the ~25% reduction mark\n>\n> Wonderful results.  It must be rare for such a localised patch to\n> have such a large effect on such common workloads.\n\nYes, that's terrific.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 15 Sep 2020 19:09:43 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Thu, 10 Sep 2020 at 14:45, David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I've also attached another tiny patch that I think is pretty useful\n> separate from this. It basically changes:\n>\n> LOG: redo done at 0/D518FFD0\n>\n> into:\n>\n> LOG: redo done at 0/D518FFD0 system usage: CPU: user: 58.93 s,\n> system: 0.74 s, elapsed: 62.31 s\n>\n> (I was getting sick of having to calculate the time spent from the log\n> timestamps.)\n\nI really like this patch, thanks for proposing it.\n\nShould pg_rusage_init(&ru0);\nbe at the start of the REDO loop, since you only use it if we take that path?\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\nMission Critical Databases\n\n\n",
"msg_date": "Wed, 16 Sep 2020 19:54:24 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On Wed, Sep 16, 2020 at 2:54 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> I really like this patch, thanks for proposing it.\n\nI'm pleased to be able to say that I agree completely with this\ncomment from Simon. :-)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 16 Sep 2020 15:21:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "On 2020-09-16 14:01:21 +1200, Thomas Munro wrote:\n> On Wed, Sep 16, 2020 at 1:30 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > Thanks a lot for the detailed benchmark results and profiles. That was\n> > useful. I've pushed both patches now. I did a bit of a sweep of the\n> > comments on the 0001 patch before pushing it.\n> >\n> > I also did some further performance tests of something other than\n> > recovery. I can also report a good performance improvement in VACUUM.\n> > Something around the ~25% reduction mark\n> \n> Wonderful results.  It must be rare for such a localised patch to\n> have such a large effect on such common workloads.\n\nIndeed!\n\n\n",
"msg_date": "Wed, 16 Sep 2020 16:05:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
},
{
"msg_contents": "Hi Simon,\n\nOn Thu, 17 Sep 2020 at 06:54, Simon Riggs <simon@2ndquadrant.com> wrote:\n> Should pg_rusage_init(&ru0);\n> be at the start of the REDO loop, since you only use it if we take that path?\n\nThanks for commenting.\n\nI may be misunderstanding your words, but as far as I see it the\npg_rusage_init() is only called if we're going to go into recovery.\nThe pg_rusage_init() and pg_rusage_show() seem to be in the same\nscope, so I can't quite see how we could do the pg_rusage_init()\nwithout the pg_rusage_show().  Oh wait, there's the possibility that\nif recoveryTargetAction == RECOVERY_TARGET_ACTION_SHUTDOWN we'll\nexit before we report end of recovery.  I'm pretty sure I'm\nmisunderstanding you though.\n\nIf it's easier to explain, please just post a small patch with what you mean.\n\nDavid\n\n\n",
"msg_date": "Thu, 17 Sep 2020 11:21:37 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimising compactify_tuples()"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16583\nLogged by: Jiří Fejfar\nEmail address: jurafejfar@gmail.com\nPostgreSQL version: 12.4\nOperating system: debian 10.5\nDescription: \n\nJoining two identical tables placed in separate DBs with different collations,\naccessed through postgres_fdw, fails when a merge join is used. Some\nrecords are missing from the output (7 vs. 16 rows in the example). See this snippet\nhttps://gitlab.com/-/snippets/2004522 (or code pasted below) for a psql script\nreproducing the error, along with the expected output (working fine on Alpine Linux).\nThe same behavior is also observed on postgres v13.\r\n\r\nRegards, Jiří Fejfar.\r\n\r\n--------------------------------system---------------------\r\ndebian\r\ncat /etc/debian_version \r\n10.5\r\n\r\nldd --version\r\nldd (Debian GLIBC 2.28-10) 2.28\r\nCopyright © 2018 Free Software Foundation, Inc.\r\n\r\n--------\r\nalpine\r\ncat /etc/alpine-release \r\n3.12.0\r\n\r\nldd --version\r\nmusl libc (x86_64)\r\nVersion 1.1.24\r\nDynamic Program Loader\r\nUsage: /lib/ld-musl-x86_64.so.1 [options] [--] pathname\r\n\r\n\r\n------------------------psql script--------------------\r\nDROP DATABASE IF EXISTS db_en; DROP DATABASE IF EXISTS db_cz; DROP DATABASE\nIF EXISTS db_join;\r\nDROP USER IF EXISTS fdw_user_en; DROP USER IF EXISTS fdw_user_cz;\r\n\r\nCREATE DATABASE db_en encoding UTF8 LC_COLLATE 'en_US.UTF-8' LC_CTYPE\n'en_US.UTF-8' TEMPLATE template0;\r\nCREATE DATABASE db_cz encoding UTF8 LC_COLLATE 'cs_CZ.UTF-8' LC_CTYPE\n'cs_CZ.UTF-8' TEMPLATE template0;\r\nCREATE DATABASE db_join encoding UTF8 LC_COLLATE 'en_US.UTF-8' LC_CTYPE\n'en_US.UTF-8' TEMPLATE template0;\r\n\r\n\\c db_en\r\n\r\nCREATE TABLE t_nuts (\r\n id INT PRIMARY KEY,\r\n label text\r\n);\r\n\r\nWITH w_labels AS (\r\n VALUES ('CZ0100'), ('CZ0201'), ('CZ0202'), ('CZ0203'), ('CZ0204'),\n('CZ0205'),\r\n ('CZ0206'), ('CZ0207'), ('CZ0208'), ('CZ0209'), ('CZ020A'), ('CZ020B'),\n('CZ020C'), \r\n ('CZ0311'), 
('CZ0312'), ('CZ0313')\r\n)\r\nINSERT INTO t_nuts (id, label)\r\nSELECT\r\n row_number() OVER() AS id,\r\n w_labels.column1 as label FROM w_labels--, generate_series(1, 500)\r\n;\r\n\r\nVACUUM (FULL, ANALYZE) t_nuts;\r\n\r\nSELECT label, count(*) from t_nuts GROUP BY label ORDER BY label;\r\n\r\n\\c db_cz\r\n\r\nCREATE TABLE t_nuts (\r\n id INT PRIMARY KEY,\r\n label text\r\n);\r\n\r\nWITH w_labels AS (\r\n VALUES ('CZ0100'), ('CZ0201'), ('CZ0202'), ('CZ0203'), ('CZ0204'),\n('CZ0205'),\r\n ('CZ0206'), ('CZ0207'), ('CZ0208'), ('CZ0209'), ('CZ020A'), ('CZ020B'),\n('CZ020C'), \r\n ('CZ0311'), ('CZ0312'), ('CZ0313')\r\n)\r\nINSERT INTO t_nuts (id, label)\r\nSELECT\r\n row_number() OVER() AS id,\r\n w_labels.column1 as label FROM w_labels--, generate_series(1, 1000)\r\n;\r\n\r\nVACUUM (FULL, ANALYZE) t_nuts;\r\n\r\nSELECT label, count(*) from t_nuts GROUP BY label ORDER BY label;\r\n\r\n\\c db_en\r\nCREATE USER fdw_user_en WITH PASSWORD 'fdw_pass_en';\r\nGRANT SELECT ON TABLE t_nuts TO fdw_user_en;\r\n\r\n\\c db_join\r\n\r\nCREATE EXTENSION postgres_fdw ;\r\nCREATE SERVER db_en_serv FOREIGN DATA WRAPPER postgres_fdw OPTIONS ( host\n'localhost', port '5432', dbname 'db_en', use_remote_estimate 'True');\r\nCREATE USER MAPPING FOR CURRENT_USER SERVER db_en_serv OPTIONS ( user\n'fdw_user_en', password 'fdw_pass_en');\r\nCREATE SCHEMA en;\r\nIMPORT FOREIGN SCHEMA public LIMIT TO (t_nuts) FROM SERVER db_en_serv INTO\nen;\r\n\r\nSELECT label, count(*) FROM en.t_nuts GROUP BY label ORDER BY label;\r\n\r\n\\c db_cz\r\nCREATE USER fdw_user_cz WITH PASSWORD 'fdw_pass_cz';\r\nGRANT SELECT ON TABLE t_nuts TO fdw_user_cz;\r\n\r\n\\c db_join\r\n\r\nCREATE SERVER db_cz_serv FOREIGN DATA WRAPPER postgres_fdw OPTIONS ( host\n'localhost', port '5432', dbname 'db_cz', use_remote_estimate 'True');\r\nCREATE USER MAPPING FOR CURRENT_USER SERVER db_cz_serv OPTIONS ( user\n'fdw_user_cz', password 'fdw_pass_cz');\r\nCREATE SCHEMA cz;\r\nIMPORT FOREIGN SCHEMA public LIMIT TO (t_nuts) FROM SERVER 
db_cz_serv INTO\ncz;\r\n\r\nSELECT label, count(*) FROM cz.t_nuts GROUP BY label ORDER BY label;\r\n\r\nEXPLAIN (VERBOSE)\r\nSELECT cz__t_nuts.label, count(*)\r\nFROM cz.t_nuts AS cz__t_nuts\r\nINNER JOIN en.t_nuts AS en__t_nuts ON (cz__t_nuts.label =\nen__t_nuts.label)\r\nGROUP BY cz__t_nuts.label;\r\n\r\nSELECT cz__t_nuts.label, count(*)\r\nFROM cz.t_nuts AS cz__t_nuts\r\nINNER JOIN en.t_nuts AS en__t_nuts ON (cz__t_nuts.label =\nen__t_nuts.label)\r\nGROUP BY cz__t_nuts.label;\r\n\r\nselect version();\r\n\r\n------------------------wrong output (Debian, GLIBC 2.28)----\r\nDROP DATABASE\r\nDROP DATABASE\r\nDROP DATABASE\r\nDROP ROLE\r\nDROP ROLE\r\nCREATE DATABASE\r\nCREATE DATABASE\r\nCREATE DATABASE\r\nNyní jste připojeni k databázi \"db_en\" jako uživatel \"postgres\".\r\nCREATE TABLE\r\nINSERT 0 16\r\nVACUUM\r\n label | count \r\n--------+-------\r\n CZ0100 | 1\r\n CZ0201 | 1\r\n CZ0202 | 1\r\n CZ0203 | 1\r\n CZ0204 | 1\r\n CZ0205 | 1\r\n CZ0206 | 1\r\n CZ0207 | 1\r\n CZ0208 | 1\r\n CZ0209 | 1\r\n CZ020A | 1\r\n CZ020B | 1\r\n CZ020C | 1\r\n CZ0311 | 1\r\n CZ0312 | 1\r\n CZ0313 | 1\r\n(16 řádek)\r\n\r\nNyní jste připojeni k databázi \"db_cz\" jako uživatel \"postgres\".\r\nCREATE TABLE\r\nINSERT 0 16\r\nVACUUM\r\n label | count \r\n--------+-------\r\n CZ0100 | 1\r\n CZ020A | 1\r\n CZ020B | 1\r\n CZ020C | 1\r\n CZ0201 | 1\r\n CZ0202 | 1\r\n CZ0203 | 1\r\n CZ0204 | 1\r\n CZ0205 | 1\r\n CZ0206 | 1\r\n CZ0207 | 1\r\n CZ0208 | 1\r\n CZ0209 | 1\r\n CZ0311 | 1\r\n CZ0312 | 1\r\n CZ0313 | 1\r\n(16 řádek)\r\n\r\nNyní jste připojeni k databázi \"db_en\" jako uživatel \"postgres\".\r\nCREATE ROLE\r\nGRANT\r\nNyní jste připojeni k databázi \"db_join\" jako uživatel \"postgres\".\r\nCREATE EXTENSION\r\nCREATE SERVER\r\nCREATE USER MAPPING\r\nCREATE SCHEMA\r\nIMPORT FOREIGN SCHEMA\r\n label | count \r\n--------+-------\r\n CZ0100 | 1\r\n CZ0201 | 1\r\n CZ0202 | 1\r\n CZ0203 | 1\r\n CZ0204 | 1\r\n CZ0205 | 1\r\n CZ0206 | 1\r\n CZ0207 | 1\r\n CZ0208 | 1\r\n CZ0209 | 1\r\n 
CZ020A | 1\r\n CZ020B | 1\r\n CZ020C | 1\r\n CZ0311 | 1\r\n CZ0312 | 1\r\n CZ0313 | 1\r\n(16 řádek)\r\n\r\nNyní jste připojeni k databázi \"db_cz\" jako uživatel \"postgres\".\r\nCREATE ROLE\r\nGRANT\r\nNyní jste připojeni k databázi \"db_join\" jako uživatel \"postgres\".\r\nCREATE SERVER\r\nCREATE USER MAPPING\r\nCREATE SCHEMA\r\nIMPORT FOREIGN SCHEMA\r\n label | count \r\n--------+-------\r\n CZ0100 | 1\r\n CZ020A | 1\r\n CZ020B | 1\r\n CZ020C | 1\r\n CZ0201 | 1\r\n CZ0202 | 1\r\n CZ0203 | 1\r\n CZ0204 | 1\r\n CZ0205 | 1\r\n CZ0206 | 1\r\n CZ0207 | 1\r\n CZ0208 | 1\r\n CZ0209 | 1\r\n CZ0311 | 1\r\n CZ0312 | 1\r\n CZ0313 | 1\r\n(16 řádek)\r\n\r\n QUERY PLAN \n \r\n-----------------------------------------------------------------------------------------------\r\n GroupAggregate (cost=203.28..204.16 rows=16 width=40)\r\n Output: cz__t_nuts.label, count(*)\r\n Group Key: cz__t_nuts.label\r\n -> Merge Join (cost=203.28..203.92 rows=16 width=32)\r\n Output: cz__t_nuts.label\r\n Merge Cond: (cz__t_nuts.label = en__t_nuts.label)\r\n -> Foreign Scan on cz.t_nuts cz__t_nuts (cost=101.48..101.84\nrows=16 width=7)\r\n Output: cz__t_nuts.id, cz__t_nuts.label\r\n Remote SQL: SELECT label FROM public.t_nuts ORDER BY label\nASC NULLS LAST\r\n -> Sort (cost=101.80..101.84 rows=16 width=7)\r\n Output: en__t_nuts.label\r\n Sort Key: en__t_nuts.label\r\n -> Foreign Scan on en.t_nuts en__t_nuts \n(cost=100.00..101.48 rows=16 width=7)\r\n Output: en__t_nuts.label\r\n Remote SQL: SELECT label FROM public.t_nuts\r\n(15 řádek)\r\n\r\n label | count \r\n--------+-------\r\n CZ0100 | 1\r\n CZ020A | 1\r\n CZ020B | 1\r\n CZ020C | 1\r\n CZ0311 | 1\r\n CZ0312 | 1\r\n CZ0313 | 1\r\n(7 řádek)\r\n\r\n version \n \r\n------------------------------------------------------------------------------------------------------------------\r\n PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled\nby gcc (Debian 8.3.0-6) 8.3.0, 64-bit\r\n(1 
řádka)\r\n\r\n\r\n------------------------correct output (Alpine, musl libc)----\r\n\r\nDROP DATABASE\r\nDROP DATABASE\r\nDROP DATABASE\r\nDROP ROLE\r\nDROP ROLE\r\nCREATE DATABASE\r\nCREATE DATABASE\r\nCREATE DATABASE\r\nYou are now connected to database \"db_en\" as user \"postgres\".\r\nCREATE TABLE\r\nINSERT 0 16\r\nVACUUM\r\n label | count \r\n--------+-------\r\n CZ0100 | 1\r\n CZ0201 | 1\r\n CZ0202 | 1\r\n CZ0203 | 1\r\n CZ0204 | 1\r\n CZ0205 | 1\r\n CZ0206 | 1\r\n CZ0207 | 1\r\n CZ0208 | 1\r\n CZ0209 | 1\r\n CZ020A | 1\r\n CZ020B | 1\r\n CZ020C | 1\r\n CZ0311 | 1\r\n CZ0312 | 1\r\n CZ0313 | 1\r\n(16 rows)\r\n\r\nYou are now connected to database \"db_cz\" as user \"postgres\".\r\nCREATE TABLE\r\nINSERT 0 16\r\nVACUUM\r\n label | count \r\n--------+-------\r\n CZ0100 | 1\r\n CZ0201 | 1\r\n CZ0202 | 1\r\n CZ0203 | 1\r\n CZ0204 | 1\r\n CZ0205 | 1\r\n CZ0206 | 1\r\n CZ0207 | 1\r\n CZ0208 | 1\r\n CZ0209 | 1\r\n CZ020A | 1\r\n CZ020B | 1\r\n CZ020C | 1\r\n CZ0311 | 1\r\n CZ0312 | 1\r\n CZ0313 | 1\r\n(16 rows)\r\n\r\nYou are now connected to database \"db_en\" as user \"postgres\".\r\nCREATE ROLE\r\nGRANT\r\nYou are now connected to database \"db_join\" as user \"postgres\".\r\nCREATE EXTENSION\r\nCREATE SERVER\r\nCREATE USER MAPPING\r\nCREATE SCHEMA\r\nIMPORT FOREIGN SCHEMA\r\n label | count \r\n--------+-------\r\n CZ0100 | 1\r\n CZ0201 | 1\r\n CZ0202 | 1\r\n CZ0203 | 1\r\n CZ0204 | 1\r\n CZ0205 | 1\r\n CZ0206 | 1\r\n CZ0207 | 1\r\n CZ0208 | 1\r\n CZ0209 | 1\r\n CZ020A | 1\r\n CZ020B | 1\r\n CZ020C | 1\r\n CZ0311 | 1\r\n CZ0312 | 1\r\n CZ0313 | 1\r\n(16 rows)\r\n\r\nYou are now connected to database \"db_cz\" as user \"postgres\".\r\nCREATE ROLE\r\nGRANT\r\nYou are now connected to database \"db_join\" as user \"postgres\".\r\nCREATE SERVER\r\nCREATE USER MAPPING\r\nCREATE SCHEMA\r\nIMPORT FOREIGN SCHEMA\r\n label | count \r\n--------+-------\r\n CZ0100 | 1\r\n CZ0201 | 1\r\n CZ0202 | 1\r\n CZ0203 | 1\r\n CZ0204 | 1\r\n CZ0205 | 1\r\n CZ0206 | 1\r\n CZ0207 | 
1\r\n CZ0208 | 1\r\n CZ0209 | 1\r\n CZ020A | 1\r\n CZ020B | 1\r\n CZ020C | 1\r\n CZ0311 | 1\r\n CZ0312 | 1\r\n CZ0313 | 1\r\n(16 rows)\r\n\r\n QUERY PLAN \n \r\n-----------------------------------------------------------------------------------------------\r\n GroupAggregate (cost=203.28..204.16 rows=16 width=40)\r\n Output: cz__t_nuts.label, count(*)\r\n Group Key: cz__t_nuts.label\r\n -> Merge Join (cost=203.28..203.92 rows=16 width=32)\r\n Output: cz__t_nuts.label\r\n Merge Cond: (cz__t_nuts.label = en__t_nuts.label)\r\n -> Foreign Scan on cz.t_nuts cz__t_nuts (cost=101.48..101.84\nrows=16 width=7)\r\n Output: cz__t_nuts.id, cz__t_nuts.label\r\n Remote SQL: SELECT label FROM public.t_nuts ORDER BY label\nASC NULLS LAST\r\n -> Sort (cost=101.80..101.84 rows=16 width=7)\r\n Output: en__t_nuts.label\r\n Sort Key: en__t_nuts.label\r\n -> Foreign Scan on en.t_nuts en__t_nuts \n(cost=100.00..101.48 rows=16 width=7)\r\n Output: en__t_nuts.label\r\n Remote SQL: SELECT label FROM public.t_nuts\r\n(15 rows)\r\n\r\n label | count \r\n--------+-------\r\n CZ0100 | 1\r\n CZ0201 | 1\r\n CZ0202 | 1\r\n CZ0203 | 1\r\n CZ0204 | 1\r\n CZ0205 | 1\r\n CZ0206 | 1\r\n CZ0207 | 1\r\n CZ0208 | 1\r\n CZ0209 | 1\r\n CZ020A | 1\r\n CZ020B | 1\r\n CZ020C | 1\r\n CZ0311 | 1\r\n CZ0312 | 1\r\n CZ0313 | 1\r\n(16 rows)\r\n\r\n version \n \r\n---------------------------------------------------------------------------------------\r\n PostgreSQL 12.4 on x86_64-pc-linux-musl, compiled by gcc (Alpine 9.3.0)\n9.3.0, 64-bit\r\n(1 row)",
"msg_date": "Mon, 17 Aug 2020 12:02:41 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #16583: merge join on tables with different DB collation behind\n postgres_fdw fails"
},
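The 7-of-16 result in the report above follows directly from merge join's assumption that both inputs are sorted the same way. A minimal sketch in plain Python (labels taken from the report; the "cs_CZ-like" sort key is only an approximation of the glibc ordering shown in the wrong output, not the real locale) replays the effect:

```python
def merge_join(left, right):
    """Textbook merge join over two inputs assumed sorted identically.

    Both sides must be sorted under the comparison used here (plain
    codepoint order); if one side was sorted under a different collation,
    equal keys can be silently skipped.
    """
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] == right[j]:
            out.append(left[i])
            i += 1
            j += 1
        elif left[i] < right[j]:
            i += 1
        else:
            j += 1
    return out

labels = ["CZ0100"] + [f"CZ020{c}" for c in "123456789ABC"] + ["CZ0311", "CZ0312", "CZ0313"]

# en_US-side order: digits before letters (same as codepoint order here)
en_sorted = sorted(labels)
# cs_CZ-side order as shown in the report: letters sort before digits
cz_sorted = sorted(labels, key=lambda s: [(c.isdigit(), c) for c in s])

matched = merge_join(en_sorted, cz_sorted)
print(len(matched))  # 7 of 16 rows survive, as in the bug report
```

The seven survivors are exactly the labels whose relative position is the same under both orderings (CZ0100, CZ020A..C, CZ0311..13); the rest are skipped because the join's comparisons disagree with the right-hand input's sort order.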
{
"msg_contents": "PG Bug reporting form <noreply@postgresql.org> writes:\n> Joining two identical tables placed on separate DBs with different collation\n> accessed through postgres_fdw failed when joined with merge join. Some\n> records are missing (7 vs. 16 rows in example) in output. See this snippet\n> https://gitlab.com/-/snippets/2004522 (or code pasted below) for psql script\n> reproducing error also with expected output (working fine on alpine linux).\n\nSo I think what is happening here is that postgres_fdw's version of\nIMPORT FOREIGN SCHEMA translates \"COLLATE default\" on the remote\nserver to \"COLLATE default\" on the local one, which of course is\na big fail if the defaults don't match. That allows the local\nplanner to believe that remote ORDER BYs on the two foreign tables\nwill give compatible results, causing the merge join to not work\nvery well at all.\n\nWe probably need to figure out some way of substituting the remote\ndatabase's actual lc_collate setting when we see \"COLLATE default\".\n\nI'm also thinking that the documentation is way too cavalier about\ndismissing non-matching collation names by just saying that you\ncan turn off import_collate. 
The fact is that doing so is likely\nto be disastrous, the more so the more optimization intelligence\nwe add to postgres_fdw.\n\nI wonder if we could do something like this:\n\n* Change postgresImportForeignSchema() as above, so that it will never\napply \"COLLATE default\" to an imported column, except in the case\nwhere you turn off import_collate.\n\n* In postgres_fdw planning, treat \"COLLATE default\" on a foreign table\ncolumn as meaning \"we don't know the collation\"; never believe that\nthat column can be ordered in a way that matches any local collation.\n(It'd be better perhaps if there were an explicit way to say \"COLLATE\nunknown\", but I hesitate to invent such a concept in general.)\n\n* Document that in manual creation of a postgres_fdw foreign table\nwith a text column, you need to explicitly write the correct collation\nif you want the best query plans to be generated.\n\nThis seems like too big a behavioral change to consider back-patching,\nunfortunately.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Aug 2020 11:26:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "I wrote:\n> So I think what is happening here is that postgres_fdw's version of\n> IMPORT FOREIGN SCHEMA translates \"COLLATE default\" on the remote\n> server to \"COLLATE default\" on the local one, which of course is\n> a big fail if the defaults don't match. That allows the local\n> planner to believe that remote ORDER BYs on the two foreign tables\n> will give compatible results, causing the merge join to not work\n> very well at all.\n\n> We probably need to figure out some way of substituting the remote\n> database's actual lc_collate setting when we see \"COLLATE default\".\n\nHere's a draft patch for that part. There's a few things to quibble\nabout:\n\n* It tests for \"COLLATE default\" by checking whether pg_collation.oid\nis DEFAULT_COLLATION_OID, thus assuming that that OID will never change.\nI think this is safer than checking the collation name, but maybe\nsomebody else would have a different opinion? Another idea is to check\nwhether collprovider is 'd', but that only works with v10 and up.\n\n* It might not be able to find a remote collation matching the database's\ndatcollate/datctype. As coded, we'll end up creating the local column\nwith \"COLLATE default\", putting us back in the same hurt we're in now.\nI think this is okay given the other planned change to interpret \"COLLATE\ndefault\" as \"we don't know what collation this is\". In any case it's hard\nto see what else we could do, other than fail entirely.\n\n* Alternatively, it might find more than one such remote collation;\nindeed that's the norm, eg we'd typically find both \"en_US\" and\n\"en_US.utf8\", or the like. I made it choose the shortest collation\nname in such cases, but maybe there is a case for the longest?\nI don't much want it to pick \"ucs_basic\" over \"C\", though.\n\n* The whole thing is certain to fall over whenever we find a way to\nallow ICU collations as database defaults. 
While we can presumably\nfix the query when we make that change, existing postgres_fdw releases\nwould not work against a newer server. Probably there's little to be\ndone about this, either.\n\n* As shown by the expected-output changes, there are some test cases\nthat expose that we're not picking the default collation anymore.\nThat creates a testing problem: this can't be committed as-is because\nit'll fail with any other locale environment than what the expected\nfile was made with. We could lobotomize the test cases to not print\nthe column collation, but then we're not really verifying that this\ncode does what it's supposed to. Not sure what the best compromise is.\n\nComments?\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 17 Aug 2020 18:37:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
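The tie-break described above for the more-than-one-candidate case can be sketched as follows (a toy model in Python, not the patch's C code; the candidate names are illustrative):

```python
def pick_collation(candidates):
    """Among remote collations whose collcollate/collctype match the remote
    database's defaults, prefer the shortest name (e.g. "en_US" over
    "en_US.utf8"), breaking length ties alphabetically for determinism."""
    return min(candidates, key=lambda name: (len(name), name))

print(pick_collation(["en_US.utf8", "en_US", "ucs_basic"]))  # en_US
```

Note that the shortest-name rule also delivers the preference stated above of "C" over "ucs_basic".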
{
"msg_contents": "I wrote:\n>> So I think what is happening here is that postgres_fdw's version of\n>> IMPORT FOREIGN SCHEMA translates \"COLLATE default\" on the remote\n>> server to \"COLLATE default\" on the local one, which of course is\n>> a big fail if the defaults don't match. That allows the local\n>> planner to believe that remote ORDER BYs on the two foreign tables\n>> will give compatible results, causing the merge join to not work\n>> very well at all.\n\nHere's a full patch addressing this issue. I decided that the best\nway to address the test-instability problem is to explicitly give\ncollations to all the foreign-table columns for which it matters\nin the postgres_fdw test. (For portability's sake, that has to be\n\"C\" or \"POSIX\"; I mostly used \"C\".) Aside from ensuring that the\ntest still passes with some other prevailing locale, this seems like\na good idea since we'll then be testing the case we are encouraging\nusers to use.\n\nAnd indeed, it immediately turned up a new problem: if we explicitly\nassign a collation to a foreign-table column c, the system won't\nship WHERE clauses as simple as \"c = 'foo'\" to the remote. 
This\nsurprised me, but the reason turned out to be that what postgres_fdw\nis actually seeing is something like\n\n {OPEXPR \n :opno 98 \n :opfuncid 67 \n :opresulttype 16 \n :opretset false \n :opcollid 0 \n :inputcollid 950 \n :args (\n {VAR \n :varno 6 \n :varattno 4 \n :vartype 25 \n :vartypmod -1 \n :varcollid 950 \n :varlevelsup 0 \n :varnosyn 6 \n :varattnosyn 4 \n :location 171\n }\n {RELABELTYPE \n :arg \n {CONST \n :consttype 25 \n :consttypmod -1 \n :constcollid 100 \n :constlen -1 \n :constbyval false \n :constisnull false \n :location 341 \n :constvalue 9 [ 36 0 0 0 48 48 48 48 49 ]\n }\n :resulttype 25 \n :resulttypmod -1 \n :resultcollid 950 \n :relabelformat 2 \n :location -1\n }\n )\n :location -1\n }\n\nthat is, the constant is being explicitly relabeled with the correct\ncollation, and thus is_foreign_expr() thinks the collation shown by\nthe RelabelType node is an unsafely-introduced collation.\n\nWhat I did about this was to change the recursion rule in\nforeign_expr_walker() so that merging a safely-derived collation with\nthe same collation unsafely derived is considered safe. I think this\nis all right, and it allows us to accept some cases that previously\nwere rejected as unsafe. But I might be missing something.\n\n(BTW, there's an independent bug here, which is that we're getting\na tree of the above shape rather than a simple Const with the\nappropriate collation; that is, this tree isn't fully const-folded.\nThis is a bug in canonicalize_ec_expression, which I'll go fix\nseparately. But it won't affect the problem at hand.)\n\nThis seems like a sufficiently large change in postgres_fdw's\nbehavior to require review, so I'll go add this to the next CF.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 18 Aug 2020 16:09:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},

{
"msg_contents": "On 17.08.2020 17:26, Tom Lane wrote:\n> PG Bug reporting form <noreply@postgresql.org> writes:\n>> Joining two identical tables placed on separate DBs with different collation\n>> accessed through postgres_fdw failed when joined with merge join. Some\n>> records are missing (7 vs. 16 rows in example) in output. See this snippet\n>> https://gitlab.com/-/snippets/2004522 (or code pasted below) for psql script\n>> reproducing error also with expected output (working fine on alpine linux).\n> So I think what is happening here is that postgres_fdw's version of\n> IMPORT FOREIGN SCHEMA translates \"COLLATE default\" on the remote\n> server to \"COLLATE default\" on the local one, which of course is\n> a big fail if the defaults don't match. That allows the local\n> planner to believe that remote ORDER BYs on the two foreign tables\n> will give compatible results, causing the merge join to not work\n> very well at all.\n\nI am just wondering: if it is bug in IMPORT FOREIGN SCHEMA, how it is \npossible the bug is not present [1] when provided psql script [2] is run \non Alpine Linux? I suppose, both Debian and Alpine has the same IMPORT \nFOREIGN SCHEMA behavior (both has PG12.4). But differs in glibc vs. musl \nlibc. Is it possible, there is also something differing in those \nlibraries with respect to cs.CZ-UTF8?\n\nBest regards, Jiří.\n\n[1] https://gitlab.com/-/snippets/2004522#note_396751634\n\n[2] https://gitlab.com/-/snippets/2004522\n\n\n\n",
"msg_date": "Wed, 19 Aug 2020 07:39:36 +0200",
"msg_from": "Jiří Fejfar <jurafejfar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "Jiří Fejfar <jurafejfar@gmail.com> writes:\n> I am just wondering: if it is bug in IMPORT FOREIGN SCHEMA, how it is \n> possible the bug is not present [1] when provided psql script [2] is run \n> on Alpine Linux?\n\n[ shrug ] Could easy be that Alpine distributes dumbed-down locale\ndefinitions in which the sort order isn't actually any different\nbetween those two locales.  Did you check what the sort order of\nyour test data looks like in each case?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Aug 2020 01:53:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "On Wed, 19 Aug 2020 at 07:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Jiří Fejfar <jurafejfar@gmail.com> writes:\n> > I am just wondering: if it is bug in IMPORT FOREIGN SCHEMA, how it is\n> > possible the bug is not present [1] when provided psql script [2] is run\n> > on Alpine Linux?\n>\n> [ shrug ]  Could easy be that Alpine distributes dumbed-down locale\n> definitions in which the sort order isn't actually any different\n> between those two locales.  Did you check what the sort order of\n> your test data looks like in each case?\n>\n>                         regards, tom lane\n\nOh, I can see on Alpine that even local tables are ordered like with\nen.US-UTF8 even if DB has default cs.CZ-UTF8.\n\npostgres=# \\l\n                                  List of databases\n   Name    |  Owner   | Encoding |   Collate   |    Ctype    |\nAccess privileges\n-----------+----------+----------+-------------+-------------+-----------------------\n db_cz     | postgres | UTF8     | cs_CZ.UTF-8 | cs_CZ.UTF-8 |\n db_en     | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |\n ...\npostgres=# \\c db_cz ;\nYou are now connected to database \"db_cz\" as user \"postgres\".\ndb_cz=# select * from t_nuts order by label;\n id  | label\n----+--------\n  1 | CZ0100\n  2 | CZ0201\n...\n\n 11 | CZ020A\n 12 | CZ020B\n 13 | CZ020C\n\n...\n\nIt is mentioned in Alpine docker docs [1] that \"Alpine-based variants\ndo not support locales;\".\n\nThanks, J.\n\n[1] https://hub.docker.com/_/postgres\n\n\n",
"msg_date": "Wed, 19 Aug 2020 08:08:42 +0200",
"msg_from": "Jiří Fejfar <jurafejfar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "On 2020-08-18 22:09, Tom Lane wrote:\n> Here's a full patch addressing this issue. I decided that the best\n> way to address the test-instability problem is to explicitly give\n> collations to all the foreign-table columns for which it matters\n> in the postgres_fdw test. (For portability's sake, that has to be\n> \"C\" or \"POSIX\"; I mostly used \"C\".) Aside from ensuring that the\n> test still passes with some other prevailing locale, this seems like\n> a good idea since we'll then be testing the case we are encouraging\n> users to use.\n\nI have studied this patch and this functionality. I don't think \ncollation differences between remote and local instances are handled \nsufficiently. This bug report and patch addresses one particular case, \nwhere the database-wide collation of the remote and local instance are \ndifferent. But it doesn't handle cases like the same collation name \ndoing different things, having different versions, or different \nattributes. This probably works currently because the libc collations \ndon't have much functionality like that, but there is a variety of work \nconceived (or, in the case of version tracking, already done since the \nbug was first discussed) that would break that.\n\nTaking a step back, I think there are only two ways this could really \nwork: Either, the admin makes a promise that all the collations match on \nall the instances; then the planner can take advantage of that. Or, \nthere is no such promise, and then the planner can't. I don't \nunderstand what the currently implemented approach is. It appears to be \nsomething in the middle, where certain representations are made that \ncertain things might match, and then there is some nontrivial code that \nanalyzes expressions whether they conform to those rules. 
As you said, \nthe description of the import_collate option is kind of hand-wavy about \nall this.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n",
"msg_date": "Thu, 28 Jan 2021 13:31:46 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I have studied this patch and this functionality. I don't think \n> collation differences between remote and local instances are handled \n> sufficiently. This bug report and patch addresses one particular case, \n> where the database-wide collation of the remote and local instance are \n> different. But it doesn't handle cases like the same collation name \n> doing different things, having different versions, or different \n> attributes.\n\nYeah, agreed. I don't think it's practical to have a 100% solution.\nI'd make a couple of points:\n\n* The design philosophy of postgres_fdw, to the extent it has one,\nis that it's the user's responsibility to make sure that the local\ndeclaration of a foreign table is a faithful model of the actual\nremote object. There are certain variances you can get away with,\nbut in general, if it breaks it's your fault. (Admittedly, if the\nlocal declaration was created via IMPORT FOREIGN SCHEMA, we would\nlike to be sure that it's right without help. But there's only\nso much we can do there. There are already plenty of ways to\nfool IMPORT FOREIGN SCHEMA anyway, for example if the same type\nname refers to something different on the two systems.)\n\n* Not being able to ship any qual conditions involving collatable\ndatatypes seems like an absolutely unacceptable outcome. Thus,\nI don't buy your alternative of not letting the planner make\nany assumptions at all about compatibility of remote collations.\n\nI think that what this patch is basically doing is increasing the\nvisibility of collation compatibility as something that postgres_fdw\nusers need to take into account. Sure, it's not a 100% solution,\nbut it improves the situation, and it seems like we'd have to do\nthis anyway along the road to any better solution.\n\nIf you've got ideas about how to improve things further, by all\nmeans let's discuss that ... 
but let's not make the perfect be\nthe enemy of the good.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Jan 2021 11:44:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "Rebased over b663a4136 --- no substantive changes, just keeping\nthe cfbot happy.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 07 Feb 2021 02:55:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nGreetings, \r\nI learned about the patch and read your discussions. I'm not sure why this patch has not been discussed now. In short, I think it's beneficial to submit it as a temporary solution.\r\nAnother thing I want to know is whether these codes can be simplified:\r\n-\tif (state > outer_cxt->state)\r\n+\tif (collation == outer_cxt->collation &&\r\n+\t\t((state == FDW_COLLATE_UNSAFE &&\r\n+\t\t outer_cxt->state == FDW_COLLATE_SAFE) ||\r\n+\t\t (state == FDW_COLLATE_SAFE &&\r\n+\t\t outer_cxt->state == FDW_COLLATE_UNSAFE)))\r\n+\t{\r\n+\t\touter_cxt->state = FDW_COLLATE_SAFE;\r\n+\t}\r\n+\telse if (state > outer_cxt->state)\r\n\r\nIf the state is determined by the collation, when the collations are equal, do we just need to judge the state not equal to FDW_COLLATE_NONE?",
"msg_date": "Wed, 03 Mar 2021 08:41:49 +0000",
"msg_from": "Neil Chen <carpenter.nail.cz@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
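The control flow of the diff quoted in this review can be modeled in miniature (hypothetical Python, not the actual foreign_expr_walker() C code): when both sides carry the same collation and one side derived it safely from a foreign Var, the unsafe labeling on the other side is overridden; otherwise the higher ("worse") state wins as before.

```python
from enum import IntEnum

class FDWCollateState(IntEnum):
    NONE = 0    # expression yields no collation
    SAFE = 1    # collation derives from a foreign Var
    UNSAFE = 2  # collation was introduced locally

def merge_collate_state(outer_state, outer_coll, inner_state, inner_coll):
    """Toy model of merging an inner node's collation state into the outer
    context, following the quoted diff."""
    # Special case added by the patch: identical collations where one side
    # is SAFE and the other UNSAFE merge to SAFE.
    if (inner_coll == outer_coll and
            {inner_state, outer_state} == {FDWCollateState.SAFE,
                                           FDWCollateState.UNSAFE}):
        return FDWCollateState.SAFE, outer_coll
    # Otherwise keep the "worse" of the two states, as before the patch.
    if inner_state > outer_state:
        return inner_state, inner_coll
    return outer_state, outer_coll
```

Under this toy model, the simplification asked about here (collations equal plus any state above NONE yields SAFE) agrees with the diff for the SAFE/UNSAFE pairs it spells out, but would additionally bless an UNSAFE/UNSAFE pair, which the quoted diff does not do.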
{
"msg_contents": "On Wed, Mar 3, 2021 at 1:42 PM Neil Chen <carpenter.nail.cz@gmail.com>\nwrote:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: tested, passed\n>\n> Greetings,\n> I learned about the patch and read your discussions. I'm not sure why this\n> patch has not been discussed now. In short, I think it's beneficial to\n> submit it as a temporary solution.\n> Another thing I want to know is whether these codes can be simplified:\n> - if (state > outer_cxt->state)\n> + if (collation == outer_cxt->collation &&\n> + ((state == FDW_COLLATE_UNSAFE &&\n> + outer_cxt->state == FDW_COLLATE_SAFE) ||\n> + (state == FDW_COLLATE_SAFE &&\n> + outer_cxt->state == FDW_COLLATE_UNSAFE)))\n> + {\n> + outer_cxt->state = FDW_COLLATE_SAFE;\n> + }\n> + else if (state > outer_cxt->state)\n>\n> If the state is determined by the collation, when the collations are\n> equal, do we just need to judge the state not equal to FDW_COLLATE_NONE?\n\n\nThe patch is failing the regression, @Tom Lane <tgl@sss.pgh.pa.us> can you\nplease take a look at that.\n\nhttps://cirrus-ci.com/task/4593497492684800\n\n============== running regression test queries ==============\ntest postgres_fdw ... FAILED 2782 ms\n============== shutting down postmaster ==============\n======================\n1 of 1 tests failed.\n======================\nThe differences that caused some tests to fail can be viewed in the\nfile \"/tmp/cirrus-ci-build/contrib/postgres_fdw/regression.diffs\". 
A copy\nof the test summary that you see\nabove is saved in the file\n\"/tmp/cirrus-ci-build/contrib/postgres_fdw/regression.out\".\n\n\n-- \nIbrar Ahmed",
"msg_date": "Tue, 13 Jul 2021 16:07:41 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "Ibrar Ahmed <ibrar.ahmad@gmail.com> writes:\n> The patch is failing the regression, @Tom Lane <tgl@sss.pgh.pa.us> can you\n> please take a look at that.\n\nSeems to just need an update of the expected-file to account for test\ncases added recently. (I take no position on whether the new results\nare desirable; some of these might be breaking the intent of the case.\nBut this should quiet the cfbot anyway.)\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 13 Jul 2021 16:41:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 1:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ibrar Ahmed <ibrar.ahmad@gmail.com> writes:\n> > The patch is failing the regression, @Tom Lane <tgl@sss.pgh.pa.us> can\n> you\n> > please take a look at that.\n>\n> Seems to just need an update of the expected-file to account for test\n> cases added recently.  (I take no position on whether the new results\n> are desirable; some of these might be breaking the intent of the case.\n> But this should quiet the cfbot anyway.)\n>\n> regards, tom lane\n>\n>\nThanks for the update.\n\nThe test case was added by commit \"Add support for asynchronous execution\"\n\"27e1f14563cf982f1f4d71e21ef247866662a052\" by Etsuro Fujita. He can comment\nwhether the new results are desirable or not.\n\n\n\n-- \nIbrar Ahmed",
"msg_date": "Thu, 15 Jul 2021 00:16:31 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 4:17 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> On Wed, Jul 14, 2021 at 1:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Seems to just need an update of the expected-file to account for test\n>> cases added recently. (I take no position on whether the new results\n>> are desirable; some of these might be breaking the intent of the case.\n>> But this should quiet the cfbot anyway.)\n\n> The test case was added by commit \"Add support for asynchronous execution\"\n> \"27e1f14563cf982f1f4d71e21ef247866662a052\" by Etsuro Fujita. He can comment\n> whether the new results are desirable or not.\n\nThe new results aren't what I intended. I'll update the patch to\navoid that by modifying the original test cases properly, if there are\nno objections.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 15 Jul 2021 18:35:33 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 2:35 PM Etsuro Fujita <etsuro.fujita@gmail.com>\nwrote:\n\n> On Thu, Jul 15, 2021 at 4:17 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> > On Wed, Jul 14, 2021 at 1:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Seems to just need an update of the expected-file to account for test\n> >> cases added recently.  (I take no position on whether the new results\n> >> are desirable; some of these might be breaking the intent of the case.\n> >> But this should quiet the cfbot anyway.)\n>\n> > The test case was added by commit \"Add support for asynchronous\n> execution\"\n> > \"27e1f14563cf982f1f4d71e21ef247866662a052\" by Etsuro Fujita. He can\n> comment\n> > whether the new results are desirable or not.\n>\n> The new results aren't what I intended.  I'll update the patch to\n> avoid that by modifying the original test cases properly, if there are\n> no objections.\n>\n> Best regards,\n> Etsuro Fujita\n>\n\nThanks Etsuro,\n\nI have changed the status to \"Waiting On Author\", because patch need\nchanges.\nEtsuro, can you make yourself a reviewer/co-author to keep track of that?\n\n\n-- \nIbrar Ahmed",
"msg_date": "Thu, 15 Jul 2021 18:02:28 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> On Thu, Jul 15, 2021 at 4:17 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>> On Wed, Jul 14, 2021 at 1:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Seems to just need an update of the expected-file to account for test\n>>> cases added recently. (I take no position on whether the new results\n>>> are desirable; some of these might be breaking the intent of the case.\n>>> But this should quiet the cfbot anyway.)\n\n>> The test case was added by commit \"Add support for asynchronous execution\"\n>> \"27e1f14563cf982f1f4d71e21ef247866662a052\" by Etsuro Fujita. He can comment\n>> whether the new results are desirable or not.\n\n> The new results aren't what I intended. I'll update the patch to\n> avoid that by modifying the original test cases properly, if there are\n> no objections.\n\nPlease follow up on that sometime? In the meantime, here is a rebase\nover aa769f80e and 2dc53fe2a, to placate the cfbot.\n\nThe real reason that this hasn't gotten committed is that I remain\npretty uncomfortable about whether it's an acceptable solution to\nthe problem. Suddenly asking people to plaster COLLATE clauses\non all their textual remote columns seems like a big compatibility\ngotcha. However, I lack any ideas about a less unpleasant solution.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 01 Sep 2021 16:42:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "On Thu, Sep 2, 2021 at 5:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> > On Thu, Jul 15, 2021 at 4:17 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> >> On Wed, Jul 14, 2021 at 1:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> Seems to just need an update of the expected-file to account for test\n> >>> cases added recently. (I take no position on whether the new results\n> >>> are desirable; some of these might be breaking the intent of the case.\n> >>> But this should quiet the cfbot anyway.)\n>\n> >> The test case was added by commit \"Add support for asynchronous execution\"\n> >> \"27e1f14563cf982f1f4d71e21ef247866662a052\" by Etsuro Fujita. He can comment\n> >> whether the new results are desirable or not.\n>\n> > The new results aren't what I intended. I'll update the patch to\n> > avoid that by modifying the original test cases properly, if there are\n> > no objections.\n>\n> Please follow up on that sometime?\n\nWill do in this commitfest.\n\n> In the meantime, here is a rebase\n> over aa769f80e and 2dc53fe2a, to placate the cfbot.\n\nThanks for the rebase!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 2 Sep 2021 11:56:25 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "On Thu, Sep 2, 2021 at 5:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The real reason that this hasn't gotten committed is that I remain\n> pretty uncomfortable about whether it's an acceptable solution to\n> the problem. Suddenly asking people to plaster COLLATE clauses\n> on all their textual remote columns seems like a big compatibility\n> gotcha.\n\nI think so too. I reviewed the patch:\n\n /*\n * If the Var is from the foreign table, we consider its\n- * collation (if any) safe to use. If it is from another\n+ * collation (if any) safe to use, *unless* it's\n+ * DEFAULT_COLLATION_OID. We treat that as meaning \"we don't\n+ * know which collation this is\". If it is from another\n * table, we treat its collation the same way as we would a\n * Param's collation, ie it's not safe for it to have a\n * non-default collation.\n@@ -350,7 +352,12 @@ foreign_expr_walker(Node *node,\n\n /* Else check the collation */\n collation = var->varcollid;\n- state = OidIsValid(collation) ? FDW_COLLATE_SAFE :\nFDW_COLLATE_NONE;\n+ if (collation == InvalidOid)\n+ state = FDW_COLLATE_NONE;\n+ else if (collation == DEFAULT_COLLATION_OID)\n+ state = FDW_COLLATE_UNSAFE;\n+ else\n+ state = FDW_COLLATE_SAFE;\n\nOne thing I noticed about this change is:\n\nexplain (verbose, costs off) select * from ft3 order by f2;\n QUERY PLAN\n---------------------------------------------------------\n Sort\n Output: f1, f2, f3\n Sort Key: ft3.f2\n -> Foreign Scan on public.ft3\n Output: f1, f2, f3\n Remote SQL: SELECT f1, f2, f3 FROM public.loct3\n(6 rows)\n\nwhere ft3 is defined as in the postgres_fdw regression test (see the\nsection “test handling of collations”). For this query, the sort is\ndone locally, but I think it should be done remotely, or an error\nshould be raised, as we don’t know the collation assigned to the\ncolumn “f2”. 
So I think we need to do something about this.\n\nHaving said that, I think another option for this would be to leave the\ncode as-is; assume that 1) the foreign var has \"COLLATE default\", not\nan unknown collation, when labeled with \"COLLATE default\", and 2)\n\"COLLATE default\" on the local database matches \"COLLATE default\" on\nthe remote database. This would be the same as before, so we could\navoid the concern mentioned above. I agree with the\npostgresImportForeignSchema() change, except that it silently creates\na local column with \"COLLATE default\" if that function can’t find a\nremote collation matching the database's datcollate/datctype when\nseeing \"COLLATE default\"; in that case I think an error should be\nraised to prompt the user to check the settings for the remote server\nand/or define foreign tables manually with collations that match the\nremote side. Maybe I’m missing something, though.\n\nAnyway, here is a patch created on top of your patch to modify\nasync-related test cases to work as intended. I’m also attaching your\npatch to make the cfbot quiet.\n\nSorry for the delay.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Fri, 10 Sep 2021 00:45:32 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
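{
"msg_contents": "[Editorial note] The three-way classification in the patch hunk quoted above can be modeled as a small decision function. The following is a hedged Python sketch (the constant names mirror postgres_fdw's FDW_COLLATE_* states, but the function name and OID values here are illustrative; the real logic is C code in foreign_expr_walker() in contrib/postgres_fdw/deparse.c):\n\n```python\n# Hypothetical Python model of postgres_fdw's collation-safety states.\nFDW_COLLATE_NONE = \"none\"      # expression is of a noncollatable type\nFDW_COLLATE_SAFE = \"safe\"      # collation derives from a foreign Var\nFDW_COLLATE_UNSAFE = \"unsafe\"  # collation derives from something else\n\nINVALID_OID = 0\nDEFAULT_COLLATION_OID = 100  # pg_collation's \"default\" row\n\ndef classify_foreign_var_collation(collation_oid):\n    \"\"\"Classify a foreign-table Var's collation under the proposed patch:\n    no collation is fine, an explicit collation is shippable, but the\n    database default is treated as \"we don't know which collation this\n    is\" and therefore unsafe to ship.\"\"\"\n    if collation_oid == INVALID_OID:\n        return FDW_COLLATE_NONE\n    if collation_oid == DEFAULT_COLLATION_OID:\n        return FDW_COLLATE_UNSAFE\n    return FDW_COLLATE_SAFE\n```\n\nUnder the pre-patch rule, any valid collation OID (including the default) maps to FDW_COLLATE_SAFE, which is exactly the assumption the thread is questioning."
},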
{
"msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> On Thu, Sep 2, 2021 at 5:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The real reason that this hasn't gotten committed is that I remain\n>> pretty uncomfortable about whether it's an acceptable solution to\n>> the problem. Suddenly asking people to plaster COLLATE clauses\n>> on all their textual remote columns seems like a big compatibility\n>> gotcha.\n\n> I think so too.\n\nYeah :-(. It seems like a very unpleasant change.\n\n> Having said that, I think another option for this would be to left the\n> code as-is; assume that 1) the foreign var has \"COLLATE default”, not\n> an unknown collation, when labeled with \"COLLATE default”, and 2)\n> \"COLLATE default” on the local database matches \"COLLATE default” on\n> the remote database.\n\nThe fundamental complaint that started this thread was exactly that\nassumption (2) isn't safe. So it sounds to me like you're proposing\nthat we do nothing, which isn't a great answer either. I suppose\nwe could try documenting our way out of this, but people will\ncontinue to get bit because they won't read or won't understand\nthe limitation.\n\nI'd be happier if we had a way to check whether the local and remote\ndefault collations are compatible. But it seems like that's a big ask,\nespecially in cross-operating-system situations.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Sep 2021 12:00:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "On Fri, Sep 10, 2021 at 1:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> > Having said that, I think another option for this would be to left the\n> > code as-is; assume that 1) the foreign var has \"COLLATE default”, not\n> > an unknown collation, when labeled with \"COLLATE default”, and 2)\n> > \"COLLATE default” on the local database matches \"COLLATE default” on\n> > the remote database.\n>\n> The fundamental complaint that started this thread was exactly that\n> assumption (2) isn't safe. So it sounds to me like you're proposing\n> that we do nothing, which isn't a great answer either. I suppose\n> we could try documenting our way out of this, but people will\n> continue to get bit because they won't read or won't understand\n> the limitation.\n\nYeah, but I think it’s the user’s responsibility to make sure that the\nlocal and remote default collations match if labeling collatable\ncolumns with “COLLATE default” when defining foreign tables manually\nIMO.\n\n> I'd be happier if we had a way to check whether the local and remote\n> default collations are compatible. But it seems like that's a big ask,\n> especially in cross-operating-system situations.\n\nAgreed.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 10 Sep 2021 20:42:27 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "On Fri, Sep 10, 2021 at 8:42 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Fri, Sep 10, 2021 at 1:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> > > Having said that, I think another option for this would be to left the\n> > > code as-is; assume that 1) the foreign var has \"COLLATE default”, not\n> > > an unknown collation, when labeled with \"COLLATE default”, and 2)\n> > > \"COLLATE default” on the local database matches \"COLLATE default” on\n> > > the remote database.\n> >\n> > The fundamental complaint that started this thread was exactly that\n> > assumption (2) isn't safe. So it sounds to me like you're proposing\n> > that we do nothing, which isn't a great answer either. I suppose\n> > we could try documenting our way out of this, but people will\n> > continue to get bit because they won't read or won't understand\n> > the limitation.\n>\n> Yeah, but I think it’s the user’s responsibility to make sure that the\n> local and remote default collations match if labeling collatable\n> columns with “COLLATE default” when defining foreign tables manually\n> IMO.\n\nOne thing I noticed is that collatable operators/functions sent to the\nremote might also cause an unexpected result when the default\ncollations are not compatible. Consider this example (even with your\npatch):\n\nexplain verbose select chr(c1) from ft1 order by chr(c1);\n QUERY PLAN\n------------------------------------------------------------------------\n Foreign Scan on public.ft1 (cost=100.00..212.91 rows=2925 width=32)\n Output: chr(c1)\n Remote SQL: SELECT c1 FROM public.t1 ORDER BY chr(c1) ASC NULLS LAST\n(3 rows)\n\nwhere ft1 is a foreign table with an integer column c1. 
As shown\nabove, the sort using the collatable function chr() is performed\nremotely, so the select query might produce the result in an\nunexpected sort order when the default collations are not compatible.\n\nISTM that we rely heavily on assumption (2).\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 24 Sep 2021 17:36:06 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> One thing I noticed is that collatable operators/functions sent to the\n> remote might also cause an unexpected result when the default\n> collations are not compatible. Consider this example (even with your\n> patch):\n> ...\n> where ft1 is a foreign table with an integer column c1. As shown\n> above, the sort using the collatable function chr() is performed\n> remotely, so the select query might produce the result in an\n> unexpected sort order when the default collations are not compatible.\n\nI don't think there's anything really new there --- it's still assuming\nthat COLLATE \"default\" means the same locally and remotely.\n\nAs a short-term answer, I propose that we apply (and back-patch) the\nattached documentation changes.\n\nLonger-term, it seems like we really have to be able to represent\nthe notion of a remote column that has an \"unknown\" collation (that\nis, one that doesn't match any local collation, or at least is not\nknown to do so). My previous patch essentially makes \"default\" act\nthat way, but conflating \"unknown\" with \"default\" has too many\ndownsides. A rough sketch for making this happen is:\n\n1. Create a built-in \"unknown\" entry in pg_collation. Insert some\nhack or other to prevent this from being applied to any real, local\ncolumn; but allow foreign-table columns to have it.\n\n2. Apply mods, probably fairly similar to my patch, that prevent\npostgres_fdw from believing that \"unknown\" matches any local\ncollation. (Hm, actually maybe no special code change will be\nneeded here, once \"unknown\" has its own OID?)\n\n3. Change postgresImportForeignSchema so that it can substitute\nthe \"unknown\" collation at need. The exact rules for this could\nbe debated depending on whether you'd rather prioritize safety or\nease-of-use, but I think at least we should use \"unknown\" whenever\nimport_collate is turned off. 
Perhaps there should be an option\nto substitute it for remote \"default\" as well. (Further down the\nroad, perhaps that could be generalized to allow a user-controlled\nmapping from remote to local collations.)\n\nAnyway, I think I should withdraw the upthread patch; we don't\nwant to go that way.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 24 Sep 2021 15:11:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
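{
"msg_contents": "[Editorial note] Step 3 of the sketch above (substituting an \"unknown\" collation during import) could look roughly like the following Python model. All names here are invented for illustration (the real behavior would live in postgresImportForeignSchema(), and the option names are hypothetical):\n\n```python\nUNKNOWN_COLLATION = \"unknown\"  # hypothetical built-in pg_collation entry\n\ndef map_remote_collation(remote_collname, local_collations,\n                         import_collate=True, default_as_unknown=False):\n    \"\"\"Pick the local collation to attach to an imported column.\n\n    Sketch of the rules floated in the thread: with import_collate off,\n    everything becomes \"unknown\"; remote \"default\" optionally becomes\n    \"unknown\" too; otherwise a remote collation must match a local one.\n    \"\"\"\n    if not import_collate:\n        return UNKNOWN_COLLATION\n    if remote_collname == \"default\":\n        return UNKNOWN_COLLATION if default_as_unknown else \"default\"\n    if remote_collname in local_collations:\n        return remote_collname\n    # No local match: the thread suggests erroring rather than silently\n    # falling back to the local default collation.\n    raise LookupError(f\"no local collation matching {remote_collname!r}\")\n```\n\nA user-controlled remote-to-local collation mapping, as mentioned further down the road, would generalize the lookup in the middle of this function."
},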
{
"msg_contents": "On Sat, Sep 25, 2021 at 4:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> > One thing I noticed is that collatable operators/functions sent to the\n> > remote might also cause an unexpected result when the default\n> > collations are not compatible. Consider this example (even with your\n> > patch):\n> > ...\n> > where ft1 is a foreign table with an integer column c1. As shown\n> > above, the sort using the collatable function chr() is performed\n> > remotely, so the select query might produce the result in an\n> > unexpected sort order when the default collations are not compatible.\n>\n> I don't think there's anything really new there --- it's still assuming\n> that COLLATE \"default\" means the same locally and remotely.\n\nI thought that the example showed that we would need to specify a\ncollation per-operation, not only per-foreign-table-column, like\n“ORDER BY chr(c1) COLLATE “foo”” where “foo” is the actual name of a\nlocal collation matching the local server’s default collation, when\nthe default collation doesn’t match the remote server’s default\ncollation, to avoid pushing down operations incorrectly as in the\nexample.\n\n> As a short-term answer, I propose that we apply (and back-patch) the\n> attached documentation changes.\n\nThe attached patch looks good to me.\n\n> Longer-term, it seems like we really have to be able to represent\n> the notion of a remote column that has an \"unknown\" collation (that\n> is, one that doesn't match any local collation, or at least is not\n> known to do so).\n\n+1\n\n> A rough sketch for making this happen is:\n>\n> 1. Create a built-in \"unknown\" entry in pg_collation. Insert some\n> hack or other to prevent this from being applied to any real, local\n> column; but allow foreign-table columns to have it.\n>\n> 2. 
Apply mods, probably fairly similar to my patch, that prevent\n> postgres_fdw from believing that \"unknown\" matches any local\n> collation. (Hm, actually maybe no special code change will be\n> needed here, once \"unknown\" has its own OID?)\n>\n> 3. Change postgresImportForeignSchema so that it can substitute\n> the \"unknown\" collation at need. The exact rules for this could\n> be debated depending on whether you'd rather prioritize safety or\n> ease-of-use, but I think at least we should use \"unknown\" whenever\n> import_collate is turned off. Perhaps there should be an option\n> to substitute it for remote \"default\" as well. (Further down the\n> road, perhaps that could be generalized to allow a user-controlled\n> mapping from remote to local collations.)\n\nIn addition, a) we should detect whether local “default” matches\nremote “default”, and b) if not, we should prevent pushing down\nsort/comparison operations using collatable functions/operators like\n“ORDER BY chr(c1)” in the example (and pushing down those operations\non foreign-table columns labeled with “COLLATE default” if such\nlabeling is allowed)?\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Sat, 25 Sep 2021 22:55:29 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> On Sat, Sep 25, 2021 at 4:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Longer-term, it seems like we really have to be able to represent\n>> the notion of a remote column that has an \"unknown\" collation (that\n>> is, one that doesn't match any local collation, or at least is not\n>> known to do so).\n\n> +1\n\n> In addition, a) we should detect whether local “default” matches\n> remote “default”,\n\nIf we had a way to do that, most of the problem here wouldn't exist.\nI don't believe we can do it reliably. (Maybe we could put it on\nthe user to tell us, say via a foreign-server property?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Sep 2021 09:59:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> On Sat, Sep 25, 2021 at 4:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> As a short-term answer, I propose that we apply (and back-patch) the\n>> attached documentation changes.\n\n> The attached patch looks good to me.\n\nI've pushed that, and marked the current CF entry as returned with\nfeedback. I'm not sure how soon I might get around to trying the\nidea of an explicit \"unknown\" collation ... if anyone wants to take\na stab at that, feel free.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Sep 2021 10:59:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "On Sat, Sep 25, 2021 at 10:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> > On Sat, Sep 25, 2021 at 4:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Longer-term, it seems like we really have to be able to represent\n> >> the notion of a remote column that has an \"unknown\" collation (that\n> >> is, one that doesn't match any local collation, or at least is not\n> >> known to do so).\n>\n> > +1\n>\n> > In addition, a) we should detect whether local “default” matches\n> > remote “default”,\n>\n> If we had a way to do that, most of the problem here wouldn't exist.\n> I don't believe we can do it reliably. (Maybe we could put it on\n> the user to tell us, say via a foreign-server property?)\n\nYeah, I was thinking we could get it from a server option. Also, I\nwas thinking this bit might be back-patchable independently of the\nsolution mentioned above.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Sun, 26 Sep 2021 17:56:57 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
},
{
"msg_contents": "On 9/25/21 06:59, Tom Lane wrote:\n> Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n>> On Sat, Sep 25, 2021 at 4:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Longer-term, it seems like we really have to be able to represent\n>>> the notion of a remote column that has an \"unknown\" collation (that\n>>> is, one that doesn't match any local collation, or at least is not\n>>> known to do so).\n> \n>> +1\n> \n>> In addition, a) we should detect whether local “default” matches\n>> remote “default”,\n> \n> If we had a way to do that, most of the problem here wouldn't exist.\n> I don't believe we can do it reliably. (Maybe we could put it on\n> the user to tell us, say via a foreign-server property?)\n\nA related situation is local and remote servers having different\nversions of glibc - in particular, pre versus post 2.28. I think there's\nstill a major brewing storm here that hasn't yet fully hit the world of\nPG users.\n\nI know PG throws the warning message for queries using the wrong\ncollation library version, but I can't remember - does the query still\nexecute? If so, then glibc 2.28 seems to significnatly raise the\nlikelihood of wrong query results across the entire global PG install base.\n\nDoes PostgreSQL handle cases which involve FDWs (ala this thread) or hot\nstandbys? Would be nice if some approach could be found to solve that\nproblem at the same time as the one discussed on this thread.\n\n-Jeremy\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Fri, 1 Oct 2021 12:37:31 -0700",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16583: merge join on tables with different DB collation\n behind postgres_fdw fails"
}
] |
[
{
"msg_contents": "snapshot scalability: cache snapshots using a xact completion counter.\n\nPrevious commits made it faster/more scalable to compute snapshots. But not\nbuilding a snapshot is still faster. Now that GetSnapshotData() does not\nmaintain RecentGlobal* anymore, that is actually not too hard:\n\nThis commit introduces xactCompletionCount, which tracks the number of\ntop-level transactions with xids (i.e. which may have modified the database)\nthat completed in some form since the start of the server.\n\nWe can avoid rebuilding the snapshot's contents whenever the current\nxactCompletionCount is the same as it was when the snapshot was\noriginally built. Currently this check happens while holding\nProcArrayLock. While it's likely possible to perform the check without\nacquiring ProcArrayLock, it seems better to do that separately /\nlater, some careful analysis is required. Even with the lock this is a\nsignificant win on its own.\n\nOn a smaller two socket machine this gains another ~1.03x, on a larger\nmachine the effect is roughly double (earlier patch version tested\nthough). If we were able to safely avoid the lock there'd be another\nsignificant gain on top of that.\n\nAuthor: Andres Freund <andres@anarazel.de>\nReviewed-By: Robert Haas <robertmhaas@gmail.com>\nReviewed-By: Thomas Munro <thomas.munro@gmail.com>\nReviewed-By: David Rowley <dgrowleyml@gmail.com>\nDiscussion: https://postgr.es/m/20200301083601.ews6hz5dduc3w2se@alap3.anarazel.de\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/623a9ba79bbdd11c5eccb30b8bd5c446130e521c\n\nModified Files\n--------------\nsrc/backend/replication/logical/snapbuild.c | 1 +\nsrc/backend/storage/ipc/procarray.c | 125 +++++++++++++++++++++++-----\nsrc/backend/utils/time/snapmgr.c | 4 +\nsrc/include/access/transam.h | 9 ++\nsrc/include/utils/snapshot.h | 7 ++\n5 files changed, 126 insertions(+), 20 deletions(-)",
"msg_date": "Tue, 18 Aug 2020 04:30:21 +0000",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pgsql: snapshot scalability: cache snapshots using a xact completion\n co"
},
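{
"msg_contents": "[Editorial note] The caching scheme described in the commit message can be sketched with a minimal Python model. The class and function names below are invented for illustration; in PostgreSQL the counter comparison happens under ProcArrayLock inside GetSnapshotData(), and this toy omits xmin/xmax bookkeeping:\n\n```python\nclass ProcArray:\n    \"\"\"Toy model of the shared state: xactCompletionCount is bumped\n    whenever a top-level transaction with an xid completes.\"\"\"\n    def __init__(self):\n        self.xact_completion_count = 0\n        self.in_progress_xids = set()\n\n    def start_xact(self, xid):\n        # Starting a xact does not bump the counter; in PostgreSQL a new\n        # xid is above any cached snapshot's xmax and invisible anyway.\n        self.in_progress_xids.add(xid)\n\n    def end_xact(self, xid):\n        self.in_progress_xids.discard(xid)\n        self.xact_completion_count += 1  # invalidates cached snapshots\n\ndef get_snapshot_data(procarray, cached):\n    \"\"\"Reuse the cached snapshot if nothing completed since it was\n    built; otherwise rebuild it from the (comparatively expensive)\n    scan of in-progress transactions.\"\"\"\n    if cached is not None and \\\n       cached[\"completion_count\"] == procarray.xact_completion_count:\n        return cached, True  # cache hit: no rebuild needed\n    snap = {\"xip\": frozenset(procarray.in_progress_xids),\n            \"completion_count\": procarray.xact_completion_count}\n    return snap, False\n```\n\nAs long as no transaction with an xid completes, repeated snapshot requests return the cached copy without rescanning shared state, which is where the scalability win comes from."
},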
{
"msg_contents": "On Tue, Aug 18, 2020 at 04:30:21AM +0000, Andres Freund wrote:\n> snapshot scalability: cache snapshots using a xact completion counter.\n> \n> Previous commits made it faster/more scalable to compute snapshots. But not\n> building a snapshot is still faster. Now that GetSnapshotData() does not\n> maintain RecentGlobal* anymore, that is actually not too hard:\n> \n> This commit introduces xactCompletionCount, which tracks the number of\n> top-level transactions with xids (i.e. which may have modified the database)\n> that completed in some form since the start of the server.\n> \n> We can avoid rebuilding the snapshot's contents whenever the current\n> xactCompletionCount is the same as it was when the snapshot was\n> originally built. Currently this check happens while holding\n> ProcArrayLock. While it's likely possible to perform the check without\n> acquiring ProcArrayLock, it seems better to do that separately /\n> later, some careful analysis is required. Even with the lock this is a\n> significant win on its own.\n> \n> On a smaller two socket machine this gains another ~1.03x, on a larger\n> machine the effect is roughly double (earlier patch version tested\n> though). If we were able to safely avoid the lock there'd be another\n> significant gain on top of that.\n\nspurfowl and more animals are telling us that this commit has broken\n2PC:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=spurfowl&dt=2020-08-18%2004%3A31%3A11\n--\nMichael",
"msg_date": "Tue, 18 Aug 2020 13:52:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: snapshot scalability: cache snapshots using a xact\n completion co"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> snapshot scalability: cache snapshots using a xact completion counter.\n\nbuildfarm doesn't like this a bit ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Aug 2020 00:55:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: snapshot scalability: cache snapshots using a xact\n completion co"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-18 00:55:22 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > snapshot scalability: cache snapshots using a xact completion counter.\n> \n> buildfarm doesn't like this a bit ...\n\nYea, looking already. Unless that turns out to be incredibly bad luck\nand only the first three animals failed (there's a few passes after), or\nunless I find the issue in the next 30min or so, I'll revert.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Aug 2020 22:02:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: snapshot scalability: cache snapshots using a xact\n completion co"
},
{
"msg_contents": "On 2020-08-18 13:52:46 +0900, Michael Paquier wrote:\n> On Tue, Aug 18, 2020 at 04:30:21AM +0000, Andres Freund wrote:\n> spurfowl and more animals are telling us that this commit has broken\n> 2PC:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=spurfowl&dt=2020-08-18%2004%3A31%3A11\n\nIt looks like it's a bit more subtle than outright breaking 2PC. We're\nnow at 3 out of 18 BF members having failed. I locally ran also quite a\nfew loops of the normal regression tests without finding an issue.\n\nI'd written to Tom that I was planning to revert unless the number of\nfailures were lower than initially indicated. But that actually seems to\nhave come to pass (the failures are quicker to report because they don't\nrun the subsequent tests, of course). I'd like to let the failures\naccumulate a bit longer, say until tomorrow Midday if I haven't figured\nit out by then. With the hope of finding some detail to help pinpoint\nthe issue.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Aug 2020 22:18:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: snapshot scalability: cache snapshots using a xact\n completion co"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'd written to Tom that I was planning to revert unless the number of\n> failures were lower than initially indicated. But that actually seems to\n> have come to pass (the failures are quicker to report because they don't\n> run the subsequent tests, of course). I'd like to let the failures\n> accumulate a bit longer, say until tomorrow Midday if I haven't figured\n> it out by then. With the hope of finding some detail to help pinpoint\n> the issue.\n\nThere's certainly no obvious pattern here, so I agree with waiting for\nmore data.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Aug 2020 01:21:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: snapshot scalability: cache snapshots using a xact\n completion co"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-18 01:21:17 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I'd written to Tom that I was planning to revert unless the number of\n> > failures were lower than initially indicated. But that actually seems to\n> > have come to pass (the failures are quicker to report because they don't\n> > run the subsequent tests, of course). I'd like to let the failures\n> > accumulate a bit longer, say until tomorrow Midday if I haven't figured\n> > it out by then. With the hope of finding some detail to help pinpoint\n> > the issue.\n> \n> There's certainly no obvious pattern here, so I agree with waiting for\n> more data.\n\nFWIW, I think I have found the bug, but I'm still working to reproduce\nthe issue reliably enough that I can verify that the fix actually works.\n\nThe issue is basically that 2PC PREPARE is weird, WRT procarray. The\nlast snapshot built with GetSnapshotData() before the PREPARE doesn't\ninclude its own transaction in ->xip[], as normal. PrepareTransaction()\nremoves the \"normal\" entry with ProcArrayClearTransaction(), which so\nfar doesn't increase the xact completion count. Because the xact\ncompletion count is not increased, snapshots can be reused as long as\nthey're taken before the 2PC transaction is finished. That's fine for\nother backends, but for the backend doing the PrepareTransaction() it's\nnot, because there ->xip doesn't include the own backend.\n\nIt's a bit tricky to reproduce exactly the issue the BF is occasionally\nhitting, because the way ->xmax is computed *limits* the\ndamage. Combined with the use of SERIALIZABLE (preventing recomputation\nof the data snapshot) that makes it somewhat hard to hit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Aug 2020 13:28:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: snapshot scalability: cache snapshots using a xact\n completion co"
},
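{
"msg_contents": "[Editorial note] The failure mode described above can be demonstrated with a self-contained toy model. Everything here is invented for illustration (the real mechanics involve GetSnapshotData(), ProcArrayClearTransaction(), and xactCompletionCount): the backend running PREPARE builds a snapshot that, by convention, omits its own xid from xip; it then clears its procarray entry without bumping the completion counter, so the cached snapshot still \"matches\" and may be reused, under which the still-only-prepared xid looks completed:\n\n```python\ndef xid_visible(snapshot, xid, my_xid):\n    \"\"\"Grossly simplified visibility: an xid counts as completed unless\n    it is in the snapshot's xip set or is the backend's own xid.\"\"\"\n    return xid not in snapshot[\"xip\"] and xid != my_xid\n\n# Backend with xid 42 takes a snapshot; its own xid is not in xip.\ncompletion_count = 7\nsnap = {\"xip\": frozenset(), \"completion_count\": completion_count}\nmy_xid = 42\nassert not xid_visible(snap, 42, my_xid)  # own xact: handled specially\n\n# PREPARE TRANSACTION: the backend dissociates from the xid. Without\n# bumping the counter, the cached snapshot still matches and is reused,\n# but now 42 is neither in xip nor the backend's own xid:\nmy_xid = None\nassert snap[\"completion_count\"] == completion_count  # cache would hit\nassert xid_visible(snap, 42, my_xid)  # wrong! 42 is merely prepared\n\n# Bumping the counter when clearing the entry forces a rebuild instead:\ncompletion_count += 1\nassert snap[\"completion_count\"] != completion_count  # cache miss\n```\n\nThis also illustrates why only the preparing backend itself is affected: other backends' snapshots taken before the PREPARE do contain 42 in their xip sets."
},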
{
"msg_contents": "On 2020-08-18 13:28:05 -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2020-08-18 01:21:17 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > I'd written to Tom that I was planning to revert unless the number of\n> > > failures were lower than initially indicated. But that actually seems to\n> > > have come to pass (the failures are quicker to report because they don't\n> > > run the subsequent tests, of course). I'd like to let the failures\n> > > accumulate a bit longer, say until tomorrow Midday if I haven't figured\n> > > it out by then. With the hope of finding some detail to help pinpoint\n> > > the issue.\n> > \n> > There's certainly no obvious pattern here, so I agree with waiting for\n> > more data.\n> \n> FWIW, I think I have found the bug, but I'm still working to reproduce\n> the issue reliably enough that I can verify that the fix actually works.\n> \n> The issue is basically that 2PC PREPARE is weird, WRT procarray. The\n> last snapshot built with GetSnapshotData() before the PREPARE doesn't\n> include its own transaction in ->xip[], as normal. PrepareTransaction()\n> removes the \"normal\" entry with ProcArrayClearTransaction(), which so\n> far doesn't increase the xact completion count. Because the xact\n> completion count is not increased, snapshots can be reused as long as\n> they're taken before the 2PC transaction is finished. That's fine for\n> other backends, but for the backend doing the PrepareTransaction() it's\n> not, because there ->xip doesn't include the own backend.\n> \n> It's a bit tricky to reproduce exactly the issue the BF is occasionally\n> hitting, because the way ->xmax is computed *limits* the\n> damage. Combined with the use of SERIALIZABLE (preventing recomputation\n> of the data snapshot) that makes it somewhat hard to hit.\n\nI pushed a fix. After a while I figured out that it's not actually that\nhard to test reliably. 
But it does require multiple sessions\ninteracting, particularly another session needs to acquire and commit a\ntransaction id that's later than the prepared transaction's.\n\nI think it's worth adding an isolation test. But it doesn't seem like\nextending prepared-transactions.spec makes too much sense, it doesn't\nfit in well. It's a lot easier to reproduce the issue without\nSERIALIZABLE, for example. Generally the file seems more about\nserializable than 2PC...\n\nSo unless somebody disagrees I'm gonna add a new\nprepared-transactions-2.spec.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Aug 2020 16:45:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: snapshot scalability: cache snapshots using a xact\n completion co"
},
{
"msg_contents": "Hi,\n\nThis thread started on committers, at\nhttps://www.postgresql.org/message-id/20200818234532.uiafo5br5lo6zhya%40alap3.anarazel.de\n\nIn it I wanted to add a isolation test around prepared transactions:\n\nOn 2020-08-18 16:45:32 -0700, Andres Freund wrote:\n> I think it's worth adding an isolation test. But it doesn't seem like\n> extending prepared-transactions.spec makes too much sense, it doesn't\n> fit in well. It's a lot easier to reproduce the issue without\n> SERIALIZABLE, for example. Generally the file seems more about\n> serializable than 2PC...\n>\n> So unless somebody disagrees I'm gonna add a new\n> prepared-transactions-2.spec.\n\n\nBut I noticed that the already existing prepared transactions test\nwasn't in the normal schedule, since:\n\ncommit ae55d9fbe3871a5e6309d9b91629f1b0ff2b8cba\nAuthor: Andrew Dunstan <andrew@dunslane.net>\nDate: 2012-07-20 15:51:40 -0400\n\n Remove prepared transactions from main isolation test schedule.\n\n There is no point in running this test when prepared transactions are disabled,\n which is the default. New make targets that include the test are provided. This\n will save some useless waste of cycles on buildfarm machines.\n\n Backpatch to 9.1 where these tests were introduced.\n\ndiff --git a/src/test/isolation/isolation_schedule b/src/test/isolation/isolation_schedule\nindex 669c0f220c4..2184975dcb1 100644\n--- a/src/test/isolation/isolation_schedule\n+++ b/src/test/isolation/isolation_schedule\n@@ -9,7 +9,6 @@ test: ri-trigger\n test: partial-index\n test: two-ids\n test: multiple-row-versions\n-test: prepared-transactions\n test: fk-contention\n test: fk-deadlock\n test: fk-deadlock2\n\n\nThe commit confuses me, cause I thought we explicitly enabled prepared\ntransactions during tests well before that? 
See\n\ncommit 8d4f2ecd41312e57422901952cbad234d293060b\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2009-04-23 00:23:46 +0000\n\n Change the default value of max_prepared_transactions to zero, and add\n documentation warnings against setting it nonzero unless active use of\n prepared transactions is intended and a suitable transaction manager has been\n installed. This should help to prevent the type of scenario we've seen\n several times now where a prepared transaction is forgotten and eventually\n causes severe maintenance problems (or even anti-wraparound shutdown).\n \n The only real reason we had the default be nonzero in the first place was to\n support regression testing of the feature. To still be able to do that,\n tweak pg_regress to force a nonzero value during \"make check\". Since we\n cannot force a nonzero value in \"make installcheck\", add a variant regression\n test \"expected\" file that shows the results that will be obtained when\n max_prepared_transactions is zero.\n \n Also, extend the HINT messages for transaction wraparound warnings to mention\n the possibility that old prepared transactions are causing the problem.\n \n All per today's discussion.\n\n\nAnd indeed, including the test in the schedule works for make check, not\njust an installcheck with explicitly enabled prepared xacts.\n\n\nISTM we should just add an alternative output for disabled prepared\nxacts, and re-add the test?\n\n\nMy new test, without the alternative output for now, is attached.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 18 Aug 2020 18:22:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "prepared transaction isolation tests"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> ISTM we should just add an alternative output for disabled prepared\n> xacts, and re-add the test?\n\nI believe the buildfarm runs the isolation step with \"make installcheck\",\nso if you're hoping to get buildfarm coverage that way, you're mistaken.\n\nHaving said that, it'd probably be good if \"make check\" did run this test.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Aug 2020 22:24:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: prepared transaction isolation tests"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-18 22:24:20 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > ISTM we should just add an alternative output for disabled prepared\n> > xacts, and re-add the test?\n> \n> I believe the buildfarm runs the isolation step with \"make installcheck\",\n> so if you're hoping to get buildfarm coverage that way, you're mistaken.\n\nIt seems like the buildfarm ought to configure the started server with a\nbunch of prepared transactions, in that case? At least going forward?\n\n\n> Having said that, it'd probably be good if \"make check\" did run this test.\n\nYea. It'd at least be run when we do check-world - which at least I do\nbefore nearly every commit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Aug 2020 19:34:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: prepared transaction isolation tests"
},
{
"msg_contents": "On Tue, Aug 18, 2020 at 07:34:00PM -0700, Andres Freund wrote:\n> It seems like the buildfarm ought to configure the started server with a\n> bunch of prepared transactions, in that case? At least going forward?\n\nAgreed. Testing with max_prepared_transactions > 0 has much more\nvalue than not, for sure. So I think that it could be a good thing,\nparticularly if we begin to add more isolation tests.\n--\nMichael",
"msg_date": "Wed, 19 Aug 2020 22:38:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: prepared transaction isolation tests"
},
{
"msg_contents": "\nOn 8/18/20 9:22 PM, Andres Freund wrote:\n> Hi,\n>\n> This thread started on committers, at\n> https://www.postgresql.org/message-id/20200818234532.uiafo5br5lo6zhya%40alap3.anarazel.de\n>\n> In it I wanted to add a isolation test around prepared transactions:\n>\n> On 2020-08-18 16:45:32 -0700, Andres Freund wrote:\n>> I think it's worth adding an isolation test. But it doesn't seem like\n>> extending prepared-transactions.spec makes too much sense, it doesn't\n>> fit in well. It's a lot easier to reproduce the issue without\n>> SERIALIZABLE, for example. Generally the file seems more about\n>> serializable than 2PC...\n>>\n>> So unless somebody disagrees I'm gonna add a new\n>> prepared-transactions-2.spec.\n>\n> But I noticed that the already existing prepared transactions test\n> wasn't in the normal schedule, since:\n>\n> commit ae55d9fbe3871a5e6309d9b91629f1b0ff2b8cba\n> Author: Andrew Dunstan <andrew@dunslane.net>\n> Date: 2012-07-20 15:51:40 -0400\n>\n> Remove prepared transactions from main isolation test schedule.\n>\n> There is no point in running this test when prepared transactions are disabled,\n> which is the default. New make targets that include the test are provided. This\n> will save some useless waste of cycles on buildfarm machines.\n>\n> Backpatch to 9.1 where these tests were introduced.\n>\n> diff --git a/src/test/isolation/isolation_schedule b/src/test/isolation/isolation_schedule\n> index 669c0f220c4..2184975dcb1 100644\n> --- a/src/test/isolation/isolation_schedule\n> +++ b/src/test/isolation/isolation_schedule\n> @@ -9,7 +9,6 @@ test: ri-trigger\n> test: partial-index\n> test: two-ids\n> test: multiple-row-versions\n> -test: prepared-transactions\n> test: fk-contention\n> test: fk-deadlock\n> test: fk-deadlock2\n>\n>\n> The commit confuses me, cause I thought we explicitly enabled prepared\n> transactions during tests well before that? 
See\n>\n> commit 8d4f2ecd41312e57422901952cbad234d293060b\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: 2009-04-23 00:23:46 +0000\n>\n> Change the default value of max_prepared_transactions to zero, and add\n> documentation warnings against setting it nonzero unless active use of\n> prepared transactions is intended and a suitable transaction manager has been\n> installed. This should help to prevent the type of scenario we've seen\n> several times now where a prepared transaction is forgotten and eventually\n> causes severe maintenance problems (or even anti-wraparound shutdown).\n> \n> The only real reason we had the default be nonzero in the first place was to\n> support regression testing of the feature. To still be able to do that,\n> tweak pg_regress to force a nonzero value during \"make check\". Since we\n> cannot force a nonzero value in \"make installcheck\", add a variant regression\n> test \"expected\" file that shows the results that will be obtained when\n> max_prepared_transactions is zero.\n> \n> Also, extend the HINT messages for transaction wraparound warnings to mention\n> the possibility that old prepared transactions are causing the problem.\n> \n> All per today's discussion.\n>\n>\n> And indeed, including the test in the schedule works for make check, not\n> just an installcheck with explicitly enabled prepared xacts.\n>\n>\n> ISTM we should just add an alternative output for disabled prepared\n> xacts, and re-add the test?\n\n\n\nhere's the context for the 2012 commit.\n\n\nhttps://www.postgresql.org/message-id/flat/50099220.2060005%40dunslane.net#8b189efc4920e1996ffa4d6a0ef81b9c\n\n\nSo I hope any changes that are made will not result in a major slowdown\nof buildfarm animals.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 19 Aug 2020 09:38:55 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: prepared transaction isolation tests"
},
{
    "msg_contents": "On 2020-Aug-18, Andres Freund wrote:\n\n> So unless somebody disagrees I'm gonna add a new\n> prepared-transactions-2.spec.\n\nI think keeping things separate if they're not really related is\nsensible.\n\nI think it might be a good idea to add that test to older branches too,\neven if it's just 13 -- at least temporarily.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 19 Aug 2020 18:37:04 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: snapshot scalability: cache snapshots using a xact\n completion co"
}
] |
[
{
"msg_contents": "Hi,\nRight now pg_waldump just prints whether the message is transactional\nor not and its size. That doesn't help much to understand the message\nitself. If it prints the contents of a logical WAL message, it helps\ndebugging logical replication related problems. Prefix is a\nnull-terminated ASCII string, so no problem printing that. Even the\ncontents can be printed as a series of hex bytes. Here's a patch to do\nthat.\n\nI tested this manually as below\n\npostgres=# select pg_logical_emit_message(false, 'some_prefix', 'some\nmessage'::text);\n pg_logical_emit_message\n-------------------------\n 0/1570658\n(1 row)\n\n$> pg_waldump --start 0/1570600 -p data/\nfirst record is after 0/1570600, at 0/1570608, skipping over 8 bytes\nrmgr: LogicalMessage len (rec/tot): 74/ 74, tx: 0,\nlsn: 0/01570608, prev 0/015705D0, desc: MESSAGE nontransactional\nmessage size 12 bytes, prefix some_prefix; mesage: 73 6F 6D 65 20 6D\n65 73 73 61 67 65\nrmgr: Standby len (rec/tot): 50/ 50, tx: 0, lsn:\n0/01570658, prev 0/01570608, desc: RUNNING_XACTS nextXid 504\nlatestCompletedXid 503 oldestRunningXid 504\npg_waldump: fatal: error in WAL record at 0/1570658: invalid record\nlength at 0/1570690: wanted 24, got 0\n\nI didn't find any tests for pg_waldump to test its output, so haven't\nadded one in the patch.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Tue, 18 Aug 2020 11:15:51 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": true,
"msg_subject": "Print logical WAL message content"
},
{
"msg_contents": "On 2020-Aug-18, Ashutosh Bapat wrote:\n\n> Right now pg_waldump just prints whether the message is transactional\n> or not and its size. That doesn't help much to understand the message\n> itself. If it prints the contents of a logical WAL message, it helps\n> debugging logical replication related problems. Prefix is a\n> null-terminated ASCII string, so no problem printing that. Even the\n> contents can be printed as a series of hex bytes. Here's a patch to do\n> that.\n\nLooks like a good idea.\n\nI didn't like that you're documenting the message format in the new\nfunction:\n\n> \t\txl_logical_message *xlrec = (xl_logical_message *) rec;\n> +\t\t/*\n> +\t\t * Per LogLogicalMessage() actual logical message follows a null-terminated prefix of length\n> +\t\t * prefix_size.\n\nI would prefer to remove this comment, and instead add a comment atop\nxl_logical_message's struct definition in message.h to say that the\nmessage has a valid C-string as prefix, whose length is prefix_size, and\nplease see logicalmesg_desc() if you change this.\nThis way, you don't need to blame LogLogicalMessage for this\nrestriction, but it's actually part of the definition of the WAL\nmessage.\n\n> +\t\t/*\n> +\t\t * Per LogLogicalMessage() actual logical message follows a null-terminated prefix of length\n> +\t\t * prefix_size.\n> +\t\t */\n> +\t\tchar *prefix = xlrec->message;\n> +\t\tchar *message = xlrec->message + xlrec->prefix_size;\n> +\t\tint\t\tcnt;\n> +\t\tchar *sep = \"\";\n\nThis would cause a crash if the message actually fails to follow the\nrule. Let's test that prefix[xlrec->prefix_size] is a trailing zero,\nand if not, avoid printing it. Although, just Assert()'ing that it's a\ntrailing zero would seem to suffice.\n\n> +\t\tappendStringInfo(buf, \"%s message size %zu bytes, prefix %s; mesage: \",\n> \t\t\t\t\t\t xlrec->transactional ? 
\"transactional\" : \"nontransactional\",\n> -\t\t\t\t\t\t xlrec->message_size);\n> +\t\t\t\t\t\t xlrec->message_size, prefix);\n\nMisspelled \"message\", but also the line looks a bit repetitive -- the\nword \"message\" would appear three times:\n\n> lsn: 0/01570608, prev 0/015705D0, desc: MESSAGE nontransactional message size 12 bytes, prefix some_prefix; mesage: 73 6F 6D 65 20 6D 65 73 73 61 67 65\n\nI would reduce it to\n\n> lsn: 0/01570608, prev 0/015705D0, desc: MESSAGE nontransactional, prefix \"some_prefix\"; payload (12 bytes): 73 6F 6D 65 20 6D 65 73 73 61 67 65\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 18 Aug 2020 17:51:19 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Print logical WAL message content"
},
{
"msg_contents": "Thanks Alvaro for review.\n\nOn Wed, Aug 19, 2020 at 3:21 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> I didn't like that you're documenting the message format in the new\n> function:\n>\n> > xl_logical_message *xlrec = (xl_logical_message *) rec;\n> > + /*\n> > + * Per LogLogicalMessage() actual logical message follows a null-terminated prefix of length\n> > + * prefix_size.\n>\n> I would prefer to remove this comment, and instead add a comment atop\n> xl_logical_message's struct definition in message.h to say that the\n> message has a valid C-string as prefix, whose length is prefix_size, and\n> please see logicalmesg_desc() if you change this.\n\nIt's documented in the struct definition. Added a note about logicalmesg_desc().\n\n> This way, you don't need to blame LogLogicalMessage for this\n> restriction, but it's actually part of the definition of the WAL\n> message.\n>\n> > + /*\n> > + * Per LogLogicalMessage() actual logical message follows a null-terminated prefix of length\n> > + * prefix_size.\n> > + */\n> > + char *prefix = xlrec->message;\n> > + char *message = xlrec->message + xlrec->prefix_size;\n> > + int cnt;\n> > + char *sep = \"\";\n>\n> This would cause a crash if the message actually fails to follow the\n> rule. Let's test that prefix[xlrec->prefix_size] is a trailing zero,\n> and if not, avoid printing it. Although, just Assert()'ing that it's a\n> trailing zero would seem to suffice.\n\nAdded an Assert.\n\n>\n> > + appendStringInfo(buf, \"%s message size %zu bytes, prefix %s; mesage: \",\n> > xlrec->transactional ? 
\"transactional\" : \"nontransactional\",\n> > - xlrec->message_size);\n> > + xlrec->message_size, prefix);\n>\n> Misspelled \"message\", but also the line looks a bit repetitive -- the\n> word \"message\" would appear three times:\n>\n> > lsn: 0/01570608, prev 0/015705D0, desc: MESSAGE nontransactional message size 12 bytes, prefix some_prefix; mesage: 73 6F 6D 65 20 6D 65 73 73 61 67 65\n>\n> I would reduce it to\n>\n> > lsn: 0/01570608, prev 0/015705D0, desc: MESSAGE nontransactional, prefix \"some_prefix\"; payload (12 bytes): 73 6F 6D 65 20 6D 65 73 73 61 67 65\n\nI like this format as well. Done.\n\nPFA the patch attached with your comments addressed.\n\nThanks for your review.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Wed, 19 Aug 2020 18:59:20 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Print logical WAL message content"
},
{
    "msg_contents": "On 2020-Aug-19, Ashutosh Bapat wrote:\n\n> I like this format as well. Done.\n> \n> PFA the patch attached with your comments addressed.\n\nPushed, thanks!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 10 Sep 2020 19:38:38 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Print logical WAL message content"
},
{
"msg_contents": "On Fri, 11 Sep 2020 at 04:08, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n>\n> Pushed, thanks!\n>\n\nThanks Alvaro.\n\n-- \nBest Wishes,\nAshutosh\n\nOn Fri, 11 Sep 2020 at 04:08, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\nPushed, thanks!Thanks Alvaro. -- Best Wishes,Ashutosh",
"msg_date": "Mon, 14 Sep 2020 10:18:55 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Print logical WAL message content"
}
] |
[
{
"msg_contents": "Hi,\n\nIt's important to provide the metrics for tuning the size of WAL \nbuffers.\nFor now, it's lack of the statistics how often processes wait to write \nWAL because WAL buffer is full.\n\nIf those situation are often occurred, WAL buffer is too small for the \nworkload.\nDBAs must to tune the WAL buffer size for performance improvement.\n\nThere are related threads, but those are not merged.\nhttps://www.postgresql.org/message-id/4FF824F3.5090407@uptime.jp\nhttps://www.postgresql.org/message-id/flat/CAJrrPGc6APFUGYNcPe4qcNxpL8gXKYv1KST%2BvwJcFtCSCEySnA%40mail.gmail.com\n\nWhat do you think?\nIf we can have a consensus, I will make a PoC patch.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 18 Aug 2020 16:21:26 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "New statistics for tuning WAL buffer size"
},
{
"msg_contents": "From: Masahiro Ikeda <ikedamsh@oss.nttdata.com>\n> It's important to provide the metrics for tuning the size of WAL buffers.\n> For now, it's lack of the statistics how often processes wait to write WAL\n> because WAL buffer is full.\n> \n> If those situation are often occurred, WAL buffer is too small for the workload.\n> DBAs must to tune the WAL buffer size for performance improvement.\n\nYes, it's helpful to know if we need to enlarge the WAL buffer. That's why our colleague HariBabu proposed the patch. We'd be happy if it could be committed in some form.\n\n\n> There are related threads, but those are not merged.\n> https://www.postgresql.org/message-id/4FF824F3.5090407@uptime.jp\n> https://www.postgresql.org/message-id/flat/CAJrrPGc6APFUGYNcPe4qcNx\n> pL8gXKYv1KST%2BvwJcFtCSCEySnA%40mail.gmail.com\n\nWhat's the difference between those patches? What blocked them from being committed?\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Tue, 18 Aug 2020 07:35:50 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On 2020-08-18 16:35, tsunakawa.takay@fujitsu.com wrote:\n> From: Masahiro Ikeda <ikedamsh@oss.nttdata.com>\n>> It's important to provide the metrics for tuning the size of WAL \n>> buffers.\n>> For now, it's lack of the statistics how often processes wait to write \n>> WAL\n>> because WAL buffer is full.\n>> \n>> If those situation are often occurred, WAL buffer is too small for the \n>> workload.\n>> DBAs must to tune the WAL buffer size for performance improvement.\n> \n> Yes, it's helpful to know if we need to enlarge the WAL buffer.\n> That's why our colleague HariBabu proposed the patch. We'd be happy\n> if it could be committed in some form.\n> \n>> There are related threads, but those are not merged.\n>> https://www.postgresql.org/message-id/4FF824F3.5090407@uptime.jp\n>> https://www.postgresql.org/message-id/flat/CAJrrPGc6APFUGYNcPe4qcNx\n>> pL8gXKYv1KST%2BvwJcFtCSCEySnA%40mail.gmail.com\n> \n> What's the difference between those patches? What blocked them from\n> being committed?\n\nThanks for replying.\n\nSince the above threads are not active now and those patches can't be \napplied HEAD,\nI made this thread. If it is better to reply the above thread, I will do \nso.\n\nIf my understanding is correct, we have to measure the performance \nimpact first.\nDo you know HariBabu is now trying to solve it? If not, I will try to \nmodify patches to apply HEAD.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 19 Aug 2020 13:41:29 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "RE: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "From: Masahiro Ikeda <ikedamsh@oss.nttdata.com>\n> If my understanding is correct, we have to measure the performance\n> impact first.\n> Do you know HariBabu is now trying to solve it? If not, I will try to\n> modify patches to apply HEAD.\n\nNo, he's not doing it anymore. It'd be great if you could resume it. However, I recommend sharing your understanding about what were the issues with those two threads and how you're trying to solve them. Was the performance overhead the blocker in both of the threads?\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n\n",
"msg_date": "Wed, 19 Aug 2020 04:49:02 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On 2020-08-19 13:49, tsunakawa.takay@fujitsu.com wrote:\n> From: Masahiro Ikeda <ikedamsh@oss.nttdata.com>\n>> If my understanding is correct, we have to measure the performance\n>> impact first.\n>> Do you know HariBabu is now trying to solve it? If not, I will try to\n>> modify patches to apply HEAD.\n> \n> No, he's not doing it anymore. It'd be great if you could resume it.\n\nOK, thanks.\n\n> However, I recommend sharing your understanding about what were the\n> issues with those two threads and how you're trying to solve them.\n> Was the performance overhead the blocker in both of the threads?\n\nIn my understanding, some comments are not solved in both of the \nthreads.\nI think the following works are remained.\n\n1) Modify patches to apply HEAD\n2) Get consensus what metrics we collect and how to use them for tuning.\n3) Measure performance impact and if it leads poor performance, we solve \nit.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 19 Aug 2020 14:10:08 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "RE: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/08/19 14:10, Masahiro Ikeda wrote:\n> On 2020-08-19 13:49, tsunakawa.takay@fujitsu.com wrote:\n>> From: Masahiro Ikeda <ikedamsh@oss.nttdata.com>\n>>> If my understanding is correct, we have to measure the performance\n>>> impact first.\n>>> Do you know HariBabu is now trying to solve it? If not, I will try to\n>>> modify patches to apply HEAD.\n>>\n>> No, he's not doing it anymore. It'd be great if you could resume it.\n> \n> OK, thanks.\n> \n>> However, I recommend sharing your understanding about what were the\n>> issues with those two threads and how you're trying to solve them.\n>> Was the performance overhead the blocker in both of the threads?\n> \n> In my understanding, some comments are not solved in both of the threads.\n> I think the following works are remained.\n> \n> 1) Modify patches to apply HEAD\n> 2) Get consensus what metrics we collect and how to use them for tuning.\n\nI agree to expose the number of WAL write caused by full of WAL buffers.\nIt's helpful when tuning wal_buffers size. Haribabu separated that number\ninto two fields in his patch; one is the number of WAL write by backend,\nand another is by background processes and workers. But I'm not sure\nhow useful such separation is. I'm ok with just one field for that number.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 20 Aug 2020 20:01:29 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/08/20 20:01, Fujii Masao wrote:\n> \n> \n> On 2020/08/19 14:10, Masahiro Ikeda wrote:\n>> On 2020-08-19 13:49, tsunakawa.takay@fujitsu.com wrote:\n>>> From: Masahiro Ikeda <ikedamsh@oss.nttdata.com>\n>>>> If my understanding is correct, we have to measure the performance\n>>>> impact first.\n>>>> Do you know HariBabu is now trying to solve it? If not, I will try to\n>>>> modify patches to apply HEAD.\n>>>\n>>> No, he's not doing it anymore. It'd be great if you could resume it.\n>>\n>> OK, thanks.\n>>\n>>> However, I recommend sharing your understanding about what were the\n>>> issues with those two threads and how you're trying to solve them.\n>>> Was the performance overhead the blocker in both of the threads?\n>>\n>> In my understanding, some comments are not solved in both of the threads.\n>> I think the following works are remained.\n>>\n>> 1) Modify patches to apply HEAD\n>> 2) Get consensus what metrics we collect and how to use them for tuning.\n> \n> I agree to expose the number of WAL write caused by full of WAL buffers.\n> It's helpful when tuning wal_buffers size. Haribabu separated that number\n> into two fields in his patch; one is the number of WAL write by backend,\n> and another is by background processes and workers. But I'm not sure\n> how useful such separation is. I'm ok with just one field for that number.\n\nJust idea; it may be worth exposing the number of when new WAL file is\ncreated and zero-filled. This initialization may have impact on\nthe performance of write-heavy workload generating lots of WAL. If this\nnumber is reported high, to reduce the number of this initialization,\nwe can tune WAL-related parameters so that more \"recycled\" WAL files\ncan be hold.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 20 Aug 2020 20:19:29 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "From: Fujii Masao <masao.fujii@oss.nttdata.com>\r\n> I agree to expose the number of WAL write caused by full of WAL buffers.\r\n> It's helpful when tuning wal_buffers size. Haribabu separated that number\r\n> into two fields in his patch; one is the number of WAL write by backend,\r\n> and another is by background processes and workers. But I'm not sure\r\n> how useful such separation is. I'm ok with just one field for that number.\r\n\r\nI agree with you. I don't think we need to separate the numbers for foreground processes and background ones. WAL buffer is a single resource. So \"Writes due to full WAL buffer are happening. We may be able to boost performance by increasing wal_buffers\" would be enough.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Fri, 21 Aug 2020 02:53:42 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "From: Fujii Masao <masao.fujii@oss.nttdata.com>\r\n> Just idea; it may be worth exposing the number of when new WAL file is\r\n> created and zero-filled. This initialization may have impact on\r\n> the performance of write-heavy workload generating lots of WAL. If this\r\n> number is reported high, to reduce the number of this initialization,\r\n> we can tune WAL-related parameters so that more \"recycled\" WAL files\r\n> can be hold.\r\n\r\nSounds good. Actually, I want to know how much those zeroing affected the transaction response times, but it may be the target of the wait event statistics that Imai-san is addressing.\r\n\r\n(I wonder how the fallocate() patch went that tries to minimize the zeroing time.)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Fri, 21 Aug 2020 03:08:44 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/08/21 12:08, tsunakawa.takay@fujitsu.com wrote:\n> From: Fujii Masao <masao.fujii@oss.nttdata.com>\n>> Just idea; it may be worth exposing the number of when new WAL file is\n>> created and zero-filled. This initialization may have impact on\n>> the performance of write-heavy workload generating lots of WAL. If this\n>> number is reported high, to reduce the number of this initialization,\n>> we can tune WAL-related parameters so that more \"recycled\" WAL files\n>> can be hold.\n> \n> Sounds good. Actually, I want to know how much those zeroing affected the transaction response times, but it may be the target of the wait event statistics that Imai-san is addressing.\n\nMaybe, so I'm ok if the first pg_stat_walwriter patch doesn't expose\nthis number. We can extend it to include that later after we confirm\nthat number is really useful.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 21 Aug 2020 12:22:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "Hi, thanks for useful comments.\n\n>> I agree to expose the number of WAL write caused by full of WAL \n>> buffers.\n>> It's helpful when tuning wal_buffers size. Haribabu separated that \n>> number\n>> into two fields in his patch; one is the number of WAL write by \n>> backend,\n>> and another is by background processes and workers. But I'm not sure\n>> how useful such separation is. I'm ok with just one field for that \n>> number.\n> I agree with you. I don't think we need to separate the numbers for \n> foreground processes and background ones. WAL buffer is a single \n> resource. So \"Writes due to full WAL buffer are happening. We may be \n> able to boost performance by increasing wal_buffers\" would be enough.\n\nI made a patch to expose the number of WAL write caused by full of WAL \nbuffers.\nI'm going to submit this patch to commitfests.\n\nAs Fujii-san and Tsunakawa-san said, it expose the total number\nsince I agreed that we don't need to separate the numbers for\nforeground processes and background ones.\n\nBy the way, do we need to add another metrics related to WAL?\nFor example, is the total number of WAL writes to the buffers useful to \ncalculate the dirty WAL write ratio?\n\nIs it enough as a first step?\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Mon, 24 Aug 2020 20:45:36 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On 2020-08-24 20:45, Masahiro Ikeda wrote:\n> Hi, thanks for useful comments.\n> \n>>> I agree to expose the number of WAL write caused by full of WAL \n>>> buffers.\n>>> It's helpful when tuning wal_buffers size. Haribabu separated that \n>>> number\n>>> into two fields in his patch; one is the number of WAL write by \n>>> backend,\n>>> and another is by background processes and workers. But I'm not sure\n>>> how useful such separation is. I'm ok with just one field for that \n>>> number.\n>> I agree with you. I don't think we need to separate the numbers for \n>> foreground processes and background ones. WAL buffer is a single \n>> resource. So \"Writes due to full WAL buffer are happening. We may be \n>> able to boost performance by increasing wal_buffers\" would be enough.\n> \n> I made a patch to expose the number of WAL write caused by full of WAL \n> buffers.\n> I'm going to submit this patch to commitfests.\n> \n> As Fujii-san and Tsunakawa-san said, it expose the total number\n> since I agreed that we don't need to separate the numbers for\n> foreground processes and background ones.\n> \n> By the way, do we need to add another metrics related to WAL?\n> For example, is the total number of WAL writes to the buffers useful\n> to calculate the dirty WAL write ratio?\n> \n> Is it enough as a first step?\n\nI forgot to rebase the current master.\nI've attached the rebased patch.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Mon, 24 Aug 2020 21:00:56 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/08/24 21:00, Masahiro Ikeda wrote:\n> On 2020-08-24 20:45, Masahiro Ikeda wrote:\n>> Hi, thanks for useful comments.\n>>\n>>>> I agree to expose the number of WAL write caused by full of WAL buffers.\n>>>> It's helpful when tuning wal_buffers size. Haribabu separated that number\n>>>> into two fields in his patch; one is the number of WAL write by backend,\n>>>> and another is by background processes and workers. But I'm not sure\n>>>> how useful such separation is. I'm ok with just one field for that number.\n>>> I agree with you.� I don't think we need to separate the numbers for foreground processes and background ones.� WAL buffer is a single resource.� So \"Writes due to full WAL buffer are happening.� We may be able to boost performance by increasing wal_buffers\" would be enough.\n>>\n>> I made a patch to expose the number of WAL write caused by full of WAL buffers.\n>> I'm going to submit this patch to commitfests.\n>>\n>> As Fujii-san and Tsunakawa-san said, it expose the total number\n>> since I agreed that we don't need to separate the numbers for\n>> foreground processes and background ones.\n>>\n>> By the way, do we need to add another metrics related to WAL?\n>> For example, is the total number of WAL writes to the buffers useful\n>> to calculate the dirty WAL write ratio?\n>>\n>> Is it enough as a first step?\n> \n> I forgot to rebase the current master.\n> I've attached the rebased patch.\n\nThanks for the patch!\n\n+/* ----------\n+ * Backend types\n+ * ----------\n\nYou seem to forget to add \"*/\" into the above comment.\nThis issue could cause the following compiler warning.\n\n../../src/include/pgstat.h:761:1: warning: '/*' within block comment [-Wcomment]\n\n\nThe contents of pg_stat_walwrites are reset when the server\nis restarted. Isn't this problematic? 
IMO since pg_stat_walwrites\nis a collected statistics view, basically its contents should be\nkept even in the case of server restart.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 1 Sep 2020 18:57:12 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "> +/* ----------\n> + * Backend types\n> + * ----------\n> \n> You seem to forget to add \"*/\" into the above comment.\n> This issue could cause the following compiler warning.\n> ../../src/include/pgstat.h:761:1: warning: '/*' within block comment \n> [-Wcomment]\n\nThanks for the comment. I fixed it.\n\n> The contents of pg_stat_walwrites are reset when the server\n> is restarted. Isn't this problematic? IMO since pg_stat_walwrites\n> is a collected statistics view, basically its contents should be\n> kept even in the case of server restart.\n\nI agree with your opinion.\nI modified the patch to use the statistics collector and persist the WAL \nstatistics.\n\n\nI changed the view name from pg_stat_walwrites to pg_stat_walwriter.\nI think it is better to match the naming scheme of other views like \npg_stat_bgwriter,\nwhich is for bgwriter statistics but also has statistics related to \nbackends.\n\nThe pg_stat_walwriter is not security restricted now, so ordinary users \ncan access it.\nIt has the same security level as pg_stat_archiver. If you have any \ncomments, please let me know.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Wed, 02 Sep 2020 18:56:17 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/09/02 18:56, Masahiro Ikeda wrote:\n>> +/* ----------\n>> + * Backend types\n>> + * ----------\n>>\n>> You seem to forget to add \"*/\" into the above comment.\n>> This issue could cause the following compiler warning.\n>> ../../src/include/pgstat.h:761:1: warning: '/*' within block comment [-Wcomment]\n> \n> Thanks for the comment. I fixed.\n\nThanks for the fix! But why are those comments necessary?\n\n\n> \n>> The contents of pg_stat_walwrites are reset when the server\n>> is restarted. Isn't this problematic? IMO since pg_stat_walwrites\n>> is a collected statistics view, basically its contents should be\n>> kept even in the case of server restart.\n> \n> I agree your opinion.\n> I modified to use the statistics collector and persist the wal statistics.\n> \n> \n> I changed the view name from pg_stat_walwrites to pg_stat_walwriter.\n> I think it is better to match naming scheme with other views like pg_stat_bgwriter,\n> which is for bgwriter statistics but it has the statistics related to backend.\n\nI prefer the view name pg_stat_walwriter for the consistency with\nother view names. But we also have pg_stat_wal_receiver. Which\nmakes me think that maybe pg_stat_wal_writer is better for\nthe consistency. Thought? IMO either of them works for me.\nI'd like to hear more opinons about this.\n\n\n> \n> The pg_stat_walwriter is not security restricted now, so ordinary users can access it.\n> I has the same security level as pg_stat_archiver.If you have any comments, please let me know.\n\n+ <structfield>dirty_writes</structfield> <type>bigint</type>\n\nI guess that the column name \"dirty_writes\" derived from\nthe DTrace probe name. Isn't this name confusing? 
We should\nrename it to \"wal_buffers_full\" or something?\n\n\n+/* ----------\n+ * PgStat_MsgWalWriter\t\t\tSent by the walwriter to update statistics.\n\nThis comment seems not accurate because backends also send it.\n\n+/*\n+ * WAL writes statistics counter is updated in XLogWrite function\n+ */\n+extern PgStat_MsgWalWriter WalWriterStats;\n\nThis comment seems not right because the counter is not updated in XLogWrite().\n\n+-- There will surely and maximum one record\n+select count(*) = 1 as ok from pg_stat_walwriter;\n\nWhat about changing this comment to \"There must be only one record\"?\n\n+\t\t\t\t\tWalWriterStats.m_xlog_dirty_writes++;\n \t\t\t\t\tLWLockRelease(WALWriteLock);\n\nSince WalWriterStats.m_xlog_dirty_writes doesn't need to be protected\nwith WALWriteLock, isn't it better to increment that after releasing the lock?\n\n+CREATE VIEW pg_stat_walwriter AS\n+ SELECT\n+ pg_stat_get_xlog_dirty_writes() AS dirty_writes,\n+ pg_stat_get_walwriter_stat_reset_time() AS stats_reset;\n+\n CREATE VIEW pg_stat_progress_vacuum AS\n\nIn system_views.sql, the definition of pg_stat_walwriter should be\nplaced just after that of pg_stat_bgwriter not pg_stat_progress_analyze.\n\n \t}\n-\n \t/*\n \t * We found an existing collector stats file. Read it and put all the\n\nYou seem to accidentally have removed the empty line here.\n\n-\t\t\t\t errhint(\"Target must be \\\"archiver\\\" or \\\"bgwriter\\\".\")));\n+\t\t\t\t errhint(\"Target must be \\\"archiver\\\" or \\\"bgwriter\\\" or \\\"walwriter\\\".\")));\n\nThere are two \"or\" in the message, but the former should be replaced with \",\"?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 3 Sep 2020 16:05:05 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "From: Fujii Masao <masao.fujii@oss.nttdata.com>\n> > I changed the view name from pg_stat_walwrites to pg_stat_walwriter.\n> > I think it is better to match naming scheme with other views like\n> pg_stat_bgwriter,\n> > which is for bgwriter statistics but it has the statistics related to backend.\n> \n> I prefer the view name pg_stat_walwriter for the consistency with\n> other view names. But we also have pg_stat_wal_receiver. Which\n> makes me think that maybe pg_stat_wal_writer is better for\n> the consistency. Thought? IMO either of them works for me.\n> I'd like to hear more opinons about this.\n\nI think pg_stat_bgwriter is now a misnomer, because it contains the backends' activity. Likewise, pg_stat_walwriter leads to misunderstanding because its information is not limited to WAL writer.\n\nHow about simply pg_stat_wal? In the future, we may want to include WAL reads in this view, e.g. reading undo logs in zheap.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n\n",
"msg_date": "Fri, 4 Sep 2020 02:50:10 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/09/04 11:50, tsunakawa.takay@fujitsu.com wrote:\n> From: Fujii Masao <masao.fujii@oss.nttdata.com>\n>>> I changed the view name from pg_stat_walwrites to pg_stat_walwriter.\n>>> I think it is better to match naming scheme with other views like\n>> pg_stat_bgwriter,\n>>> which is for bgwriter statistics but it has the statistics related to backend.\n>>\n>> I prefer the view name pg_stat_walwriter for the consistency with\n>> other view names. But we also have pg_stat_wal_receiver. Which\n>> makes me think that maybe pg_stat_wal_writer is better for\n>> the consistency. Thought? IMO either of them works for me.\n>> I'd like to hear more opinons about this.\n> \n> I think pg_stat_bgwriter is now a misnomer, because it contains the backends' activity. Likewise, pg_stat_walwriter leads to misunderstanding because its information is not limited to WAL writer.\n> \n> How about simply pg_stat_wal? In the future, we may want to include WAL reads in this view, e.g. reading undo logs in zheap.\n\nSounds reasonable.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 4 Sep 2020 12:42:31 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On Fri, Sep 4, 2020 at 5:42 AM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/09/04 11:50, tsunakawa.takay@fujitsu.com wrote:\n> > From: Fujii Masao <masao.fujii@oss.nttdata.com>\n> >>> I changed the view name from pg_stat_walwrites to pg_stat_walwriter.\n> >>> I think it is better to match naming scheme with other views like\n> >> pg_stat_bgwriter,\n> >>> which is for bgwriter statistics but it has the statistics related to\n> backend.\n> >>\n> >> I prefer the view name pg_stat_walwriter for the consistency with\n> >> other view names. But we also have pg_stat_wal_receiver. Which\n> >> makes me think that maybe pg_stat_wal_writer is better for\n> >> the consistency. Thought? IMO either of them works for me.\n> >> I'd like to hear more opinons about this.\n> >\n> > I think pg_stat_bgwriter is now a misnomer, because it contains the\n> backends' activity. Likewise, pg_stat_walwriter leads to misunderstanding\n> because its information is not limited to WAL writer.\n> >\n> > How about simply pg_stat_wal? In the future, we may want to include WAL\n> reads in this view, e.g. reading undo logs in zheap.\n>\n> Sounds reasonable.\n>\n\n+1.\n\npg_stat_bgwriter has had the \"wrong name\" for quite some time now -- it\nbecame even more apparent when the checkpointer was split out to it's own\nprocess, and that's not exactly a recent change. And it had allocs in it\nfrom day one...\n\nI think naming it for what the data in it is (\"wal\") rather than which\nprocess deals with it (\"walwriter\") is correct, unless the statistics can\nbe known to only *ever* affect one type of process. (And then different\nprocesses can affect different columns in the view). 
As a general rule --\nand that's from what I can tell exactly what's being proposed.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Sat, 5 Sep 2020 11:40:51 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "Thanks for the review and advice!\n\nOn 2020-09-03 16:05, Fujii Masao wrote:\n> On 2020/09/02 18:56, Masahiro Ikeda wrote:\n>>> +/* ----------\n>>> + * Backend types\n>>> + * ----------\n>>> \n>>> You seem to forget to add \"*/\" into the above comment.\n>>> This issue could cause the following compiler warning.\n>>> ../../src/include/pgstat.h:761:1: warning: '/*' within block comment \n>>> [-Wcomment]\n>> \n>> Thanks for the comment. I fixed.\n> \n> Thanks for the fix! But why are those comments necessary?\n\nSorry about that. This comment is not necessary.\nI removed it.\n\n>> The pg_stat_walwriter is not security restricted now, so ordinary \n>> users can access it.\n>> It has the same security level as pg_stat_archiver. If you have any \n>> comments, please let me know.\n> \n> + <structfield>dirty_writes</structfield> <type>bigint</type>\n> \n> I guess that the column name \"dirty_writes\" derived from\n> the DTrace probe name. Isn't this name confusing? We should\n> rename it to \"wal_buffers_full\" or something?\n\nI agree and rename it to \"wal_buffers_full\".\n\n> +/* ----------\n> + * PgStat_MsgWalWriter\t\t\tSent by the walwriter to update statistics.\n> \n> This comment seems not accurate because backends also send it.\n> \n> +/*\n> + * WAL writes statistics counter is updated in XLogWrite function\n> + */\n> +extern PgStat_MsgWalWriter WalWriterStats;\n> \n> This comment seems not right because the counter is not updated in \n> XLogWrite().\n\nRight. 
I fixed it to \"Sent by each backend and background workers to \nupdate WAL statistics.\"\nIn the future, other statistics will be included so I remove the \nfunction's name.\n\n\n> +-- There will surely and maximum one record\n> +select count(*) = 1 as ok from pg_stat_walwriter;\n> \n> What about changing this comment to \"There must be only one record\"?\n\nThanks, I fixed.\n\n> +\t\t\t\t\tWalWriterStats.m_xlog_dirty_writes++;\n> \t\t\t\t\tLWLockRelease(WALWriteLock);\n> \n> Since WalWriterStats.m_xlog_dirty_writes doesn't need to be protected\n> with WALWriteLock, isn't it better to increment that after releasing \n> the lock?\n\nThanks, I fixed.\n\n> +CREATE VIEW pg_stat_walwriter AS\n> + SELECT\n> + pg_stat_get_xlog_dirty_writes() AS dirty_writes,\n> + pg_stat_get_walwriter_stat_reset_time() AS stats_reset;\n> +\n> CREATE VIEW pg_stat_progress_vacuum AS\n> \n> In system_views.sql, the definition of pg_stat_walwriter should be\n> placed just after that of pg_stat_bgwriter not \n> pg_stat_progress_analyze.\n\nOK, I fixed it.\n\n> \t}\n> -\n> \t/*\n> \t * We found an existing collector stats file. Read it and put all the\n> \n> You seem to accidentally have removed the empty line here.\n\nSorry about that. 
I fixed it.\n\n> -\t\t\t\t errhint(\"Target must be \\\"archiver\\\" or \\\"bgwriter\\\".\")));\n> +\t\t\t\t errhint(\"Target must be \\\"archiver\\\" or \\\"bgwriter\\\" or\n> \\\"walwriter\\\".\")));\n> \n> There are two \"or\" in the message, but the former should be replaced \n> with \",\"?\n\nThanks, I fixed.\n\n\nOn 2020-09-05 18:40, Magnus Hagander wrote:\n> On Fri, Sep 4, 2020 at 5:42 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n> \n>> On 2020/09/04 11:50, tsunakawa.takay@fujitsu.com wrote:\n>>> From: Fujii Masao <masao.fujii@oss.nttdata.com>\n>>>>> I changed the view name from pg_stat_walwrites to\n>> pg_stat_walwriter.\n>>>>> I think it is better to match naming scheme with other views\n>> like\n>>>> pg_stat_bgwriter,\n>>>>> which is for bgwriter statistics but it has the statistics\n>> related to backend.\n>>>> \n>>>> I prefer the view name pg_stat_walwriter for the consistency with\n>>>> other view names. But we also have pg_stat_wal_receiver. Which\n>>>> makes me think that maybe pg_stat_wal_writer is better for\n>>>> the consistency. Thought? IMO either of them works for me.\n>>>> I'd like to hear more opinons about this.\n>>> \n>>> I think pg_stat_bgwriter is now a misnomer, because it contains\n>> the backends' activity. Likewise, pg_stat_walwriter leads to\n>> misunderstanding because its information is not limited to WAL\n>> writer.\n>>> \n>>> How about simply pg_stat_wal? In the future, we may want to\n>> include WAL reads in this view, e.g. reading undo logs in zheap.\n>> \n>> Sounds reasonable.\n> \n> +1.\n> \n> pg_stat_bgwriter has had the \"wrong name\" for quite some time now --\n> it became even more apparent when the checkpointer was split out to\n> it's own process, and that's not exactly a recent change. 
And it had\n> allocs in it from day one...\n> \n> I think naming it for what the data in it is (\"wal\") rather than which\n> process deals with it (\"walwriter\") is correct, unless the statistics\n> can be known to only *ever* affect one type of process. (And then\n> different processes can affect different columns in the view). As a\n> general rule -- and that's from what I can tell exactly what's being\n> proposed.\n\nThanks for your comments. I agree with your opinions.\nI changed the view name to \"pg_stat_wal\".\n\n\nI fixed the code to send the WAL statistics from not only backend and \nwalwriter\nbut also checkpointer, walsender and autovacuum worker.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Mon, 07 Sep 2020 09:58:14 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/09/07 9:58, Masahiro Ikeda wrote:\n> Thanks for the review and advice!\n> \n> On 2020-09-03 16:05, Fujii Masao wrote:\n>> On 2020/09/02 18:56, Masahiro Ikeda wrote:\n>>>> +/* ----------\n>>>> + * Backend types\n>>>> + * ----------\n>>>>\n>>>> You seem to forget to add \"*/\" into the above comment.\n>>>> This issue could cause the following compiler warning.\n>>>> ../../src/include/pgstat.h:761:1: warning: '/*' within block comment [-Wcomment]\n>>>\n>>> Thanks for the comment. I fixed.\n>>\n>> Thanks for the fix! But why are those comments necessary?\n> \n> Sorry about that. This comment is not necessary.\n> I removed it.\n> \n>>> The pg_stat_walwriter is not security restricted now, so ordinary users can access it.\n>>> It has the same security level as pg_stat_archiver. If you have any comments, please let me know.\n>>\n>> +       <structfield>dirty_writes</structfield> <type>bigint</type>\n>>\n>> I guess that the column name \"dirty_writes\" derived from\n>> the DTrace probe name. Isn't this name confusing? We should\n>> rename it to \"wal_buffers_full\" or something?\n> \n> I agree and rename it to \"wal_buffers_full\".\n> \n>> +/* ----------\n>> + * PgStat_MsgWalWriter\t\t\tSent by the walwriter to update statistics.\n>>\n>> This comment seems not accurate because backends also send it.\n>>\n>> +/*\n>> + * WAL writes statistics counter is updated in XLogWrite function\n>> + */\n>> +extern PgStat_MsgWalWriter WalWriterStats;\n>>\n>> This comment seems not right because the counter is not updated in XLogWrite().\n> \n> Right. 
I fixed it to \"Sent by each backend and background workers to update WAL statistics.\"\n> In the future, other statistics will be included so I remove the function's name.\n> \n> \n>> +-- There will surely and maximum one record\n>> +select count(*) = 1 as ok from pg_stat_walwriter;\n>>\n>> What about changing this comment to \"There must be only one record\"?\n> \n> Thanks, I fixed.\n> \n>> +\t\t\t\t\tWalWriterStats.m_xlog_dirty_writes++;\n>>  \t\t\t\t\tLWLockRelease(WALWriteLock);\n>>\n>> Since WalWriterStats.m_xlog_dirty_writes doesn't need to be protected\n>> with WALWriteLock, isn't it better to increment that after releasing the lock?\n> \n> Thanks, I fixed.\n> \n>> +CREATE VIEW pg_stat_walwriter AS\n>> +    SELECT\n>> +        pg_stat_get_xlog_dirty_writes() AS dirty_writes,\n>> +        pg_stat_get_walwriter_stat_reset_time() AS stats_reset;\n>> +\n>>  CREATE VIEW pg_stat_progress_vacuum AS\n>>\n>> In system_views.sql, the definition of pg_stat_walwriter should be\n>> placed just after that of pg_stat_bgwriter not pg_stat_progress_analyze.\n> \n> OK, I fixed it.\n> \n>> \t}\n>> -\n>> \t/*\n>> \t * We found an existing collector stats file. Read it and put all the\n>>\n>> You seem to accidentally have removed the empty line here.\n> \n> Sorry about that. 
I fixed it.\n> \n>> -\t\t\t\t errhint(\"Target must be \\\"archiver\\\" or \\\"bgwriter\\\".\")));\n>> +\t\t\t\t errhint(\"Target must be \\\"archiver\\\" or \\\"bgwriter\\\" or\n>> \\\"walwriter\\\".\")));\n>>\n>> There are two \"or\" in the message, but the former should be replaced with \",\"?\n> \n> Thanks, I fixed.\n> \n> \n> On 2020-09-05 18:40, Magnus Hagander wrote:\n>> On Fri, Sep 4, 2020 at 5:42 AM Fujii Masao\n>> <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>> On 2020/09/04 11:50, tsunakawa.takay@fujitsu.com wrote:\n>>>> From: Fujii Masao <masao.fujii@oss.nttdata.com>\n>>>>>> I changed the view name from pg_stat_walwrites to\n>>> pg_stat_walwriter.\n>>>>>> I think it is better to match naming scheme with other views\n>>> like\n>>>>> pg_stat_bgwriter,\n>>>>>> which is for bgwriter statistics but it has the statistics\n>>> related to backend.\n>>>>>\n>>>>> I prefer the view name pg_stat_walwriter for the consistency with\n>>>>> other view names. But we also have pg_stat_wal_receiver. Which\n>>>>> makes me think that maybe pg_stat_wal_writer is better for\n>>>>> the consistency. Thought? IMO either of them works for me.\n>>>>> I'd like to hear more opinons about this.\n>>>>\n>>>> I think pg_stat_bgwriter is now a misnomer, because it contains\n>>> the backends' activity. Likewise, pg_stat_walwriter leads to\n>>> misunderstanding because its information is not limited to WAL\n>>> writer.\n>>>>\n>>>> How about simply pg_stat_wal? In the future, we may want to\n>>> include WAL reads in this view, e.g. reading undo logs in zheap.\n>>>\n>>> Sounds reasonable.\n>>\n>> +1.\n>>\n>> pg_stat_bgwriter has had the \"wrong name\" for quite some time now --\n>> it became even more apparent when the checkpointer was split out to\n>> it's own process, and that's not exactly a recent change. 
And it had\n>> allocs in it from day one...\n>>\n>> I think naming it for what the data in it is (\"wal\") rather than which\n>> process deals with it (\"walwriter\") is correct, unless the statistics\n>> can be known to only *ever* affect one type of process. (And then\n>> different processes can affect different columns in the view). As a\n>> general rule -- and that's from what I can tell exactly what's being\n>> proposed.\n> \n> Thanks for your comments. I agree with your opinions.\n> I changed the view name to \"pg_stat_wal\".\n> \n> \n> I fixed the code to send the WAL statistics from not only backend and walwriter\n> but also checkpointer, walsender and autovacuum worker.\n\nGood point! Thanks for updating the patch!\n\n\n@@ -604,6 +604,7 @@ heap_vacuum_rel(Relation onerel, VacuumParams *params,\n \t\t\t\t\t\t onerel->rd_rel->relisshared,\n \t\t\t\t\t\t Max(new_live_tuples, 0),\n \t\t\t\t\t\t vacrelstats->new_dead_tuples);\n+\tpgstat_send_wal();\n\nI guess that you changed heap_vacuum_rel() as above so that autovacuum\nworkers can send WAL stats. But heap_vacuum_rel() can be called by\nthe processes (e.g., backends) other than autovacuum workers? Also\nwhat happens if autovacuum workers just do ANALYZE only? In that case,\nheap_vacuum_rel() may not be called.\n\nCurrently autovacuum worker reports the stats at the exit via\npgstat_beshutdown_hook(). Unlike other processes, autovacuum worker\nis not the process that basically keeps running during the service. It exits\nafter it does vacuum or analyze. So ISTM that it's not bad to report the stats\nonly at the exit, in autovacuum worker case. There is no need to add extra\ncode for WAL stats report by autovacuum worker. Thought?\n\n\n@@ -1430,6 +1430,9 @@ WalSndWaitForWal(XLogRecPtr loc)\n \t\telse\n \t\t\tRecentFlushPtr = GetXLogReplayRecPtr(NULL);\n \n+\t\t/* Send wal statistics */\n+\t\tpgstat_send_wal();\n\nAFAIR logical walsender uses three loops in WalSndLoop(), WalSndWriteData()\nand WalSndWaitForWal(). 
But could you tell me why added pgstat_send_wal()\ninto WalSndWaitForWal()? I'd like to know why WalSndWaitForWal() is the best\nfor that purpose.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 7 Sep 2020 16:19:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On 2020-09-07 16:19, Fujii Masao wrote:\n> On 2020/09/07 9:58, Masahiro Ikeda wrote:\n>> Thanks for the review and advice!\n>> \n>> On 2020-09-03 16:05, Fujii Masao wrote:\n>>> On 2020/09/02 18:56, Masahiro Ikeda wrote:\n>>>>> +/* ----------\n>>>>> + * Backend types\n>>>>> + * ----------\n>>>>> \n>>>>> You seem to forget to add \"*/\" into the above comment.\n>>>>> This issue could cause the following compiler warning.\n>>>>> ../../src/include/pgstat.h:761:1: warning: '/*' within block \n>>>>> comment [-Wcomment]\n>>>> \n>>>> Thanks for the comment. I fixed.\n>>> \n>>> Thanks for the fix! But why are those comments necessary?\n>> \n>> Sorry about that. This comment is not necessary.\n>> I removed it.\n>> \n>>>> The pg_stat_walwriter is not security restricted now, so ordinary \n>>>> users can access it.\n>>>> It has the same security level as pg_stat_archiver. If you have any \n>>>> comments, please let me know.\n>>> \n>>> + <structfield>dirty_writes</structfield> <type>bigint</type>\n>>> \n>>> I guess that the column name \"dirty_writes\" derived from\n>>> the DTrace probe name. Isn't this name confusing? We should\n>>> rename it to \"wal_buffers_full\" or something?\n>> \n>> I agree and rename it to \"wal_buffers_full\".\n>> \n>>> +/* ----------\n>>> + * PgStat_MsgWalWriter Sent by the walwriter to update \n>>> statistics.\n>>> \n>>> This comment seems not accurate because backends also send it.\n>>> \n>>> +/*\n>>> + * WAL writes statistics counter is updated in XLogWrite function\n>>> + */\n>>> +extern PgStat_MsgWalWriter WalWriterStats;\n>>> \n>>> This comment seems not right because the counter is not updated in \n>>> XLogWrite().\n>> \n>> Right. 
I fixed it to \"Sent by each backend and background workers to \n>> update WAL statistics.\"\n>> In the future, other statistics will be included so I remove the \n>> function's name.\n>> \n>> \n>>> +-- There will surely and maximum one record\n>>> +select count(*) = 1 as ok from pg_stat_walwriter;\n>>> \n>>> What about changing this comment to \"There must be only one record\"?\n>> \n>> Thanks, I fixed.\n>> \n>>> + WalWriterStats.m_xlog_dirty_writes++;\n>>> LWLockRelease(WALWriteLock);\n>>> \n>>> Since WalWriterStats.m_xlog_dirty_writes doesn't need to be protected\n>>> with WALWriteLock, isn't it better to increment that after releasing \n>>> the lock?\n>> \n>> Thanks, I fixed.\n>> \n>>> +CREATE VIEW pg_stat_walwriter AS\n>>> + SELECT\n>>> + pg_stat_get_xlog_dirty_writes() AS dirty_writes,\n>>> + pg_stat_get_walwriter_stat_reset_time() AS stats_reset;\n>>> +\n>>> CREATE VIEW pg_stat_progress_vacuum AS\n>>> \n>>> In system_views.sql, the definition of pg_stat_walwriter should be\n>>> placed just after that of pg_stat_bgwriter not \n>>> pg_stat_progress_analyze.\n>> \n>> OK, I fixed it.\n>> \n>>> }\n>>> -\n>>> /*\n>>> * We found an existing collector stats file. Read it and put \n>>> all the\n>>> \n>>> You seem to accidentally have removed the empty line here.\n>> \n>> Sorry about that. 
I fixed it.\n>> \n>>> - errhint(\"Target must be \\\"archiver\\\" or \n>>> \\\"bgwriter\\\".\")));\n>>> + errhint(\"Target must be \\\"archiver\\\" or \n>>> \\\"bgwriter\\\" or\n>>> \\\"walwriter\\\".\")));\n>>> \n>>> There are two \"or\" in the message, but the former should be replaced \n>>> with \",\"?\n>> \n>> Thanks, I fixed.\n>> \n>> \n>> On 2020-09-05 18:40, Magnus Hagander wrote:\n>>> On Fri, Sep 4, 2020 at 5:42 AM Fujii Masao\n>>> <masao.fujii@oss.nttdata.com> wrote:\n>>> \n>>>> On 2020/09/04 11:50, tsunakawa.takay@fujitsu.com wrote:\n>>>>> From: Fujii Masao <masao.fujii@oss.nttdata.com>\n>>>>>>> I changed the view name from pg_stat_walwrites to\n>>>> pg_stat_walwriter.\n>>>>>>> I think it is better to match naming scheme with other views\n>>>> like\n>>>>>> pg_stat_bgwriter,\n>>>>>>> which is for bgwriter statistics but it has the statistics\n>>>> related to backend.\n>>>>>> \n>>>>>> I prefer the view name pg_stat_walwriter for the consistency with\n>>>>>> other view names. But we also have pg_stat_wal_receiver. Which\n>>>>>> makes me think that maybe pg_stat_wal_writer is better for\n>>>>>> the consistency. Thought? IMO either of them works for me.\n>>>>>> I'd like to hear more opinons about this.\n>>>>> \n>>>>> I think pg_stat_bgwriter is now a misnomer, because it contains\n>>>> the backends' activity. Likewise, pg_stat_walwriter leads to\n>>>> misunderstanding because its information is not limited to WAL\n>>>> writer.\n>>>>> \n>>>>> How about simply pg_stat_wal? In the future, we may want to\n>>>> include WAL reads in this view, e.g. reading undo logs in zheap.\n>>>> \n>>>> Sounds reasonable.\n>>> \n>>> +1.\n>>> \n>>> pg_stat_bgwriter has had the \"wrong name\" for quite some time now --\n>>> it became even more apparent when the checkpointer was split out to\n>>> it's own process, and that's not exactly a recent change. 
And it had\n>>> allocs in it from day one...\n>>> \n>>> I think naming it for what the data in it is (\"wal\") rather than \n>>> which\n>>> process deals with it (\"walwriter\") is correct, unless the statistics\n>>> can be known to only *ever* affect one type of process. (And then\n>>> different processes can affect different columns in the view). As a\n>>> general rule -- and that's from what I can tell exactly what's being\n>>> proposed.\n>> \n>> Thanks for your comments. I agree with your opinions.\n>> I changed the view name to \"pg_stat_wal\".\n>> \n>> \n>> I fixed the code to send the WAL statistics from not only backend and \n>> walwriter\n>> but also checkpointer, walsender and autovacuum worker.\n> \n> Good point! Thanks for updating the patch!\n> \n> \n> @@ -604,6 +604,7 @@ heap_vacuum_rel(Relation onerel, VacuumParams \n> *params,\n> \t\t\t\t\t\t onerel->rd_rel->relisshared,\n> \t\t\t\t\t\t Max(new_live_tuples, 0),\n> \t\t\t\t\t\t vacrelstats->new_dead_tuples);\n> +\tpgstat_send_wal();\n> \n> I guess that you changed heap_vacuum_rel() as above so that autovacuum\n> workers can send WAL stats. But heap_vacuum_rel() can be called by\n> the processes (e.g., backends) other than autovacuum workers? Also\n> what happens if autovacuum workers just do ANALYZE only? In that case,\n> heap_vacuum_rel() may not be called.\n> \n> Currently autovacuum worker reports the stats at the exit via\n> pgstat_beshutdown_hook(). Unlike other processes, autovacuum worker\n> is not the process that basically keeps running during the service. It \n> exits\n> after it does vacuum or analyze. So ISTM that it's not bad to report \n> the stats\n> only at the exit, in autovacuum worker case. There is no need to add \n> extra\n> code for WAL stats report by autovacuum worker. Thought?\n\nThanks, I understood. 
I removed this code.\n\n> \n> @@ -1430,6 +1430,9 @@ WalSndWaitForWal(XLogRecPtr loc)\n> \t\telse\n> \t\t\tRecentFlushPtr = GetXLogReplayRecPtr(NULL);\n> +\t\t/* Send wal statistics */\n> +\t\tpgstat_send_wal();\n> \n> AFAIR logical walsender uses three loops in WalSndLoop(), \n> WalSndWriteData()\n> and WalSndWaitForWal(). But could you tell me why added \n> pgstat_send_wal()\n> into WalSndWaitForWal()? I'd like to know why WalSndWaitForWal() is the \n> best\n> for that purpose.\n\nI checked what function calls XLogBackgroundFlush() which calls\nAdvanceXLInsertBuffer() to increment m_wal_buffers_full.\n\nI found that WalSndWaitForWal() calls it, so I added it.\nIs it better to move it in WalSndLoop() like the attached patch?\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Wed, 09 Sep 2020 13:57:37 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/09/09 13:57, Masahiro Ikeda wrote:\n> On 2020-09-07 16:19, Fujii Masao wrote:\n>> On 2020/09/07 9:58, Masahiro Ikeda wrote:\n>>> Thanks for the review and advice!\n>>>\n>>> On 2020-09-03 16:05, Fujii Masao wrote:\n>>>> On 2020/09/02 18:56, Masahiro Ikeda wrote:\n>>>>>> +/* ----------\n>>>>>> + * Backend types\n>>>>>> + * ----------\n>>>>>>\n>>>>>> You seem to forget to add \"*/\" into the above comment.\n>>>>>> This issue could cause the following compiler warning.\n>>>>>> ../../src/include/pgstat.h:761:1: warning: '/*' within block comment [-Wcomment]\n>>>>>\n>>>>> Thanks for the comment. I fixed.\n>>>>\n>>>> Thanks for the fix! But why are those comments necessary?\n>>>\n>>> Sorry about that. This comment is not necessary.\n>>> I removed it.\n>>>\n>>>>> The pg_stat_walwriter is not security restricted now, so ordinary users can access it.\n>>>>> It has the same security level as pg_stat_archiver. If you have any comments, please let me know.\n>>>>\n>>>> + <structfield>dirty_writes</structfield> <type>bigint</type>\n>>>>\n>>>> I guess that the column name \"dirty_writes\" derived from\n>>>> the DTrace probe name. Isn't this name confusing? We should\n>>>> rename it to \"wal_buffers_full\" or something?\n>>>\n>>> I agree and rename it to \"wal_buffers_full\".\n>>>\n>>>> +/* ----------\n>>>> + * PgStat_MsgWalWriter Sent by the walwriter to update statistics.\n>>>>\n>>>> This comment seems not accurate because backends also send it.\n>>>>\n>>>> +/*\n>>>> + * WAL writes statistics counter is updated in XLogWrite function\n>>>> + */\n>>>> +extern PgStat_MsgWalWriter WalWriterStats;\n>>>>\n>>>> This comment seems not right because the counter is not updated in XLogWrite().\n>>>\n>>> Right. 
I fixed it to \"Sent by each backend and background workers to update WAL statistics.\"\n>>> In the future, other statistics will be included so I remove the function's name.\n>>>\n>>>\n>>>> +-- There will surely and maximum one record\n>>>> +select count(*) = 1 as ok from pg_stat_walwriter;\n>>>>\n>>>> What about changing this comment to \"There must be only one record\"?\n>>>\n>>> Thanks, I fixed.\n>>>\n>>>> + WalWriterStats.m_xlog_dirty_writes++;\n>>>> LWLockRelease(WALWriteLock);\n>>>>\n>>>> Since WalWriterStats.m_xlog_dirty_writes doesn't need to be protected\n>>>> with WALWriteLock, isn't it better to increment that after releasing the lock?\n>>>\n>>> Thanks, I fixed.\n>>>\n>>>> +CREATE VIEW pg_stat_walwriter AS\n>>>> + SELECT\n>>>> + pg_stat_get_xlog_dirty_writes() AS dirty_writes,\n>>>> + pg_stat_get_walwriter_stat_reset_time() AS stats_reset;\n>>>> +\n>>>> CREATE VIEW pg_stat_progress_vacuum AS\n>>>>\n>>>> In system_views.sql, the definition of pg_stat_walwriter should be\n>>>> placed just after that of pg_stat_bgwriter not pg_stat_progress_analyze.\n>>>\n>>> OK, I fixed it.\n>>>\n>>>> }\n>>>> -\n>>>> /*\n>>>> * We found an existing collector stats file. Read it and put all the\n>>>>\n>>>> You seem to accidentally have removed the empty line here.\n>>>\n>>> Sorry about that. 
I fixed it.\n>>>\n>>>> - errhint(\"Target must be \\\"archiver\\\" or \\\"bgwriter\\\".\")));\n>>>> + errhint(\"Target must be \\\"archiver\\\" or \\\"bgwriter\\\" or\n>>>> \\\"walwriter\\\".\")));\n>>>>\n>>>> There are two \"or\" in the message, but the former should be replaced with \",\"?\n>>>\n>>> Thanks, I fixed.\n>>>\n>>>\n>>> On 2020-09-05 18:40, Magnus Hagander wrote:\n>>>> On Fri, Sep 4, 2020 at 5:42 AM Fujii Masao\n>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>> On 2020/09/04 11:50, tsunakawa.takay@fujitsu.com wrote:\n>>>>>> From: Fujii Masao <masao.fujii@oss.nttdata.com>\n>>>>>>>> I changed the view name from pg_stat_walwrites to\n>>>>> pg_stat_walwriter.\n>>>>>>>> I think it is better to match naming scheme with other views\n>>>>> like\n>>>>>>> pg_stat_bgwriter,\n>>>>>>>> which is for bgwriter statistics but it has the statistics\n>>>>> related to backend.\n>>>>>>>\n>>>>>>> I prefer the view name pg_stat_walwriter for the consistency with\n>>>>>>> other view names. But we also have pg_stat_wal_receiver. Which\n>>>>>>> makes me think that maybe pg_stat_wal_writer is better for\n>>>>>>> the consistency. Thought? IMO either of them works for me.\n>>>>>>> I'd like to hear more opinons about this.\n>>>>>>\n>>>>>> I think pg_stat_bgwriter is now a misnomer, because it contains\n>>>>> the backends' activity. Likewise, pg_stat_walwriter leads to\n>>>>> misunderstanding because its information is not limited to WAL\n>>>>> writer.\n>>>>>>\n>>>>>> How about simply pg_stat_wal? In the future, we may want to\n>>>>> include WAL reads in this view, e.g. reading undo logs in zheap.\n>>>>>\n>>>>> Sounds reasonable.\n>>>>\n>>>> +1.\n>>>>\n>>>> pg_stat_bgwriter has had the \"wrong name\" for quite some time now --\n>>>> it became even more apparent when the checkpointer was split out to\n>>>> it's own process, and that's not exactly a recent change. 
And it had\n>>>> allocs in it from day one...\n>>>>\n>>>> I think naming it for what the data in it is (\"wal\") rather than which\n>>>> process deals with it (\"walwriter\") is correct, unless the statistics\n>>>> can be known to only *ever* affect one type of process. (And then\n>>>> different processes can affect different columns in the view). As a\n>>>> general rule -- and that's from what I can tell exactly what's being\n>>>> proposed.\n>>>\n>>> Thanks for your comments. I agree with your opinions.\n>>> I changed the view name to \"pg_stat_wal\".\n>>>\n>>>\n>>> I fixed the code to send the WAL statistics from not only backend and walwriter\n>>> but also checkpointer, walsender and autovacuum worker.\n>>\n>> Good point! Thanks for updating the patch!\n>>\n>>\n>> @@ -604,6 +604,7 @@ heap_vacuum_rel(Relation onerel, VacuumParams *params,\n>> onerel->rd_rel->relisshared,\n>> Max(new_live_tuples, 0),\n>> vacrelstats->new_dead_tuples);\n>> + pgstat_send_wal();\n>>\n>> I guess that you changed heap_vacuum_rel() as above so that autovacuum\n>> workers can send WAL stats. But heap_vacuum_rel() can be called by\n>> the processes (e.g., backends) other than autovacuum workers? Also\n>> what happens if autovacuum workers just do ANALYZE only? In that case,\n>> heap_vacuum_rel() may not be called.\n>>\n>> Currently autovacuum worker reports the stats at the exit via\n>> pgstat_beshutdown_hook(). Unlike other processes, autovacuum worker\n>> is not the process that basically keeps running during the service. It exits\n>> after it does vacuum or analyze. So ISTM that it's not bad to report the stats\n>> only at the exit, in autovacuum worker case. There is no need to add extra\n>> code for WAL stats report by autovacuum worker. Thought?\n> \n> Thanks, I understood. 
I removed this code.\n> \n>>\n>> @@ -1430,6 +1430,9 @@ WalSndWaitForWal(XLogRecPtr loc)\n>> else\n>> RecentFlushPtr = GetXLogReplayRecPtr(NULL);\n>> + /* Send wal statistics */\n>> + pgstat_send_wal();\n>>\n>> AFAIR logical walsender uses three loops in WalSndLoop(), WalSndWriteData()\n>> and WalSndWaitForWal(). But could you tell me why added pgstat_send_wal()\n>> into WalSndWaitForWal()? I'd like to know why WalSndWaitForWal() is the best\n>> for that purpose.\n> \n> I checked what function calls XLogBackgroundFlush() which calls\n> AdvanceXLInsertBuffer() to increment m_wal_buffers_full.\n> \n> I found that WalSndWaitForWal() calls it, so I added it.\n\nOk. But XLogBackgroundFlush() calls AdvanceXLInsertBuffer() with the second argument opportunistic=true, so in this case WAL write by wal_buffers full seems to never happen. Right? If this understanding is right, WalSndWaitForWal() doesn't need to call pgstat_send_wal(). Probably also walwriter doesn't need to do that.\n\nThe logical rep walsender can generate WAL and call AdvanceXLInsertBuffer() when it executes the replication commands like CREATE_REPLICATION_SLOT. But this case is already covered by pgstat_report_activity()->pgstat_send_wal() called in PostgresMain(), with your patch. So no more calls to pgstat_send_wal() seems necessary for logical rep walsender.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 11 Sep 2020 01:40:59 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "Hello.\n\nAt Wed, 09 Sep 2020 13:57:37 +0900, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote in \n> I checked what function calls XLogBackgroundFlush() which calls\n> AdvanceXLInsertBuffer() to increment m_wal_buffers_full.\n> \n> I found that WalSndWaitForWal() calls it, so I added it.\n> Is it better to move it in WalSndLoop() like the attached patch?\n\nBy the way, we are counting some wal-related numbers in\npgWalUsage.(bytes, records, fpi). Since now that we are going to have\na new view related to WAL statistics, wouln't it be more useful to\nshow them together in the view?\n\n(Another reason to propose this is that a substantially one-column\n table may look not-great..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 11 Sep 2020 12:17:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/09/11 12:17, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> At Wed, 09 Sep 2020 13:57:37 +0900, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote in\n>> I checked what function calls XLogBackgroundFlush() which calls\n>> AdvanceXLInsertBuffer() to increment m_wal_buffers_full.\n>>\n>> I found that WalSndWaitForWal() calls it, so I added it.\n>> Is it better to move it in WalSndLoop() like the attached patch?\n> \n> By the way, we are counting some wal-related numbers in\n> pgWalUsage.(bytes, records, fpi). Since now that we are going to have\n> a new view related to WAL statistics, wouln't it be more useful to\n> show them together in the view?\n\nProbably yes. But IMO it's better to commit the current patch first, and then add those stats into the view after confirming exposing them is useful.\n\nBTW, to expose the total WAL bytes, I think it's better to just save the LSN at when pg_stat_wal is reset rather than counting pgWalUsage.bytes. If we do that, we can easily total WAL bytes by subtracting that LSN from the latest LSN. Also saving the LSN at the reset timing causes obviously less overhead than counting pgWalUsage.bytes.\n\n\n> (Another reason to propose this is that a substantially one-column\n> table may look not-great..)\n\nI'm ok with such \"small\" view. But if this is really problem, I'm ok to expose only functions pg_stat_get_wal_buffers_full() and pg_stat_get_wal_stat_reset_time(), without the view, at first.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 11 Sep 2020 13:48:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "At Fri, 11 Sep 2020 13:48:49 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/09/11 12:17, Kyotaro Horiguchi wrote:\n> > Hello.\n> > At Wed, 09 Sep 2020 13:57:37 +0900, Masahiro Ikeda\n> > <ikedamsh@oss.nttdata.com> wrote in\n> >> I checked what function calls XLogBackgroundFlush() which calls\n> >> AdvanceXLInsertBuffer() to increment m_wal_buffers_full.\n> >>\n> >> I found that WalSndWaitForWal() calls it, so I added it.\n> >> Is it better to move it in WalSndLoop() like the attached patch?\n> > By the way, we are counting some wal-related numbers in\n> > pgWalUsage.(bytes, records, fpi). Since now that we are going to have\n> > a new view related to WAL statistics, wouln't it be more useful to\n> > show them together in the view?\n> \n> Probably yes. But IMO it's better to commit the current patch first,\n> and then add those stats into the view after confirming exposing them\n> is useful.\n\nI'm fine with that.\n\n> BTW, to expose the total WAL bytes, I think it's better to just save\n> the LSN at when pg_stat_wal is reset rather than counting\n> pgWalUsage.bytes. If we do that, we can easily total WAL bytes by\n> subtracting that LSN from the latest LSN. Also saving the LSN at the\n> reset timing causes obviously less overhead than counting\n> pgWalUsage.bytes.\n\npgWalUsage is always counting so it doesn't add any overhead. But\nsince it cannot be reset, the value needs to be saved at reset time\nlike LSN. I don't mind either way we take from performance\nperspective.\n\n> > (Another reason to propose this is that a substantially one-column\n> > table may look not-great..)\n> \n> I'm ok with such \"small\" view. 
But if this is really problem, I'm ok\n> to expose only functions pg_stat_get_wal_buffers_full() and\n> pg_stat_get_wal_stat_reset_time(), without the view, at first.\n\nI don't mind that we have such small views as far as it is promised to\ngrow up:p\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 11 Sep 2020 16:54:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/09/11 16:54, Kyotaro Horiguchi wrote:\n> At Fri, 11 Sep 2020 13:48:49 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2020/09/11 12:17, Kyotaro Horiguchi wrote:\n>>> Hello.\n>>> At Wed, 09 Sep 2020 13:57:37 +0900, Masahiro Ikeda\n>>> <ikedamsh@oss.nttdata.com> wrote in\n>>>> I checked what function calls XLogBackgroundFlush() which calls\n>>>> AdvanceXLInsertBuffer() to increment m_wal_buffers_full.\n>>>>\n>>>> I found that WalSndWaitForWal() calls it, so I added it.\n>>>> Is it better to move it in WalSndLoop() like the attached patch?\n>>> By the way, we are counting some wal-related numbers in\n>>> pgWalUsage.(bytes, records, fpi). Since now that we are going to have\n>>> a new view related to WAL statistics, wouln't it be more useful to\n>>> show them together in the view?\n>>\n>> Probably yes. But IMO it's better to commit the current patch first,\n>> and then add those stats into the view after confirming exposing them\n>> is useful.\n> \n> I'm fine with that.\n> \n>> BTW, to expose the total WAL bytes, I think it's better to just save\n>> the LSN at when pg_stat_wal is reset rather than counting\n>> pgWalUsage.bytes. If we do that, we can easily total WAL bytes by\n>> subtracting that LSN from the latest LSN. Also saving the LSN at the\n>> reset timing causes obviously less overhead than counting\n>> pgWalUsage.bytes.\n> \n> pgWalUsage is always counting so it doesn't add any overhead.\n\nYes. And I'm a bit concerned about the overhead by frequent message sent for WAL bytes to the stats collector.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 11 Sep 2020 17:13:37 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On 2020-09-11 01:40, Fujii Masao wrote:\n> On 2020/09/09 13:57, Masahiro Ikeda wrote:\n>> On 2020-09-07 16:19, Fujii Masao wrote:\n>>> On 2020/09/07 9:58, Masahiro Ikeda wrote:\n>>>> Thanks for the review and advice!\n>>>> \n>>>> On 2020-09-03 16:05, Fujii Masao wrote:\n>>>>> On 2020/09/02 18:56, Masahiro Ikeda wrote:\n>>>>>>> +/* ----------\n>>>>>>> + * Backend types\n>>>>>>> + * ----------\n>>>>>>> \n>>>>>>> You seem to forget to add \"*/\" into the above comment.\n>>>>>>> This issue could cause the following compiler warning.\n>>>>>>> ../../src/include/pgstat.h:761:1: warning: '/*' within block \n>>>>>>> comment [-Wcomment]\n>>>>>> \n>>>>>> Thanks for the comment. I fixed.\n>>>>> \n>>>>> Thanks for the fix! But why are those comments necessary?\n>>>> \n>>>> Sorry about that. This comment is not necessary.\n>>>> I removed it.\n>>>> \n>>>>>> The pg_stat_walwriter is not security restricted now, so ordinary \n>>>>>> users can access it.\n>>>>>> It has the same security level as pg_stat_archiver. If you have \n>>>>>> any comments, please let me know.\n>>>>> \n>>>>> + <structfield>dirty_writes</structfield> <type>bigint</type>\n>>>>> \n>>>>> I guess that the column name \"dirty_writes\" derived from\n>>>>> the DTrace probe name. Isn't this name confusing? We should\n>>>>> rename it to \"wal_buffers_full\" or something?\n>>>> \n>>>> I agree and rename it to \"wal_buffers_full\".\n>>>> \n>>>>> +/* ----------\n>>>>> + * PgStat_MsgWalWriter Sent by the walwriter to update \n>>>>> statistics.\n>>>>> \n>>>>> This comment seems not accurate because backends also send it.\n>>>>> \n>>>>> +/*\n>>>>> + * WAL writes statistics counter is updated in XLogWrite function\n>>>>> + */\n>>>>> +extern PgStat_MsgWalWriter WalWriterStats;\n>>>>> \n>>>>> This comment seems not right because the counter is not updated in \n>>>>> XLogWrite().\n>>>> \n>>>> Right. 
I fixed it to \"Sent by each backend and background workers to \n>>>> update WAL statistics.\"\n>>>> In the future, other statistics will be included so I remove the \n>>>> function's name.\n>>>> \n>>>> \n>>>>> +-- There will surely and maximum one record\n>>>>> +select count(*) = 1 as ok from pg_stat_walwriter;\n>>>>> \n>>>>> What about changing this comment to \"There must be only one \n>>>>> record\"?\n>>>> \n>>>> Thanks, I fixed.\n>>>> \n>>>>> + WalWriterStats.m_xlog_dirty_writes++;\n>>>>> LWLockRelease(WALWriteLock);\n>>>>> \n>>>>> Since WalWriterStats.m_xlog_dirty_writes doesn't need to be \n>>>>> protected\n>>>>> with WALWriteLock, isn't it better to increment that after \n>>>>> releasing the lock?\n>>>> \n>>>> Thanks, I fixed.\n>>>> \n>>>>> +CREATE VIEW pg_stat_walwriter AS\n>>>>> + SELECT\n>>>>> + pg_stat_get_xlog_dirty_writes() AS dirty_writes,\n>>>>> + pg_stat_get_walwriter_stat_reset_time() AS stats_reset;\n>>>>> +\n>>>>> CREATE VIEW pg_stat_progress_vacuum AS\n>>>>> \n>>>>> In system_views.sql, the definition of pg_stat_walwriter should be\n>>>>> placed just after that of pg_stat_bgwriter not \n>>>>> pg_stat_progress_analyze.\n>>>> \n>>>> OK, I fixed it.\n>>>> \n>>>>> }\n>>>>> -\n>>>>> /*\n>>>>> * We found an existing collector stats file. Read it and put \n>>>>> all the\n>>>>> \n>>>>> You seem to accidentally have removed the empty line here.\n>>>> \n>>>> Sorry about that. 
I fixed it.\n>>>> \n>>>>> - errhint(\"Target must be \\\"archiver\\\" or \n>>>>> \\\"bgwriter\\\".\")));\n>>>>> + errhint(\"Target must be \\\"archiver\\\" or \n>>>>> \\\"bgwriter\\\" or\n>>>>> \\\"walwriter\\\".\")));\n>>>>> \n>>>>> There are two \"or\" in the message, but the former should be \n>>>>> replaced with \",\"?\n>>>> \n>>>> Thanks, I fixed.\n>>>> \n>>>> \n>>>> On 2020-09-05 18:40, Magnus Hagander wrote:\n>>>>> On Fri, Sep 4, 2020 at 5:42 AM Fujii Masao\n>>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>> \n>>>>>> On 2020/09/04 11:50, tsunakawa.takay@fujitsu.com wrote:\n>>>>>>> From: Fujii Masao <masao.fujii@oss.nttdata.com>\n>>>>>>>>> I changed the view name from pg_stat_walwrites to\n>>>>>> pg_stat_walwriter.\n>>>>>>>>> I think it is better to match naming scheme with other views\n>>>>>> like\n>>>>>>>> pg_stat_bgwriter,\n>>>>>>>>> which is for bgwriter statistics but it has the statistics\n>>>>>> related to backend.\n>>>>>>>> \n>>>>>>>> I prefer the view name pg_stat_walwriter for the consistency \n>>>>>>>> with\n>>>>>>>> other view names. But we also have pg_stat_wal_receiver. Which\n>>>>>>>> makes me think that maybe pg_stat_wal_writer is better for\n>>>>>>>> the consistency. Thought? IMO either of them works for me.\n>>>>>>>> I'd like to hear more opinons about this.\n>>>>>>> \n>>>>>>> I think pg_stat_bgwriter is now a misnomer, because it contains\n>>>>>> the backends' activity. Likewise, pg_stat_walwriter leads to\n>>>>>> misunderstanding because its information is not limited to WAL\n>>>>>> writer.\n>>>>>>> \n>>>>>>> How about simply pg_stat_wal? In the future, we may want to\n>>>>>> include WAL reads in this view, e.g. reading undo logs in zheap.\n>>>>>> \n>>>>>> Sounds reasonable.\n>>>>> \n>>>>> +1.\n>>>>> \n>>>>> pg_stat_bgwriter has had the \"wrong name\" for quite some time now \n>>>>> --\n>>>>> it became even more apparent when the checkpointer was split out to\n>>>>> it's own process, and that's not exactly a recent change. 
And it \n>>>>> had\n>>>>> allocs in it from day one...\n>>>>> \n>>>>> I think naming it for what the data in it is (\"wal\") rather than \n>>>>> which\n>>>>> process deals with it (\"walwriter\") is correct, unless the \n>>>>> statistics\n>>>>> can be known to only *ever* affect one type of process. (And then\n>>>>> different processes can affect different columns in the view). As a\n>>>>> general rule -- and that's from what I can tell exactly what's \n>>>>> being\n>>>>> proposed.\n>>>> \n>>>> Thanks for your comments. I agree with your opinions.\n>>>> I changed the view name to \"pg_stat_wal\".\n>>>> \n>>>> \n>>>> I fixed the code to send the WAL statistics from not only backend \n>>>> and walwriter\n>>>> but also checkpointer, walsender and autovacuum worker.\n>>> \n>>> Good point! Thanks for updating the patch!\n>>> \n>>> \n>>> @@ -604,6 +604,7 @@ heap_vacuum_rel(Relation onerel, VacuumParams \n>>> *params,\n>>> onerel->rd_rel->relisshared,\n>>> Max(new_live_tuples, 0),\n>>> vacrelstats->new_dead_tuples);\n>>> + pgstat_send_wal();\n>>> \n>>> I guess that you changed heap_vacuum_rel() as above so that \n>>> autovacuum\n>>> workers can send WAL stats. But heap_vacuum_rel() can be called by\n>>> the processes (e.g., backends) other than autovacuum workers? Also\n>>> what happens if autovacuum workers just do ANALYZE only? In that \n>>> case,\n>>> heap_vacuum_rel() may not be called.\n>>> \n>>> Currently autovacuum worker reports the stats at the exit via\n>>> pgstat_beshutdown_hook(). Unlike other processes, autovacuum worker\n>>> is not the process that basically keeps running during the service. \n>>> It exits\n>>> after it does vacuum or analyze. So ISTM that it's not bad to report \n>>> the stats\n>>> only at the exit, in autovacuum worker case. There is no need to add \n>>> extra\n>>> code for WAL stats report by autovacuum worker. Thought?\n>> \n>> Thanks, I understood. 
I removed this code.\n>> \n>>> \n>>> @@ -1430,6 +1430,9 @@ WalSndWaitForWal(XLogRecPtr loc)\n>>> else\n>>> RecentFlushPtr = GetXLogReplayRecPtr(NULL);\n>>> + /* Send wal statistics */\n>>> + pgstat_send_wal();\n>>> \n>>> AFAIR logical walsender uses three loops in WalSndLoop(), \n>>> WalSndWriteData()\n>>> and WalSndWaitForWal(). But could you tell me why added \n>>> pgstat_send_wal()\n>>> into WalSndWaitForWal()? I'd like to know why WalSndWaitForWal() is \n>>> the best\n>>> for that purpose.\n>> \n>> I checked what function calls XLogBackgroundFlush() which calls\n>> AdvanceXLInsertBuffer() to increment m_wal_buffers_full.\n>> \n>> I found that WalSndWaitForWal() calls it, so I added it.\n> \n> Ok. But XLogBackgroundFlush() calls AdvanceXLInsertBuffer() wit the\n> second argument opportunistic=true, so in this case WAL write by\n> wal_buffers full seems to never happen. Right? If this understanding\n> is right, WalSndWaitForWal() doesn't need to call pgstat_send_wal().\n> Probably also walwriter doesn't need to do that.\n> \n> The logical rep walsender can generate WAL and call\n> AdvanceXLInsertBuffer() when it executes the replication commands like\n> CREATE_REPLICATION_SLOT. But this case is already covered by\n> pgstat_report_activity()->pgstat_send_wal() called in PostgresMain(),\n> with your patch. So no more calls to pgstat_send_wal() seems necessary\n> for logical rep walsender.\n\nThanks for your reviews. 
I didn't notice that.\nI updated the patches.\n\n\nOn 2020-09-11 17:13, Fujii Masao wrote:\n> On 2020/09/11 16:54, Kyotaro Horiguchi wrote:\n>> At Fri, 11 Sep 2020 13:48:49 +0900, Fujii Masao \n>> <masao.fujii@oss.nttdata.com> wrote in\n>>> \n>>> \n>>> On 2020/09/11 12:17, Kyotaro Horiguchi wrote:\n>>>> Hello.\n>>>> At Wed, 09 Sep 2020 13:57:37 +0900, Masahiro Ikeda\n>>>> <ikedamsh@oss.nttdata.com> wrote in\n>>>>> I checked what function calls XLogBackgroundFlush() which calls\n>>>>> AdvanceXLInsertBuffer() to increment m_wal_buffers_full.\n>>>>> \n>>>>> I found that WalSndWaitForWal() calls it, so I added it.\n>>>>> Is it better to move it in WalSndLoop() like the attached patch?\n>>>> By the way, we are counting some wal-related numbers in\n>>>> pgWalUsage.(bytes, records, fpi). Since now that we are going to \n>>>> have\n>>>> a new view related to WAL statistics, wouln't it be more useful to\n>>>> show them together in the view?\n>>> \n>>> Probably yes. But IMO it's better to commit the current patch first,\n>>> and then add those stats into the view after confirming exposing them\n>>> is useful.\n>> \n>> I'm fine with that.\n>> \n>>> BTW, to expose the total WAL bytes, I think it's better to just save\n>>> the LSN at when pg_stat_wal is reset rather than counting\n>>> pgWalUsage.bytes. If we do that, we can easily total WAL bytes by\n>>> subtracting that LSN from the latest LSN. Also saving the LSN at the\n>>> reset timing causes obviously less overhead than counting\n>>> pgWalUsage.bytes.\n>> \n>> pgWalUsage is always counting so it doesn't add any overhead.\n> \n> Yes. And I'm a bit concerned about the overhead by frequent message\n> sent for WAL bytes to the stats collector.\n\nThanks for the comments.\nI agree that we need to add more wal-related statistics\nafter this patch is committed.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Tue, 15 Sep 2020 15:52:30 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/09/15 15:52, Masahiro Ikeda wrote:\n> On 2020-09-11 01:40, Fujii Masao wrote:\n>> On 2020/09/09 13:57, Masahiro Ikeda wrote:\n>>> On 2020-09-07 16:19, Fujii Masao wrote:\n>>>> On 2020/09/07 9:58, Masahiro Ikeda wrote:\n>>>>> Thanks for the review and advice!\n>>>>>\n>>>>> On 2020-09-03 16:05, Fujii Masao wrote:\n>>>>>> On 2020/09/02 18:56, Masahiro Ikeda wrote:\n>>>>>>>> +/* ----------\n>>>>>>>> + * Backend types\n>>>>>>>> + * ----------\n>>>>>>>>\n>>>>>>>> You seem to forget to add \"*/\" into the above comment.\n>>>>>>>> This issue could cause the following compiler warning.\n>>>>>>>> ../../src/include/pgstat.h:761:1: warning: '/*' within block comment [-Wcomment]\n>>>>>>>\n>>>>>>> Thanks for the comment. I fixed.\n>>>>>>\n>>>>>> Thanks for the fix! But why are those comments necessary?\n>>>>>\n>>>>> Sorry about that. This comment is not necessary.\n>>>>> I removed it.\n>>>>>\n>>>>>>> The pg_stat_walwriter is not security restricted now, so ordinary users can access it.\n>>>>>>> It has the same security level as pg_stat_archiver. If you have any comments, please let me know.\n>>>>>>\n>>>>>> + <structfield>dirty_writes</structfield> <type>bigint</type>\n>>>>>>\n>>>>>> I guess that the column name \"dirty_writes\" derived from\n>>>>>> the DTrace probe name. Isn't this name confusing? We should\n>>>>>> rename it to \"wal_buffers_full\" or something?\n>>>>>\n>>>>> I agree and rename it to \"wal_buffers_full\".\n>>>>>\n>>>>>> +/* ----------\n>>>>>> + * PgStat_MsgWalWriter Sent by the walwriter to update statistics.\n>>>>>>\n>>>>>> This comment seems not accurate because backends also send it.\n>>>>>>\n>>>>>> +/*\n>>>>>> + * WAL writes statistics counter is updated in XLogWrite function\n>>>>>> + */\n>>>>>> +extern PgStat_MsgWalWriter WalWriterStats;\n>>>>>>\n>>>>>> This comment seems not right because the counter is not updated in XLogWrite().\n>>>>>\n>>>>> Right. 
I fixed it to \"Sent by each backend and background workers to update WAL statistics.\"\n>>>>> In the future, other statistics will be included so I remove the function's name.\n>>>>>\n>>>>>\n>>>>>> +-- There will surely and maximum one record\n>>>>>> +select count(*) = 1 as ok from pg_stat_walwriter;\n>>>>>>\n>>>>>> What about changing this comment to \"There must be only one record\"?\n>>>>>\n>>>>> Thanks, I fixed.\n>>>>>\n>>>>>> + WalWriterStats.m_xlog_dirty_writes++;\n>>>>>> LWLockRelease(WALWriteLock);\n>>>>>>\n>>>>>> Since WalWriterStats.m_xlog_dirty_writes doesn't need to be protected\n>>>>>> with WALWriteLock, isn't it better to increment that after releasing the lock?\n>>>>>\n>>>>> Thanks, I fixed.\n>>>>>\n>>>>>> +CREATE VIEW pg_stat_walwriter AS\n>>>>>> + SELECT\n>>>>>> + pg_stat_get_xlog_dirty_writes() AS dirty_writes,\n>>>>>> + pg_stat_get_walwriter_stat_reset_time() AS stats_reset;\n>>>>>> +\n>>>>>> CREATE VIEW pg_stat_progress_vacuum AS\n>>>>>>\n>>>>>> In system_views.sql, the definition of pg_stat_walwriter should be\n>>>>>> placed just after that of pg_stat_bgwriter not pg_stat_progress_analyze.\n>>>>>\n>>>>> OK, I fixed it.\n>>>>>\n>>>>>> }\n>>>>>> -\n>>>>>> /*\n>>>>>> * We found an existing collector stats file. Read it and put all the\n>>>>>>\n>>>>>> You seem to accidentally have removed the empty line here.\n>>>>>\n>>>>> Sorry about that. 
I fixed it.\n>>>>>\n>>>>>> - errhint(\"Target must be \\\"archiver\\\" or \\\"bgwriter\\\".\")));\n>>>>>> + errhint(\"Target must be \\\"archiver\\\" or \\\"bgwriter\\\" or\n>>>>>> \\\"walwriter\\\".\")));\n>>>>>>\n>>>>>> There are two \"or\" in the message, but the former should be replaced with \",\"?\n>>>>>\n>>>>> Thanks, I fixed.\n>>>>>\n>>>>>\n>>>>> On 2020-09-05 18:40, Magnus Hagander wrote:\n>>>>>> On Fri, Sep 4, 2020 at 5:42 AM Fujii Masao\n>>>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>>> On 2020/09/04 11:50, tsunakawa.takay@fujitsu.com wrote:\n>>>>>>>> From: Fujii Masao <masao.fujii@oss.nttdata.com>\n>>>>>>>>>> I changed the view name from pg_stat_walwrites to\n>>>>>>> pg_stat_walwriter.\n>>>>>>>>>> I think it is better to match naming scheme with other views\n>>>>>>> like\n>>>>>>>>> pg_stat_bgwriter,\n>>>>>>>>>> which is for bgwriter statistics but it has the statistics\n>>>>>>> related to backend.\n>>>>>>>>>\n>>>>>>>>> I prefer the view name pg_stat_walwriter for the consistency with\n>>>>>>>>> other view names. But we also have pg_stat_wal_receiver. Which\n>>>>>>>>> makes me think that maybe pg_stat_wal_writer is better for\n>>>>>>>>> the consistency. Thought? IMO either of them works for me.\n>>>>>>>>> I'd like to hear more opinons about this.\n>>>>>>>>\n>>>>>>>> I think pg_stat_bgwriter is now a misnomer, because it contains\n>>>>>>> the backends' activity. Likewise, pg_stat_walwriter leads to\n>>>>>>> misunderstanding because its information is not limited to WAL\n>>>>>>> writer.\n>>>>>>>>\n>>>>>>>> How about simply pg_stat_wal? In the future, we may want to\n>>>>>>> include WAL reads in this view, e.g. reading undo logs in zheap.\n>>>>>>>\n>>>>>>> Sounds reasonable.\n>>>>>>\n>>>>>> +1.\n>>>>>>\n>>>>>> pg_stat_bgwriter has had the \"wrong name\" for quite some time now --\n>>>>>> it became even more apparent when the checkpointer was split out to\n>>>>>> it's own process, and that's not exactly a recent change. 
And it had\n>>>>>> allocs in it from day one...\n>>>>>>\n>>>>>> I think naming it for what the data in it is (\"wal\") rather than which\n>>>>>> process deals with it (\"walwriter\") is correct, unless the statistics\n>>>>>> can be known to only *ever* affect one type of process. (And then\n>>>>>> different processes can affect different columns in the view). As a\n>>>>>> general rule -- and that's from what I can tell exactly what's being\n>>>>>> proposed.\n>>>>>\n>>>>> Thanks for your comments. I agree with your opinions.\n>>>>> I changed the view name to \"pg_stat_wal\".\n>>>>>\n>>>>>\n>>>>> I fixed the code to send the WAL statistics from not only backend and walwriter\n>>>>> but also checkpointer, walsender and autovacuum worker.\n>>>>\n>>>> Good point! Thanks for updating the patch!\n>>>>\n>>>>\n>>>> @@ -604,6 +604,7 @@ heap_vacuum_rel(Relation onerel, VacuumParams *params,\n>>>> onerel->rd_rel->relisshared,\n>>>> Max(new_live_tuples, 0),\n>>>> vacrelstats->new_dead_tuples);\n>>>> + pgstat_send_wal();\n>>>>\n>>>> I guess that you changed heap_vacuum_rel() as above so that autovacuum\n>>>> workers can send WAL stats. But heap_vacuum_rel() can be called by\n>>>> the processes (e.g., backends) other than autovacuum workers? Also\n>>>> what happens if autovacuum workers just do ANALYZE only? In that case,\n>>>> heap_vacuum_rel() may not be called.\n>>>>\n>>>> Currently autovacuum worker reports the stats at the exit via\n>>>> pgstat_beshutdown_hook(). Unlike other processes, autovacuum worker\n>>>> is not the process that basically keeps running during the service. It exits\n>>>> after it does vacuum or analyze. So ISTM that it's not bad to report the stats\n>>>> only at the exit, in autovacuum worker case. There is no need to add extra\n>>>> code for WAL stats report by autovacuum worker. Thought?\n>>>\n>>> Thanks, I understood. 
I removed this code.\n>>>\n>>>>\n>>>> @@ -1430,6 +1430,9 @@ WalSndWaitForWal(XLogRecPtr loc)\n>>>> else\n>>>> RecentFlushPtr = GetXLogReplayRecPtr(NULL);\n>>>> + /* Send wal statistics */\n>>>> + pgstat_send_wal();\n>>>>\n>>>> AFAIR logical walsender uses three loops in WalSndLoop(), WalSndWriteData()\n>>>> and WalSndWaitForWal(). But could you tell me why added pgstat_send_wal()\n>>>> into WalSndWaitForWal()? I'd like to know why WalSndWaitForWal() is the best\n>>>> for that purpose.\n>>>\n>>> I checked what function calls XLogBackgroundFlush() which calls\n>>> AdvanceXLInsertBuffer() to increment m_wal_buffers_full.\n>>>\n>>> I found that WalSndWaitForWal() calls it, so I added it.\n>>\n>> Ok. But XLogBackgroundFlush() calls AdvanceXLInsertBuffer() wit the\n>> second argument opportunistic=true, so in this case WAL write by\n>> wal_buffers full seems to never happen. Right? If this understanding\n>> is right, WalSndWaitForWal() doesn't need to call pgstat_send_wal().\n>> Probably also walwriter doesn't need to do that.\n\nThanks for updating the patch! This patch adds pgstat_send_wal() in\nwalwriter main loop. But isn't this unnecessary because of the above reason?\nThat is, since walwriter calls AdvanceXLInsertBuffer() with\nthe second argument \"opportunistic\" = true via XLogBackgroundFlush(),\nthe event of full wal_buffers will never happen. No?\n\n\n>>\n>> The logical rep walsender can generate WAL and call\n>> AdvanceXLInsertBuffer() when it executes the replication commands like\n>> CREATE_REPLICATION_SLOT. But this case is already covered by\n>> pgstat_report_activity()->pgstat_send_wal() called in PostgresMain(),\n>> with your patch. So no more calls to pgstat_send_wal() seems necessary\n>> for logical rep walsender.\n> \n> Thanks for your reviews. I didn't notice that.\n> I updated the patches.\n\nSorry, the above my analysis might be incorrect. During logical replication,\nwalsender may access to the system table. 
Which may cause HOT pruning\nor killing of dead index tuples, which in turn can generate WAL and\ntrigger the wal_buffers_full event. Thoughts?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 15 Sep 2020 17:10:34 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On 2020-09-15 17:10, Fujii Masao wrote:\n> On 2020/09/15 15:52, Masahiro Ikeda wrote:\n>> On 2020-09-11 01:40, Fujii Masao wrote:\n>>> On 2020/09/09 13:57, Masahiro Ikeda wrote:\n>>>> On 2020-09-07 16:19, Fujii Masao wrote:\n>>>>> On 2020/09/07 9:58, Masahiro Ikeda wrote:\n>>>>>> Thanks for the review and advice!\n>>>>>> \n>>>>>> On 2020-09-03 16:05, Fujii Masao wrote:\n>>>>>>> On 2020/09/02 18:56, Masahiro Ikeda wrote:\n>>>>>>>>> +/* ----------\n>>>>>>>>> + * Backend types\n>>>>>>>>> + * ----------\n>>>>>>>>> \n>>>>>>>>> You seem to forget to add \"*/\" into the above comment.\n>>>>>>>>> This issue could cause the following compiler warning.\n>>>>>>>>> ../../src/include/pgstat.h:761:1: warning: '/*' within block \n>>>>>>>>> comment [-Wcomment]\n>>>>>>>> \n>>>>>>>> Thanks for the comment. I fixed.\n>>>>>>> \n>>>>>>> Thanks for the fix! But why are those comments necessary?\n>>>>>> \n>>>>>> Sorry about that. This comment is not necessary.\n>>>>>> I removed it.\n>>>>>> \n>>>>>>>> The pg_stat_walwriter is not security restricted now, so \n>>>>>>>> ordinary users can access it.\n>>>>>>>> It has the same security level as pg_stat_archiver. If you have \n>>>>>>>> any comments, please let me know.\n>>>>>>> \n>>>>>>> + <structfield>dirty_writes</structfield> \n>>>>>>> <type>bigint</type>\n>>>>>>> \n>>>>>>> I guess that the column name \"dirty_writes\" derived from\n>>>>>>> the DTrace probe name. Isn't this name confusing? 
We should\n>>>>>>> rename it to \"wal_buffers_full\" or something?\n>>>>>> \n>>>>>> I agree and rename it to \"wal_buffers_full\".\n>>>>>> \n>>>>>>> +/* ----------\n>>>>>>> + * PgStat_MsgWalWriter Sent by the walwriter to \n>>>>>>> update statistics.\n>>>>>>> \n>>>>>>> This comment seems not accurate because backends also send it.\n>>>>>>> \n>>>>>>> +/*\n>>>>>>> + * WAL writes statistics counter is updated in XLogWrite \n>>>>>>> function\n>>>>>>> + */\n>>>>>>> +extern PgStat_MsgWalWriter WalWriterStats;\n>>>>>>> \n>>>>>>> This comment seems not right because the counter is not updated \n>>>>>>> in XLogWrite().\n>>>>>> \n>>>>>> Right. I fixed it to \"Sent by each backend and background workers \n>>>>>> to update WAL statistics.\"\n>>>>>> In the future, other statistics will be included so I remove the \n>>>>>> function's name.\n>>>>>> \n>>>>>> \n>>>>>>> +-- There will surely and maximum one record\n>>>>>>> +select count(*) = 1 as ok from pg_stat_walwriter;\n>>>>>>> \n>>>>>>> What about changing this comment to \"There must be only one \n>>>>>>> record\"?\n>>>>>> \n>>>>>> Thanks, I fixed.\n>>>>>> \n>>>>>>> + WalWriterStats.m_xlog_dirty_writes++;\n>>>>>>> LWLockRelease(WALWriteLock);\n>>>>>>> \n>>>>>>> Since WalWriterStats.m_xlog_dirty_writes doesn't need to be \n>>>>>>> protected\n>>>>>>> with WALWriteLock, isn't it better to increment that after \n>>>>>>> releasing the lock?\n>>>>>> \n>>>>>> Thanks, I fixed.\n>>>>>> \n>>>>>>> +CREATE VIEW pg_stat_walwriter AS\n>>>>>>> + SELECT\n>>>>>>> + pg_stat_get_xlog_dirty_writes() AS dirty_writes,\n>>>>>>> + pg_stat_get_walwriter_stat_reset_time() AS stats_reset;\n>>>>>>> +\n>>>>>>> CREATE VIEW pg_stat_progress_vacuum AS\n>>>>>>> \n>>>>>>> In system_views.sql, the definition of pg_stat_walwriter should \n>>>>>>> be\n>>>>>>> placed just after that of pg_stat_bgwriter not \n>>>>>>> pg_stat_progress_analyze.\n>>>>>> \n>>>>>> OK, I fixed it.\n>>>>>> \n>>>>>>> }\n>>>>>>> -\n>>>>>>> /*\n>>>>>>> * We found an existing collector stats 
file. Read it and \n>>>>>>> put all the\n>>>>>>> \n>>>>>>> You seem to accidentally have removed the empty line here.\n>>>>>> \n>>>>>> Sorry about that. I fixed it.\n>>>>>> \n>>>>>>> - errhint(\"Target must be \\\"archiver\\\" or \n>>>>>>> \\\"bgwriter\\\".\")));\n>>>>>>> + errhint(\"Target must be \\\"archiver\\\" or \n>>>>>>> \\\"bgwriter\\\" or\n>>>>>>> \\\"walwriter\\\".\")));\n>>>>>>> \n>>>>>>> There are two \"or\" in the message, but the former should be \n>>>>>>> replaced with \",\"?\n>>>>>> \n>>>>>> Thanks, I fixed.\n>>>>>> \n>>>>>> \n>>>>>> On 2020-09-05 18:40, Magnus Hagander wrote:\n>>>>>>> On Fri, Sep 4, 2020 at 5:42 AM Fujii Masao\n>>>>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>> \n>>>>>>>> On 2020/09/04 11:50, tsunakawa.takay@fujitsu.com wrote:\n>>>>>>>>> From: Fujii Masao <masao.fujii@oss.nttdata.com>\n>>>>>>>>>>> I changed the view name from pg_stat_walwrites to\n>>>>>>>> pg_stat_walwriter.\n>>>>>>>>>>> I think it is better to match naming scheme with other views\n>>>>>>>> like\n>>>>>>>>>> pg_stat_bgwriter,\n>>>>>>>>>>> which is for bgwriter statistics but it has the statistics\n>>>>>>>> related to backend.\n>>>>>>>>>> \n>>>>>>>>>> I prefer the view name pg_stat_walwriter for the consistency \n>>>>>>>>>> with\n>>>>>>>>>> other view names. But we also have pg_stat_wal_receiver. Which\n>>>>>>>>>> makes me think that maybe pg_stat_wal_writer is better for\n>>>>>>>>>> the consistency. Thought? IMO either of them works for me.\n>>>>>>>>>> I'd like to hear more opinons about this.\n>>>>>>>>> \n>>>>>>>>> I think pg_stat_bgwriter is now a misnomer, because it contains\n>>>>>>>> the backends' activity. Likewise, pg_stat_walwriter leads to\n>>>>>>>> misunderstanding because its information is not limited to WAL\n>>>>>>>> writer.\n>>>>>>>>> \n>>>>>>>>> How about simply pg_stat_wal? In the future, we may want to\n>>>>>>>> include WAL reads in this view, e.g. 
reading undo logs in zheap.\n>>>>>>>> \n>>>>>>>> Sounds reasonable.\n>>>>>>> \n>>>>>>> +1.\n>>>>>>> \n>>>>>>> pg_stat_bgwriter has had the \"wrong name\" for quite some time now \n>>>>>>> --\n>>>>>>> it became even more apparent when the checkpointer was split out \n>>>>>>> to\n>>>>>>> it's own process, and that's not exactly a recent change. And it \n>>>>>>> had\n>>>>>>> allocs in it from day one...\n>>>>>>> \n>>>>>>> I think naming it for what the data in it is (\"wal\") rather than \n>>>>>>> which\n>>>>>>> process deals with it (\"walwriter\") is correct, unless the \n>>>>>>> statistics\n>>>>>>> can be known to only *ever* affect one type of process. (And then\n>>>>>>> different processes can affect different columns in the view). As \n>>>>>>> a\n>>>>>>> general rule -- and that's from what I can tell exactly what's \n>>>>>>> being\n>>>>>>> proposed.\n>>>>>> \n>>>>>> Thanks for your comments. I agree with your opinions.\n>>>>>> I changed the view name to \"pg_stat_wal\".\n>>>>>> \n>>>>>> \n>>>>>> I fixed the code to send the WAL statistics from not only backend \n>>>>>> and walwriter\n>>>>>> but also checkpointer, walsender and autovacuum worker.\n>>>>> \n>>>>> Good point! Thanks for updating the patch!\n>>>>> \n>>>>> \n>>>>> @@ -604,6 +604,7 @@ heap_vacuum_rel(Relation onerel, VacuumParams \n>>>>> *params,\n>>>>> onerel->rd_rel->relisshared,\n>>>>> Max(new_live_tuples, 0),\n>>>>> vacrelstats->new_dead_tuples);\n>>>>> + pgstat_send_wal();\n>>>>> \n>>>>> I guess that you changed heap_vacuum_rel() as above so that \n>>>>> autovacuum\n>>>>> workers can send WAL stats. But heap_vacuum_rel() can be called by\n>>>>> the processes (e.g., backends) other than autovacuum workers? Also\n>>>>> what happens if autovacuum workers just do ANALYZE only? In that \n>>>>> case,\n>>>>> heap_vacuum_rel() may not be called.\n>>>>> \n>>>>> Currently autovacuum worker reports the stats at the exit via\n>>>>> pgstat_beshutdown_hook(). 
Unlike other processes, autovacuum worker\n>>>>> is not the process that basically keeps running during the service. \n>>>>> It exits\n>>>>> after it does vacuum or analyze. So ISTM that it's not bad to \n>>>>> report the stats\n>>>>> only at the exit, in autovacuum worker case. There is no need to \n>>>>> add extra\n>>>>> code for WAL stats report by autovacuum worker. Thought?\n>>>> \n>>>> Thanks, I understood. I removed this code.\n>>>> \n>>>>> \n>>>>> @@ -1430,6 +1430,9 @@ WalSndWaitForWal(XLogRecPtr loc)\n>>>>> else\n>>>>> RecentFlushPtr = GetXLogReplayRecPtr(NULL);\n>>>>> + /* Send wal statistics */\n>>>>> + pgstat_send_wal();\n>>>>> \n>>>>> AFAIR logical walsender uses three loops in WalSndLoop(), \n>>>>> WalSndWriteData()\n>>>>> and WalSndWaitForWal(). But could you tell me why added \n>>>>> pgstat_send_wal()\n>>>>> into WalSndWaitForWal()? I'd like to know why WalSndWaitForWal() is \n>>>>> the best\n>>>>> for that purpose.\n>>>> \n>>>> I checked what function calls XLogBackgroundFlush() which calls\n>>>> AdvanceXLInsertBuffer() to increment m_wal_buffers_full.\n>>>> \n>>>> I found that WalSndWaitForWal() calls it, so I added it.\n>>> \n>>> Ok. But XLogBackgroundFlush() calls AdvanceXLInsertBuffer() wit the\n>>> second argument opportunistic=true, so in this case WAL write by\n>>> wal_buffers full seems to never happen. Right? If this understanding\n>>> is right, WalSndWaitForWal() doesn't need to call pgstat_send_wal().\n>>> Probably also walwriter doesn't need to do that.\n> \n> Thanks for updating the patch! This patch adds pgstat_send_wal() in\n> walwriter main loop. But isn't this unnecessary because of the above \n> reason?\n> That is, since walwriter calls AdvanceXLInsertBuffer() with\n> the second argument \"opportunistic\" = true via XLogBackgroundFlush(),\n> the event of full wal_buffers will never happen. 
No?\n\nRight, I fixed it.\n\n>>> \n>>> The logical rep walsender can generate WAL and call\n>>> AdvanceXLInsertBuffer() when it executes the replication commands \n>>> like\n>>> CREATE_REPLICATION_SLOT. But this case is already covered by\n>>> pgstat_report_activity()->pgstat_send_wal() called in PostgresMain(),\n>>> with your patch. So no more calls to pgstat_send_wal() seems \n>>> necessary\n>>> for logical rep walsender.\n>> \n>> Thanks for your reviews. I didn't notice that.\n>> I updated the patches.\n> \n> Sorry, the above my analysis might be incorrect. During logical \n> replication,\n> walsender may access to the system table. Which may cause HOT pruning\n> or killing of dead index tuple. Also which can cause WAL and\n> full wal_buffers event. Thought?\n\nThanks. I confirmed that it causes HOT pruning or killing of\ndead index tuple if DecodeCommit() is called.\n\nAs you said, DecodeCommit() may access the system table.\n\nWalSndLoop()\n -> XLogSendLogical()\n -> LogicalDecodingProcessRecord()\n -> DecodeXactOp()\n -> DecodeCommit()\n -> ReorderBufferCommit()\n -> ReorderBufferProcessTXN()\n -> RelidByRelfilenode()\n -> systable_getnext()\n\nThe wals are generated only when logical replication is performed.\nSo, I added pgstat_send_wal() in XLogSendLogical().\n\nBut, I concerned that it causes poor performance\nsince pgstat_send_wal() is called per wal record,\n\nIs it necessary to introduce a mechanism to send in bulk?\nBut I worried about how to implement is best. 
Is it good to send wal \nstatistics per X recoreds?\n\n\nI think there are other background processes that access the system \ntables,\nso I organized which process must send wal metrics and added \npgstat_send_wal() to the main loop of some background processes\nfor example, autovacuum launcher, logical replication launcher, and \nlogical replication worker's one.\n\n(*) [x]: it needs to send it\n [ ]: it don't need to send it\n\n* [ ] postmaster\n* [ ] background writer\n* [x] checkpointer: it generates wal for checkpoint.\n* [ ] walwriter\n* [x] autovacuum launcher: it accesses to the system tables to get the \ndatabase list.\n* [x] autovacuum worker: it generates wal for vacuum.\n* [ ] stats collector\n* [x] backend: it generate wal for query execution.\n* [ ] startup\n* [ ] archiver\n* [x] walsender: it accesses to the system tables if logical replication \nis performed.\n* [ ] walreceiver\n* [x] logical replication launcher: it accesses to the system tables to \nget the subscription list.\n* [x] logical replication worker: it accesses to the system tables to \nget oid from relname.\n* [x] parallel worker: it generates wal for query execution.\n\nIf my understanding is wrong, please let me know.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Fri, 18 Sep 2020 09:40:11 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "At Fri, 18 Sep 2020 09:40:11 +0900, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote in \n> Thanks. I confirmed that it causes HOT pruning or killing of\n> dead index tuple if DecodeCommit() is called.\n> \n> As you said, DecodeCommit() may access the system table.\n...\n> The wals are generated only when logical replication is performed.\n> So, I added pgstat_send_wal() in XLogSendLogical().\n> \n> But, I concerned that it causes poor performance\n> since pgstat_send_wal() is called per wal record,\n\nI think that's too frequent. If we want to send any stats to the\ncollector, it is usually done at commit time using\npgstat_report_stat(), and the function avoids sending stats too\nfrequently. For logrep-worker, apply_handle_commit() is calling it. It\nseems to be the place if we want to send the wal stats. Or it may be\nbetter to call pgstat_send_wal() via pgstat_report_stat(), like\npg_stat_slru().\n\nCurrently logrep-launcher, logrep-worker and autovac-launcher (and some\nother processes?) don't seem (AFAICS) to send scan stats at all, but\naccording to the discussion here, we should let such processes send\nstats.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 18 Sep 2020 11:11:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On 2020-09-18 11:11, Kyotaro Horiguchi wrote:\n> At Fri, 18 Sep 2020 09:40:11 +0900, Masahiro Ikeda\n> <ikedamsh@oss.nttdata.com> wrote in\n>> Thanks. I confirmed that it causes HOT pruning or killing of\n>> dead index tuple if DecodeCommit() is called.\n>> \n>> As you said, DecodeCommit() may access the system table.\n> ...\n>> The wals are generated only when logical replication is performed.\n>> So, I added pgstat_send_wal() in XLogSendLogical().\n>> \n>> But, I concerned that it causes poor performance\n>> since pgstat_send_wal() is called per wal record,\n> \n> I think that's too frequent. If we want to send any stats to the\n> collector, it is usually done at commit time using\n> pgstat_report_stat(), and the function avoids sending stats too\n> frequently. For logrep-worker, apply_handle_commit() is calling it. It\n> seems to be the place if we want to send the wal stats. Or it may be\n> better to call pgstat_send_wal() via pgstat_report_stat(), like\n> pg_stat_slru().\n\nThanks for your comments.\nSince I changed to use pgstat_report_stat() and DecodeCommit() is \ncalling it,\nthe frequency to send statistics is not so high.\n\n> Currently logrep-laucher, logrep-worker and autovac-launcher (and some\n> other processes?) don't seem (AFAICS) sending scan stats at all but\n> according to the discussion here, we should let such processes send\n> stats.\n\nI added pgstat_report_stat() to logrep-laucher and autovac-launcher.\nAs you said, logrep-worker already calls apply_handle_commit() and \npgstat_report_stat().\n\nThe checkpointer doesn't seem to call pgstat_report_stat() currently,\nbut since there is a possibility to send wal statistics, I added \npgstat_report_stat().\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Fri, 25 Sep 2020 12:06:13 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/09/25 12:06, Masahiro Ikeda wrote:\n> On 2020-09-18 11:11, Kyotaro Horiguchi wrote:\n>> At Fri, 18 Sep 2020 09:40:11 +0900, Masahiro Ikeda\n>> <ikedamsh@oss.nttdata.com> wrote in\n>>> Thanks. I confirmed that it causes HOT pruning or killing of\n>>> dead index tuple if DecodeCommit() is called.\n>>>\n>>> As you said, DecodeCommit() may access the system table.\n>> ...\n>>> The wals are generated only when logical replication is performed.\n>>> So, I added pgstat_send_wal() in XLogSendLogical().\n>>>\n>>> But, I concerned that it causes poor performance\n>>> since pgstat_send_wal() is called per wal record,\n>>\n>> I think that's too frequent. If we want to send any stats to the\n>> collector, it is usually done at commit time using\n>> pgstat_report_stat(), and the function avoids sending stats too\n>> frequently. For logrep-worker, apply_handle_commit() is calling it. It\n>> seems to be the place if we want to send the wal stats. Or it may be\n>> better to call pgstat_send_wal() via pgstat_report_stat(), like\n>> pg_stat_slru().\n> \n> Thanks for your comments.\n> Since I changed to use pgstat_report_stat() and DecodeCommit() is calling it,\n> the frequency to send statistics is not so high.\n\nOn second thought, it's strange to include this change in pg_stat_wal patch.\nBecause pgstat_report_stat() sends various stats and that change would\naffect not only pg_stat_wal but also other stats views. That is, if we really\nwant to make some processes call pgstat_report_stat() newly, which\nshould be implemented as a separate patch. But I'm not sure how useful\nthis change is because probably the stats are almost negligibly small\nin those processes.\n\nThis thought seems valid for pgstat_send_wal(). I changed the thought\nand am inclined to be ok not to call pgstat_send_wal() in some background\nprocesses that are very unlikely to generate WAL. For example, logical-rep\nlauncher, logical-rep walsender, and autovacuum launcher. 
Thought?\n\n\n> \n>> Currently logrep-laucher, logrep-worker and autovac-launcher (and some\n>> other processes?) don't seem (AFAICS) sending scan stats at all but\n>> according to the discussion here, we should let such processes send\n>> stats.\n> \n> I added pgstat_report_stat() to logrep-laucher and autovac-launcher.\n> As you said, logrep-worker already calls apply_handle_commit() and pgstat_report_stat().\n\nRight.\n\n\n> The checkpointer doesn't seem to call pgstat_report_stat() currently,\n> but since there is a possibility to send wal statistics, I added pgstat_report_stat().\n\nIMO it's better to call pgstat_send_wal() in the checkpointer, instead,\nbecause of the above reason.\n\nThanks for updating the patch! I'd like to share my review comments.\n\n+ <xref linkend=\"monitoring-pg-stat-wal-view\"/> for details.\n\nLike the description for pg_stat_bgwriter, <link> tag should be used\ninstead of <xref>.\n\n\n+ <para>\n+ Number of WAL writes when the <xref linkend=\"guc-wal-buffers\"/> are full\n+ </para></entry>\n\nI prefer the following description. Thought?\n\n\"Number of times WAL data was written to the disk because wal_buffers got full\"\n\n\n+ the <structname>pg_stat_archiver</structname> view ,or <literal>wal</literal>\n\nA comma should be just after \"view\" (not just before \"or\").\n\n\n+/*\n+ * WAL global statistics counter.\n+ * This counter is incremented by both each backend and background.\n+ * And then, sent to the stat collector process.\n+ */\n+PgStat_MsgWal WalStats;\n\nWhat about merging the comments for BgWriterStats and WalStats into one because they are almost the same? For example,\n\n-------------------------------\n/*\n * BgWriter and WAL global statistics counters.\n * Stored directly in a stats message structure so they can be sent\n * without needing to copy things around. 
We assume these init to zeroes.\n */\nPgStat_MsgBgWriter BgWriterStats;\nPgStat_MsgWal WalStats;\n-------------------------------\n\nBTW, originally there was the comment \"(unused in other processes)\"\nfor BgWriterStats. But it seems not true, so I removed it from\nthe above example.\n\n\n+\trc = fwrite(&walStats, sizeof(walStats), 1, fpout);\n+\t(void) rc;\t\t\t\t\t/* we'll check for error with ferror */\n\nSince the patch changes the pgstat file format,\nPGSTAT_FILE_FORMAT_ID should also be changed?\n\n\n-\t * Clear out global and archiver statistics so they start from zero in\n+\t * Clear out global, archiver and wal statistics so they start from zero in\n\nThis is not the issue of this patch, but isn't it better to mention\nalso SLRU stats here? That is, what about \"Clear out global, archiver,\nWAL and SLRU statistics so they start from zero in\"?\n\n\nI found \"wal statistics\" and \"wal stats\" in some comments in the patch,\nbut isn't it better to use \"WAL statistics\" and \"WAL stats\", instead,\nif there is no special reason to use lowercase?\n\n\n+\t/*\n+\t * Read wal stats struct\n+\t */\n+\tif (fread(&walStats, 1, sizeof(walStats), fpin) != sizeof(walStats))\n\nIn pgstat_read_db_statsfile_timestamp(), the local variable myWalStats\nshould be declared and be used to store the WAL stats read via fread(),\ninstead.\n\n\n+{ oid => '1136', descr => 'statistics: number of WAL writes when the wal buffers are full',\n\nIf we change the description of wal_buffers_full column in the document\nas I proposed, we should also use the proposed description here.\n\n\n+{ oid => '1137', descr => 'statistics: last reset for the walwriter',\n\n\"the walwriter\" should be \"WAL\" or \"WAL activity\", etc?\n\n\n+ * PgStat_MsgWal\t\t\tSent by each backend and background workers to update WAL statistics.\n\nIf your intention here is to mention background processes like checkpointer,\n\"each backend and background workers\" should be \"backends and 
background\nprocesses\"?\n\n\n+\tPgStat_Counter m_wal_buffers_full;\t/* number of WAL write caused by full of WAL buffers */\n\nI don't think this comment is necessary.\n\n\n+\tPgStat_Counter wal_buffers_full;\t/* number of WAL write caused by full of WAL buffers */\n+\tTimestampTz stat_reset_timestamp;\t/* last time when the stats reset */\n\nI don't think these comments are necessary.\n\n\n+/*\n+ * WAL writes statistics counter is updated by backend and background workers\n\nSame as above.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 26 Sep 2020 02:36:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On Fri, Sep 25, 2020 at 11:06 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/09/25 12:06, Masahiro Ikeda wrote:\n> > On 2020-09-18 11:11, Kyotaro Horiguchi wrote:\n> >> At Fri, 18 Sep 2020 09:40:11 +0900, Masahiro Ikeda\n> >> <ikedamsh@oss.nttdata.com> wrote in\n> >>> Thanks. I confirmed that it causes HOT pruning or killing of\n> >>> dead index tuple if DecodeCommit() is called.\n> >>>\n> >>> As you said, DecodeCommit() may access the system table.\n> >> ...\n> >>> The wals are generated only when logical replication is performed.\n> >>> So, I added pgstat_send_wal() in XLogSendLogical().\n> >>>\n> >>> But, I concerned that it causes poor performance\n> >>> since pgstat_send_wal() is called per wal record,\n> >>\n> >> I think that's too frequent. If we want to send any stats to the\n> >> collector, it is usually done at commit time using\n> >> pgstat_report_stat(), and the function avoids sending stats too\n> >> frequently. For logrep-worker, apply_handle_commit() is calling it. It\n> >> seems to be the place if we want to send the wal stats. Or it may be\n> >> better to call pgstat_send_wal() via pgstat_report_stat(), like\n> >> pg_stat_slru().\n> >\n> > Thanks for your comments.\n> > Since I changed to use pgstat_report_stat() and DecodeCommit() is calling it,\n> > the frequency to send statistics is not so high.\n>\n> On second thought, it's strange to include this change in pg_stat_wal patch.\n> Because pgstat_report_stat() sends various stats and that change would\n> affect not only pg_stat_wal but also other stats views. That is, if we really\n> want to make some processes call pgstat_report_stat() newly, which\n> should be implemented as a separate patch. But I'm not sure how useful\n> this change is because probably the stats are almost negligibly small\n> in those processes.\n>\n> This thought seems valid for pgstat_send_wal(). 
I changed the thought\n> and am inclined to be ok not to call pgstat_send_wal() in some background\n> processes that are very unlikely to generate WAL.\n>\n\nThis makes sense to me. I think even if such background processes have\nto write WAL due to wal_buffers, it will be accounted for the next time the\nbackend sends the stats.\n\nOne minor point: don't we need to reset the counter\nWalStats.m_wal_buffers_full once we have sent the stats? Otherwise the same\nstats will be accounted multiple times.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 26 Sep 2020 15:48:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "At Sat, 26 Sep 2020 15:48:49 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Fri, Sep 25, 2020 at 11:06 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n> >\n> > On 2020/09/25 12:06, Masahiro Ikeda wrote:\n> > > On 2020-09-18 11:11, Kyotaro Horiguchi wrote:\n> > >> At Fri, 18 Sep 2020 09:40:11 +0900, Masahiro Ikeda\n> > >> <ikedamsh@oss.nttdata.com> wrote in\n> > >>> Thanks. I confirmed that it causes HOT pruning or killing of\n> > >>> dead index tuple if DecodeCommit() is called.\n> > >>>\n> > >>> As you said, DecodeCommit() may access the system table.\n> > >> ...\n> > >>> The wals are generated only when logical replication is performed.\n> > >>> So, I added pgstat_send_wal() in XLogSendLogical().\n> > >>>\n> > >>> But, I concerned that it causes poor performance\n> > >>> since pgstat_send_wal() is called per wal record,\n> > >>\n> > >> I think that's too frequent. If we want to send any stats to the\n> > >> collector, it is usually done at commit time using\n> > >> pgstat_report_stat(), and the function avoids sending stats too\n> > >> frequently. For logrep-worker, apply_handle_commit() is calling it. It\n> > >> seems to be the place if we want to send the wal stats. Or it may be\n> > >> better to call pgstat_send_wal() via pgstat_report_stat(), like\n> > >> pg_stat_slru().\n> > >\n> > > Thanks for your comments.\n> > > Since I changed to use pgstat_report_stat() and DecodeCommit() is calling it,\n> > > the frequency to send statistics is not so high.\n> >\n> > On second thought, it's strange to include this change in pg_stat_wal patch.\n> > Because pgstat_report_stat() sends various stats and that change would\n> > affect not only pg_stat_wal but also other stats views. That is, if we really\n> > want to make some processes call pgstat_report_stat() newly, which\n> > should be implemented as a separate patch. But I'm not sure how useful\n> > this change is because probably the stats are almost negligibly small\n> > in those processes.\n> >\n> > This thought seems valid for pgstat_send_wal(). I changed the thought\n> > and am inclined to be ok not to call pgstat_send_wal() in some background\n> > processes that are very unlikely to generate WAL.\n> >\n> \n> This makes sense to me. I think even if such background processes have\n\n+1\n\n> This makes sense to me. I think even if such background processes have\n> to write WAL due to wal_buffers, it will be accounted next time the\n> backend sends the stats.\n\nWhere do they send the stats? (I think it's ok to omit seding stats at\nall for such low-wal/heap activity processes.)\n\n> One minor point, don't we need to reset the counter\n> WalStats.m_wal_buffers_full once we sent the stats, otherwise the same\n> stats will be accounted multiple times.\n\nIsn't this doing that?\n\n+\t/*\n+\t * Clear out the statistics buffer, so it can be re-used.\n+\t */\n+\tMemSet(&WalStats, 0, sizeof(WalStats));\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 28 Sep 2020 09:51:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On 2020-09-26 19:18, Amit Kapila wrote:\n> On Fri, Sep 25, 2020 at 11:06 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> \n>> On 2020/09/25 12:06, Masahiro Ikeda wrote:\n>> > On 2020-09-18 11:11, Kyotaro Horiguchi wrote:\n>> >> At Fri, 18 Sep 2020 09:40:11 +0900, Masahiro Ikeda\n>> >> <ikedamsh@oss.nttdata.com> wrote in\n>> >>> Thanks. I confirmed that it causes HOT pruning or killing of\n>> >>> dead index tuple if DecodeCommit() is called.\n>> >>>\n>> >>> As you said, DecodeCommit() may access the system table.\n>> >> ...\n>> >>> The wals are generated only when logical replication is performed.\n>> >>> So, I added pgstat_send_wal() in XLogSendLogical().\n>> >>>\n>> >>> But, I concerned that it causes poor performance\n>> >>> since pgstat_send_wal() is called per wal record,\n>> >>\n>> >> I think that's too frequent. If we want to send any stats to the\n>> >> collector, it is usually done at commit time using\n>> >> pgstat_report_stat(), and the function avoids sending stats too\n>> >> frequently. For logrep-worker, apply_handle_commit() is calling it. It\n>> >> seems to be the place if we want to send the wal stats. Or it may be\n>> >> better to call pgstat_send_wal() via pgstat_report_stat(), like\n>> >> pg_stat_slru().\n>> >\n>> > Thanks for your comments.\n>> > Since I changed to use pgstat_report_stat() and DecodeCommit() is calling it,\n>> > the frequency to send statistics is not so high.\n>> \n>> On second thought, it's strange to include this change in pg_stat_wal \n>> patch.\n>> Because pgstat_report_stat() sends various stats and that change would\n>> affect not only pg_stat_wal but also other stats views. That is, if we \n>> really\n>> want to make some processes call pgstat_report_stat() newly, which\n>> should be implemented as a separate patch. But I'm not sure how useful\n>> this change is because probably the stats are almost negligibly small\n>> in those processes.\n>> \n>> This thought seems valid for pgstat_send_wal(). I changed the thought\n>> and am inclined to be ok not to call pgstat_send_wal() in some \n>> background\n>> processes that are very unlikely to generate WAL.\n>> \n\nOK, I removed to pgstat_report_stat() for autovaccum launcher, \nlogrep-worker and logrep-launcher.\n\n\n> This makes sense to me. I think even if such background processes have\n> to write WAL due to wal_buffers, it will be accounted next time the\n> backend sends the stats.\n\nThanks for your comments.\n\nIIUC, since each process counts WalStats.m_wal_buffers_full,\nbackend can't send the counter which other background processes have to \nwrite WAL due to wal_buffers.\nAlthough we can't track all WAL activity, the impact on the statistics \nis minimal so we can ignore it.\n\n> One minor point, don't we need to reset the counter\n> WalStats.m_wal_buffers_full once we sent the stats, otherwise the same\n> stats will be accounted multiple times.\n\nNow, the counter is reset in pgstat_send_wal.\nIsn't it enough?\n\n\n>> The checkpointer doesn't seem to call pgstat_report_stat() currently,\n>> but since there is a possibility to send wal statistics, I added \n>> pgstat_report_stat().\n> \n> IMO it's better to call pgstat_send_wal() in the checkpointer, instead,\n> because of the above reason.\n\nOk, I changed.\n\n\n> Thanks for updating the patch! I'd like to share my review comments.\n> \n> + <xref linkend=\"monitoring-pg-stat-wal-view\"/> for details.\n> \n> Like the description for pg_stat_bgwriter, <link> tag should be used\n> instead of <xref>.\n\nThanks, fixed.\n\n> + <para>\n> + Number of WAL writes when the <xref linkend=\"guc-wal-buffers\"/> \n> are full\n> + </para></entry>\n> \n> I prefer the following description. Thought?\n> \n> \"Number of times WAL data was written to the disk because wal_buffers \n> got full\"\n\nOk, I changed.\n\n> + the <structname>pg_stat_archiver</structname> view ,or\n> <literal>wal</literal>\n> \n> A comma should be just after \"view\" (not just before \"or\").\n\nSorry, anyway I think a comma is not necessary.\nI removed it.\n\n> +/*\n> + * WAL global statistics counter.\n> + * This counter is incremented by both each backend and background.\n> + * And then, sent to the stat collector process.\n> + */\n> +PgStat_MsgWal WalStats;\n> \n> What about merging the comments for BgWriterStats and WalStats into\n> one because they are almost the same? For example,\n> \n> -------------------------------\n> /*\n> * BgWriter and WAL global statistics counters.\n> * Stored directly in a stats message structure so they can be sent\n> * without needing to copy things around. We assume these init to \n> zeroes.\n> */\n> PgStat_MsgBgWriter BgWriterStats;\n> PgStat_MsgWal WalStats;\n> -------------------------------\n> \n> BTW, originally there was the comment \"(unused in other processes)\"\n> for BgWriterStats. But it seems not true, so I removed it from\n> the above example.\n\nThanks, I changed.\n\n> +\trc = fwrite(&walStats, sizeof(walStats), 1, fpout);\n> +\t(void) rc;\t\t\t\t\t/* we'll check for error with ferror */\n> \n> Since the patch changes the pgstat file format,\n> PGSTAT_FILE_FORMAT_ID should also be changed?\n\nSorry about that.\nI incremented PGSTAT_FILE_FORMAT_ID by +1.\n\n> -\t * Clear out global and archiver statistics so they start from zero \n> in\n> +\t * Clear out global, archiver and wal statistics so they start from \n> zero in\n> \n> This is not the issue of this patch, but isn't it better to mention\n> also SLRU stats here? That is, what about \"Clear out global, archiver,\n> WAL and SLRU statistics so they start from zero in\"?\n\nThanks, I changed.\n\n> I found \"wal statistics\" and \"wal stats\" in some comments in the patch,\n> but isn't it better to use \"WAL statistics\" and \"WAL stats\", instead,\n> if there is no special reason to use lowercase?\n\nOK. I fixed it.\n\n> +\t/*\n> +\t * Read wal stats struct\n> +\t */\n> +\tif (fread(&walStats, 1, sizeof(walStats), fpin) != sizeof(walStats))\n> \n> In pgstat_read_db_statsfile_timestamp(), the local variable myWalStats\n> should be declared and be used to store the WAL stats read via fread(),\n> instead.\n\nThanks, I changed it to declare myWalStats.\n\n> +{ oid => '1136', descr => 'statistics: number of WAL writes when the\n> wal buffers are full',\n> \n> If we change the description of wal_buffers_full column in the document\n> as I proposed, we should also use the proposed description here.\n\nOK, I fixed it.\n\n> +{ oid => '1137', descr => 'statistics: last reset for the walwriter',\n> \n> \"the walwriter\" should be \"WAL\" or \"WAL activity\", etc?\n\nThanks, I fixed it.\n\n> + * PgStat_MsgWal\t\t\tSent by each backend and background workers to\n> update WAL statistics.\n> \n> If your intention here is to mention background processes like \n> checkpointer,\n> \"each backend and background workers\" should be \"backends and \n> background\n> processes\"?\n\nThanks, I fixed it.\n\n> +\tPgStat_Counter m_wal_buffers_full;\t/* number of WAL write caused by\n> full of WAL buffers */\n> \n> I don't think this comment is necessary.\n\nOK, I removed.\n\n> +\tPgStat_Counter wal_buffers_full;\t/* number of WAL write caused by\n> full of WAL buffers */\n> +\tTimestampTz stat_reset_timestamp;\t/* last time when the stats reset \n> */\n> \n> I don't think these comments are necessary.\n\nOK, I removed\n\n> +/*\n> + * WAL writes statistics counter is updated by backend and background \n> workers\n> \n> Same as above.\n\nI fixed it.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Mon, 28 Sep 2020 10:30:25 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On Mon, Sep 28, 2020 at 7:00 AM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> On 2020-09-26 19:18, Amit Kapila wrote\n>\n> > This makes sense to me. I think even if such background processes have\n> > to write WAL due to wal_buffers, it will be accounted next time the\n> > backend sends the stats.\n>\n> Thanks for your comments.\n>\n> IIUC, since each process counts WalStats.m_wal_buffers_full,\n> backend can't send the counter which other background processes have to\n> write WAL due to wal_buffers.\n>\n\nRight, I misunderstood it.\n\n> Although we can't track all WAL activity, the impact on the statistics\n> is minimal so we can ignore it.\n>\n\nYeah, that is probably true.\n\n> > One minor point, don't we need to reset the counter\n> > WalStats.m_wal_buffers_full once we sent the stats, otherwise the same\n> > stats will be accounted multiple times.\n>\n> Now, the counter is reset in pgstat_send_wal.\n> Isn't it enough?\n>\n\nThat should be enough.\n\nOne other thing that occurred to me today is can't we keep this as\npart of PgStat_GlobalStats? We can use pg_stat_reset_shared('wal'); to\nreset it. It seems to me this is a cluster-wide stats and somewhat\nsimilar to some of the other stats we maintain there.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 28 Sep 2020 08:11:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "At Mon, 28 Sep 2020 08:11:23 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> One other thing that occurred to me today is can't we keep this as\n> part of PgStat_GlobalStats? We can use pg_stat_reset_shared('wal'); to\n> reset it. It seems to me this is a cluster-wide stats and somewhat\n> similar to some of the other stats we maintain there.\n\nI like that direction, but PgStat_GlobalStats is actually\nPgStat_BgWriterStats and cleard by a RESET_BGWRITER message.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 28 Sep 2020 11:54:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On Mon, Sep 28, 2020 at 8:24 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 28 Sep 2020 08:11:23 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > One other thing that occurred to me today is can't we keep this as\n> > part of PgStat_GlobalStats? We can use pg_stat_reset_shared('wal'); to\n> > reset it. It seems to me this is a cluster-wide stats and somewhat\n> > similar to some of the other stats we maintain there.\n>\n> I like that direction, but PgStat_GlobalStats is actually\n> PgStat_BgWriterStats and cleard by a RESET_BGWRITER message.\n>\n\nYeah, I think if we want to pursue this direction then we probably\nneed to have a separate message to set/reset WAL-related stuff. I\nguess we probably need to have a separate reset timestamp for WAL. I\nthink the difference would be that we can have one structure to refer\nto global_stats instead of referring to multiple structures and we\ndon't need to issue separate read/write calls but OTOH I don't see\nmany disadvantages of the current approach as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 28 Sep 2020 09:13:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On 2020-09-28 12:43, Amit Kapila wrote:\n> On Mon, Sep 28, 2020 at 8:24 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> \n>> At Mon, 28 Sep 2020 08:11:23 +0530, Amit Kapila \n>> <amit.kapila16@gmail.com> wrote in\n>> > One other thing that occurred to me today is can't we keep this as\n>> > part of PgStat_GlobalStats? We can use pg_stat_reset_shared('wal'); to\n>> > reset it. It seems to me this is a cluster-wide stats and somewhat\n>> > similar to some of the other stats we maintain there.\n>> \n>> I like that direction, but PgStat_GlobalStats is actually\n>> PgStat_BgWriterStats and cleard by a RESET_BGWRITER message.\n>> \n> \n> Yeah, I think if we want to pursue this direction then we probably\n> need to have a separate message to set/reset WAL-related stuff. I\n> guess we probably need to have a separate reset timestamp for WAL. I\n> think the difference would be that we can have one structure to refer\n> to global_stats instead of referring to multiple structures and we\n> don't need to issue separate read/write calls but OTOH I don't see\n> many disadvantages of the current approach as well.\n\nIIUC, if we keep wal stats as part of PgStat_GlobalStats,\ndon't we need to add PgStat_ArchiverStats and PgStat_SLRUStats\nto PgStat_GlobalStats too?\n\nSince this is refactoring, I think it's better to make another patch\nafter the current patch is merged.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 29 Sep 2020 11:09:10 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On Tue, Sep 29, 2020 at 7:39 AM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> On 2020-09-28 12:43, Amit Kapila wrote:\n> > On Mon, Sep 28, 2020 at 8:24 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >>\n> >> At Mon, 28 Sep 2020 08:11:23 +0530, Amit Kapila\n> >> <amit.kapila16@gmail.com> wrote in\n> >> > One other thing that occurred to me today is can't we keep this as\n> >> > part of PgStat_GlobalStats? We can use pg_stat_reset_shared('wal'); to\n> >> > reset it. It seems to me this is a cluster-wide stats and somewhat\n> >> > similar to some of the other stats we maintain there.\n> >>\n> >> I like that direction, but PgStat_GlobalStats is actually\n> >> PgStat_BgWriterStats and cleard by a RESET_BGWRITER message.\n> >>\n> >\n> > Yeah, I think if we want to pursue this direction then we probably\n> > need to have a separate message to set/reset WAL-related stuff. I\n> > guess we probably need to have a separate reset timestamp for WAL. I\n> > think the difference would be that we can have one structure to refer\n> > to global_stats instead of referring to multiple structures and we\n> > don't need to issue separate read/write calls but OTOH I don't see\n> > many disadvantages of the current approach as well.\n>\n> IIUC, if we keep wal stats as part of PgStat_GlobalStats,\n> don't we need to add PgStat_ArchiverStats and PgStat_SLRUStats\n> to PgStat_GlobalStats too?\n>\n\nI have given the idea for wal_stats because there is just one counter\nin that. I think you can just try to evaluate the merits of each\napproach and choose whichever you feel is good. This is just a\nsuggestion, if you don't like it feel free to proceed with the current\napproach.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 29 Sep 2020 08:13:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On 2020-09-29 11:43, Amit Kapila wrote:\n> On Tue, Sep 29, 2020 at 7:39 AM Masahiro Ikeda \n> <ikedamsh@oss.nttdata.com> wrote:\n>> \n>> On 2020-09-28 12:43, Amit Kapila wrote:\n>> > On Mon, Sep 28, 2020 at 8:24 AM Kyotaro Horiguchi\n>> > <horikyota.ntt@gmail.com> wrote:\n>> >>\n>> >> At Mon, 28 Sep 2020 08:11:23 +0530, Amit Kapila\n>> >> <amit.kapila16@gmail.com> wrote in\n>> >> > One other thing that occurred to me today is can't we keep this as\n>> >> > part of PgStat_GlobalStats? We can use pg_stat_reset_shared('wal'); to\n>> >> > reset it. It seems to me this is a cluster-wide stats and somewhat\n>> >> > similar to some of the other stats we maintain there.\n>> >>\n>> >> I like that direction, but PgStat_GlobalStats is actually\n>> >> PgStat_BgWriterStats and cleard by a RESET_BGWRITER message.\n>> >>\n>> >\n>> > Yeah, I think if we want to pursue this direction then we probably\n>> > need to have a separate message to set/reset WAL-related stuff. I\n>> > guess we probably need to have a separate reset timestamp for WAL. I\n>> > think the difference would be that we can have one structure to refer\n>> > to global_stats instead of referring to multiple structures and we\n>> > don't need to issue separate read/write calls but OTOH I don't see\n>> > many disadvantages of the current approach as well.\n>> \n>> IIUC, if we keep wal stats as part of PgStat_GlobalStats,\n>> don't we need to add PgStat_ArchiverStats and PgStat_SLRUStats\n>> to PgStat_GlobalStats too?\n>> \n> \n> I have given the idea for wal_stats because there is just one counter\n> in that. I think you can just try to evaluate the merits of each\n> approach and choose whichever you feel is good. This is just a\n> suggestion, if you don't like it feel free to proceed with the current\n> approach.\n\nThanks for your suggestion.\nI understood that the point is that WAL-related stats have just one \ncounter now.\n\nSince we may add some WAL-related stats like pgWalUsage.(bytes, records, \nfpi),\nI think that the current approach is good.\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 29 Sep 2020 11:51:13 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/09/29 11:51, Masahiro Ikeda wrote:\n> On 2020-09-29 11:43, Amit Kapila wrote:\n>> On Tue, Sep 29, 2020 at 7:39 AM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>>>\n>>> On 2020-09-28 12:43, Amit Kapila wrote:\n>>> > On Mon, Sep 28, 2020 at 8:24 AM Kyotaro Horiguchi\n>>> > <horikyota.ntt@gmail.com> wrote:\n>>> >>\n>>> >> At Mon, 28 Sep 2020 08:11:23 +0530, Amit Kapila\n>>> >> <amit.kapila16@gmail.com> wrote in\n>>> >> > One other thing that occurred to me today is can't we keep this as\n>>> >> > part of PgStat_GlobalStats? We can use pg_stat_reset_shared('wal'); to\n>>> >> > reset it. It seems to me this is a cluster-wide stats and somewhat\n>>> >> > similar to some of the other stats we maintain there.\n>>> >>\n>>> >> I like that direction, but PgStat_GlobalStats is actually\n>>> >> PgStat_BgWriterStats and cleard by a RESET_BGWRITER message.\n>>> >>\n>>> >\n>>> > Yeah, I think if we want to pursue this direction then we probably\n>>> > need to have a separate message to set/reset WAL-related stuff. I\n>>> > guess we probably need to have a separate reset timestamp for WAL. I\n>>> > think the difference would be that we can have one structure to refer\n>>> > to global_stats instead of referring to multiple structures and we\n>>> > don't need to issue separate read/write calls but OTOH I don't see\n>>> > many disadvantages of the current approach as well.\n>>>\n>>> IIUC, if we keep wal stats as part of PgStat_GlobalStats,\n>>> don't we need to add PgStat_ArchiverStats and PgStat_SLRUStats\n>>> to PgStat_GlobalStats too?\n>>>\n>>\n>> I have given the idea for wal_stats because there is just one counter\n>> in that. I think you can just try to evaluate the merits of each\n>> approach and choose whichever you feel is good. This is just a\n>> suggestion, if you don't like it feel free to proceed with the current\n>> approach.\n> \n> Thanks for your suggestion.\n> I understood that the point is that WAL-related stats have just one counter now.\n> \n> Since we may add some WAL-related stats like pgWalUsage.(bytes, records, fpi),\n> I think that the current approach is good.\n\n+1\n\nI marked this patch as ready for committer.\nBarring any objection, I will commit the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 30 Sep 2020 00:53:06 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On Tue, Sep 29, 2020 at 9:23 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/09/29 11:51, Masahiro Ikeda wrote:\n> > On 2020-09-29 11:43, Amit Kapila wrote:\n> >> On Tue, Sep 29, 2020 at 7:39 AM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n> >>>\n> >>> On 2020-09-28 12:43, Amit Kapila wrote:\n> >>> > On Mon, Sep 28, 2020 at 8:24 AM Kyotaro Horiguchi\n> >>> > <horikyota.ntt@gmail.com> wrote:\n> >>> >>\n> >>> >> At Mon, 28 Sep 2020 08:11:23 +0530, Amit Kapila\n> >>> >> <amit.kapila16@gmail.com> wrote in\n> >>> >> > One other thing that occurred to me today is can't we keep this as\n> >>> >> > part of PgStat_GlobalStats? We can use pg_stat_reset_shared('wal'); to\n> >>> >> > reset it. It seems to me this is a cluster-wide stats and somewhat\n> >>> >> > similar to some of the other stats we maintain there.\n> >>> >>\n> >>> >> I like that direction, but PgStat_GlobalStats is actually\n> >>> >> PgStat_BgWriterStats and cleard by a RESET_BGWRITER message.\n> >>> >>\n> >>> >\n> >>> > Yeah, I think if we want to pursue this direction then we probably\n> >>> > need to have a separate message to set/reset WAL-related stuff. I\n> >>> > guess we probably need to have a separate reset timestamp for WAL. I\n> >>> > think the difference would be that we can have one structure to refer\n> >>> > to global_stats instead of referring to multiple structures and we\n> >>> > don't need to issue separate read/write calls but OTOH I don't see\n> >>> > many disadvantages of the current approach as well.\n> >>>\n> >>> IIUC, if we keep wal stats as part of PgStat_GlobalStats,\n> >>> don't we need to add PgStat_ArchiverStats and PgStat_SLRUStats\n> >>> to PgStat_GlobalStats too?\n> >>>\n> >>\n> >> I have given the idea for wal_stats because there is just one counter\n> >> in that. I think you can just try to evaluate the merits of each\n> >> approach and choose whichever you feel is good. This is just a\n> >> suggestion, if you don't like it feel free to proceed with the current\n> >> approach.\n> >\n> > Thanks for your suggestion.\n> > I understood that the point is that WAL-related stats have just one counter now.\n> >\n> > Since we may add some WAL-related stats like pgWalUsage.(bytes, records, fpi),\n> > I think that the current approach is good.\n>\n> +1\n>\n\nOkay, it makes sense to keep it in the current form if we have a plan\nto extend this view with additional stats. However, why don't we\nexpose it with a function similar to pg_stat_get_archiver() instead of\nproviding individual functions like pg_stat_get_wal_buffers_full() and\npg_stat_get_wal_stat_reset_time?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 30 Sep 2020 16:51:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/09/30 20:21, Amit Kapila wrote:\n> On Tue, Sep 29, 2020 at 9:23 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2020/09/29 11:51, Masahiro Ikeda wrote:\n>>> On 2020-09-29 11:43, Amit Kapila wrote:\n>>>> On Tue, Sep 29, 2020 at 7:39 AM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>>>>>\n>>>>> On 2020-09-28 12:43, Amit Kapila wrote:\n>>>>>> On Mon, Sep 28, 2020 at 8:24 AM Kyotaro Horiguchi\n>>>>>> <horikyota.ntt@gmail.com> wrote:\n>>>>>>>\n>>>>>>> At Mon, 28 Sep 2020 08:11:23 +0530, Amit Kapila\n>>>>>>> <amit.kapila16@gmail.com> wrote in\n>>>>>>>> One other thing that occurred to me today is can't we keep this as\n>>>>>>>> part of PgStat_GlobalStats? We can use pg_stat_reset_shared('wal'); to\n>>>>>>>> reset it. It seems to me this is a cluster-wide stats and somewhat\n>>>>>>>> similar to some of the other stats we maintain there.\n>>>>>>>\n>>>>>>> I like that direction, but PgStat_GlobalStats is actually\n>>>>>>> PgStat_BgWriterStats and cleard by a RESET_BGWRITER message.\n>>>>>>>\n>>>>>>\n>>>>>> Yeah, I think if we want to pursue this direction then we probably\n>>>>>> need to have a separate message to set/reset WAL-related stuff. I\n>>>>>> guess we probably need to have a separate reset timestamp for WAL. I\n>>>>>> think the difference would be that we can have one structure to refer\n>>>>>> to global_stats instead of referring to multiple structures and we\n>>>>>> don't need to issue separate read/write calls but OTOH I don't see\n>>>>>> many disadvantages of the current approach as well.\n>>>>>\n>>>>> IIUC, if we keep wal stats as part of PgStat_GlobalStats,\n>>>>> don't we need to add PgStat_ArchiverStats and PgStat_SLRUStats\n>>>>> to PgStat_GlobalStats too?\n>>>>>\n>>>>\n>>>> I have given the idea for wal_stats because there is just one counter\n>>>> in that. I think you can just try to evaluate the merits of each\n>>>> approach and choose whichever you feel is good. This is just a\n>>>> suggestion, if you don't like it feel free to proceed with the current\n>>>> approach.\n>>>\n>>> Thanks for your suggestion.\n>>> I understood that the point is that WAL-related stats have just one counter now.\n>>>\n>>> Since we may add some WAL-related stats like pgWalUsage.(bytes, records, fpi),\n>>> I think that the current approach is good.\n>>\n>> +1\n>>\n\nOkay, it makes sense to keep it in the current form if we have a plan\nto extend this view with additional stats. However, why don't we\nexpose it with a function similar to pg_stat_get_archiver() instead of\nproviding individual functions like pg_stat_get_wal_buffers_full() and\npg_stat_get_wal_stat_reset_time?\n\nWe can adopt either of those approaches for pg_stat_wal. I think that\nthe former is a bit more flexible because we can collect only one of\nWAL information even when pg_stat_wal will contain many information\nin the future, by using the function. But you thought there are some\nreasons that the latter is better for pg_stat_wal?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 1 Oct 2020 09:05:19 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "At Thu, 1 Oct 2020 09:05:19 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/09/30 20:21, Amit Kapila wrote:\n> > On Tue, Sep 29, 2020 at 9:23 PM Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >> On 2020/09/29 11:51, Masahiro Ikeda wrote:\n> >>> On 2020-09-29 11:43, Amit Kapila wrote:\n> >>>> On Tue, Sep 29, 2020 at 7:39 AM Masahiro Ikeda\n> >>>> <ikedamsh@oss.nttdata.com> wrote:\n> >>> Thanks for your suggestion.\n> >>> I understood that the point is that WAL-related stats have just one\n> >>> counter now.\n> >>>\n> >>> Since we may add some WAL-related stats like pgWalUsage.(bytes,\n> >>> records, fpi),\n> >>> I think that the current approach is good.\n> >>\n> >> +1\n> >>\n> > Okay, it makes sense to keep it in the current form if we have a plan\n> > to extend this view with additional stats. However, why don't we\n> > expose it with a function similar to pg_stat_get_archiver() instead of\n> > providing individual functions like pg_stat_get_wal_buffers_full() and\n> > pg_stat_get_wal_stat_reset_time?\n> \n> We can adopt either of those approaches for pg_stat_wal. I think that\n> the former is a bit more flexible because we can collect only one of\n> WAL information even when pg_stat_wal will contain many information\n> in the future, by using the function. But you thought there are some\n> reasons that the latter is better for pg_stat_wal?\n\nFWIW I prefer to expose it by one SRF function rather than by\nsubdivided functions. One of the reasons is the less oid consumption\nand/or reduction of definitions for intrinsic functions.\n\nAnother reason is at least for me subdivided functions are not useful\nso much for on-the-fly examination on psql console. I'm often annoyed\nby realizing I can't recall the exact name of a function, say,\npg_last_wal_receive_lsn or such but function names cannot be\nauto-completed on psql console. \"select proname from pg_proc where\nproname like.. \" is one of my friends:p On the other hand \"select *\nfrom pg_stat_wal\" requires no detailed memory.\n\nHowever subdivided functions might be useful if I wanted use just one\nnumber of wal-stats in a function, I think it is not a major usage and\nwe can use a SQL query on the view instead.\n\nAnother reason that I mildly want to object to subdivided functions is\nI was annoyed that a stats view makes many individual calls to\nfunctions that internally share the same statistics entry. That\nbehavior required me to provide an entry-caching feature to my\nshared-memory statistics patch.\n\nregrds.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 01 Oct 2020 10:23:50 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> Another reason that I mildly want to object to subdivided functions is\n> I was annoyed that a stats view makes many individual calls to\n> functions that internally share the same statistics entry. That\n> behavior required me to provide an entry-caching feature to my\n> shared-memory statistics patch.\n\n+1\nThe views for troubleshooting performance problems should be as light as possible. IIRC, we saw frequently searching pg_stat_replication consume unexpectedly high CPU power, because it calls pg_stat_get_activity(null) to get all sessions and join them with the walsenders. At that time, we had hundreds of client sessions. We expected pg_stat_replication to be very lightweight because it provides information about a few walsenders.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n\n",
"msg_date": "Thu, 1 Oct 2020 01:50:56 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On Thu, Oct 1, 2020 at 6:53 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 1 Oct 2020 09:05:19 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> >\n> >\n> > On 2020/09/30 20:21, Amit Kapila wrote:\n> > > On Tue, Sep 29, 2020 at 9:23 PM Fujii Masao\n> > > <masao.fujii@oss.nttdata.com> wrote:\n> > >>\n> > >> On 2020/09/29 11:51, Masahiro Ikeda wrote:\n> > >>> On 2020-09-29 11:43, Amit Kapila wrote:\n> > >>>> On Tue, Sep 29, 2020 at 7:39 AM Masahiro Ikeda\n> > >>>> <ikedamsh@oss.nttdata.com> wrote:\n> > >>> Thanks for your suggestion.\n> > >>> I understood that the point is that WAL-related stats have just one\n> > >>> counter now.\n> > >>>\n> > >>> Since we may add some WAL-related stats like pgWalUsage.(bytes,\n> > >>> records, fpi),\n> > >>> I think that the current approach is good.\n> > >>\n> > >> +1\n> > >>\n> > > Okay, it makes sense to keep it in the current form if we have a plan\n> > > to extend this view with additional stats. However, why don't we\n> > > expose it with a function similar to pg_stat_get_archiver() instead of\n> > > providing individual functions like pg_stat_get_wal_buffers_full() and\n> > > pg_stat_get_wal_stat_reset_time?\n> >\n> > We can adopt either of those approaches for pg_stat_wal. I think that\n> > the former is a bit more flexible because we can collect only one of\n> > WAL information even when pg_stat_wal will contain many information\n> > in the future, by using the function. But you thought there are some\n> > reasons that the latter is better for pg_stat_wal?\n>\n> FWIW I prefer to expose it by one SRF function rather than by\n> subdivided functions. One of the reasons is the less oid consumption\n> and/or reduction of definitions for intrinsic functions.\n>\n> Another reason is at least for me subdivided functions are not useful\n> so much for on-the-fly examination on psql console. I'm often annoyed\n> by realizing I can't recall the exact name of a function, say,\n> pg_last_wal_receive_lsn or such but function names cannot be\n> auto-completed on psql console. \"select proname from pg_proc where\n> proname like.. \" is one of my friends:p On the other hand \"select *\n> from pg_stat_wal\" requires no detailed memory.\n>\n> However subdivided functions might be useful if I wanted use just one\n> number of wal-stats in a function, I think it is not a major usage and\n> we can use a SQL query on the view instead.\n>\n> Another reason that I mildly want to object to subdivided functions is\n> I was annoyed that a stats view makes many individual calls to\n> functions that internally share the same statistics entry. That\n> behavior required me to provide an entry-caching feature to my\n> shared-memory statistics patch.\n>\n\nAll these are good reasons to expose it via one function and I think\nthat is why most of our existing views also use one function approach.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 1 Oct 2020 08:03:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On 2020-10-01 11:33, Amit Kapila wrote:\n> On Thu, Oct 1, 2020 at 6:53 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> \n>> At Thu, 1 Oct 2020 09:05:19 +0900, Fujii Masao \n>> <masao.fujii@oss.nttdata.com> wrote in\n>> >\n>> >\n>> > On 2020/09/30 20:21, Amit Kapila wrote:\n>> > > On Tue, Sep 29, 2020 at 9:23 PM Fujii Masao\n>> > > <masao.fujii@oss.nttdata.com> wrote:\n>> > >>\n>> > >> On 2020/09/29 11:51, Masahiro Ikeda wrote:\n>> > >>> On 2020-09-29 11:43, Amit Kapila wrote:\n>> > >>>> On Tue, Sep 29, 2020 at 7:39 AM Masahiro Ikeda\n>> > >>>> <ikedamsh@oss.nttdata.com> wrote:\n>> > >>> Thanks for your suggestion.\n>> > >>> I understood that the point is that WAL-related stats have just one\n>> > >>> counter now.\n>> > >>>\n>> > >>> Since we may add some WAL-related stats like pgWalUsage.(bytes,\n>> > >>> records, fpi),\n>> > >>> I think that the current approach is good.\n>> > >>\n>> > >> +1\n>> > >>\n>> > > Okay, it makes sense to keep it in the current form if we have a plan\n>> > > to extend this view with additional stats. However, why don't we\n>> > > expose it with a function similar to pg_stat_get_archiver() instead of\n>> > > providing individual functions like pg_stat_get_wal_buffers_full() and\n>> > > pg_stat_get_wal_stat_reset_time?\n>> >\n>> > We can adopt either of those approaches for pg_stat_wal. I think that\n>> > the former is a bit more flexible because we can collect only one of\n>> > WAL information even when pg_stat_wal will contain many information\n>> > in the future, by using the function. But you thought there are some\n>> > reasons that the latter is better for pg_stat_wal?\n>> \n>> FWIW I prefer to expose it by one SRF function rather than by\n>> subdivided functions. One of the reasons is the less oid consumption\n>> and/or reduction of definitions for intrinsic functions.\n>> \n>> Another reason is at least for me subdivided functions are not useful\n>> so much for on-the-fly examination on psql console. I'm often annoyed\n>> by realizing I can't recall the exact name of a function, say,\n>> pg_last_wal_receive_lsn or such but function names cannot be\n>> auto-completed on psql console. \"select proname from pg_proc where\n>> proname like.. \" is one of my friends:p On the other hand \"select *\n>> from pg_stat_wal\" requires no detailed memory.\n>> \n>> However subdivided functions might be useful if I wanted use just one\n>> number of wal-stats in a function, I think it is not a major usage and\n>> we can use a SQL query on the view instead.\n>> \n>> Another reason that I mildly want to object to subdivided functions is\n>> I was annoyed that a stats view makes many individual calls to\n>> functions that internally share the same statistics entry. That\n>> behavior required me to provide an entry-caching feature to my\n>> shared-memory statistics patch.\n>> \n> \n> All these are good reasons to expose it via one function and I think\n> that is why most of our existing views also use one function approach.\n\nThanks for your comments.\nI didn't notice there are the above disadvantages to provide individual \nfunctions.\n\nI changed the latest patch to expose it via one function.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Thu, 01 Oct 2020 12:56:28 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/10/01 12:56, Masahiro Ikeda wrote:\n> On 2020-10-01 11:33, Amit Kapila wrote:\n>> On Thu, Oct 1, 2020 at 6:53 AM Kyotaro Horiguchi\n>> <horikyota.ntt@gmail.com> wrote:\n>>>\n>>> At Thu, 1 Oct 2020 09:05:19 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>> >\n>>> >\n>>> > On 2020/09/30 20:21, Amit Kapila wrote:\n>>> > > On Tue, Sep 29, 2020 at 9:23 PM Fujii Masao\n>>> > > <masao.fujii@oss.nttdata.com> wrote:\n>>> > >>\n>>> > >> On 2020/09/29 11:51, Masahiro Ikeda wrote:\n>>> > >>> On 2020-09-29 11:43, Amit Kapila wrote:\n>>> > >>>> On Tue, Sep 29, 2020 at 7:39 AM Masahiro Ikeda\n>>> > >>>> <ikedamsh@oss.nttdata.com> wrote:\n>>> > >>> Thanks for your suggestion.\n>>> > >>> I understood that the point is that WAL-related stats have just one\n>>> > >>> counter now.\n>>> > >>>\n>>> > >>> Since we may add some WAL-related stats like pgWalUsage.(bytes,\n>>> > >>> records, fpi),\n>>> > >>> I think that the current approach is good.\n>>> > >>\n>>> > >> +1\n>>> > >>\n>>> > > Okay, it makes sense to keep it in the current form if we have a plan\n>>> > > to extend this view with additional stats. However, why don't we\n>>> > > expose it with a function similar to pg_stat_get_archiver() instead of\n>>> > > providing individual functions like pg_stat_get_wal_buffers_full() and\n>>> > > pg_stat_get_wal_stat_reset_time?\n>>> >\n>>> > We can adopt either of those approaches for pg_stat_wal. I think that\n>>> > the former is a bit more flexible because we can collect only one of\n>>> > WAL information even when pg_stat_wal will contain many information\n>>> > in the future, by using the function. But you thought there are some\n>>> > reasons that the latter is better for pg_stat_wal?\n>>>\n>>> FWIW I prefer to expose it by one SRF function rather than by\n>>> subdivided functions. One of the reasons is the less oid consumption\n>>> and/or reduction of definitions for intrinsic functions.\n>>>\n>>> Another reason is at least for me subdivided functions are not useful\n>>> so much for on-the-fly examination on psql console. I'm often annoyed\n>>> by realizing I can't recall the exact name of a function, say,\n>>> pg_last_wal_receive_lsn or such but function names cannot be\n>>> auto-completed on psql console. \"select proname from pg_proc where\n>>> proname like.. \" is one of my friends:p On the other hand \"select *\n>>> from pg_stat_wal\" requires no detailed memory.\n>>>\n>>> However subdivided functions might be useful if I wanted use just one\n>>> number of wal-stats in a function, I think it is not a major usage and\n>>> we can use a SQL query on the view instead.\n>>>\n>>> Another reason that I mildly want to object to subdivided functions is\n>>> I was annoyed that a stats view makes many individual calls to\n>>> functions that internally share the same statistics entry. That\n>>> behavior required me to provide an entry-caching feature to my\n>>> shared-memory statistics patch.\n>>>\n>>\n>> All these are good reasons to expose it via one function and I think\n\nUnderstood. +1 to expose it as one function.\n\n\n>> that is why most of our existing views also use one function approach.\n> \n> Thanks for your comments.\n> I didn't notice there are the above disadvantages to provide individual functions.\n> \n> I changed the latest patch to expose it via one function.\n\nThanks for updating the patch! LGTM.\nBarring any other objection, I will commit it.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 1 Oct 2020 13:35:41 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/10/01 10:50, tsunakawa.takay@fujitsu.com wrote:\n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n>> Another reason that I mildly want to object to subdivided functions is\n>> I was annoyed that a stats view makes many individual calls to\n>> functions that internally share the same statistics entry. That\n>> behavior required me to provide an entry-caching feature to my\n>> shared-memory statistics patch.\n> \n> +1\n> The views for troubleshooting performance problems should be as light as possible. IIRC, we saw frequently searching pg_stat_replication consume unexpectedly high CPU power, because it calls pg_stat_get_activity(null) to get all sessions and join them with the walsenders. At that time, we had hundreds of client sessions. We expected pg_stat_replication to be very lightweight because it provides information about a few walsenders.\n\nI think that we can improve that, for example, by storing backend id\ninto WalSndCtl and making pg_stat_get_wal_senders() directly\nget the walsender's LocalPgBackendStatus with the backend id,\nrather than joining pg_stat_get_activity() and pg_stat_get_wal_senders().\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 2 Oct 2020 09:38:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "From: Fujii Masao <masao.fujii@oss.nttdata.com>\n> I think that we can improve that, for example, by storing backend id\n> into WalSndCtl and making pg_stat_get_wal_senders() directly\n> get the walsender's LocalPgBackendStatus with the backend id,\n> rather than joining pg_stat_get_activity() and pg_stat_get_wal_senders().\n\nYeah, I had something like that in mind. I think I'll take note of this as my private homework. (Of course, anyone can do it.)\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n\n",
"msg_date": "Fri, 2 Oct 2020 01:05:44 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/10/01 13:35, Fujii Masao wrote:\n> \n> \n> On 2020/10/01 12:56, Masahiro Ikeda wrote:\n>> On 2020-10-01 11:33, Amit Kapila wrote:\n>>> On Thu, Oct 1, 2020 at 6:53 AM Kyotaro Horiguchi\n>>> <horikyota.ntt@gmail.com> wrote:\n>>>>\n>>>> At Thu, 1 Oct 2020 09:05:19 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>>> >\n>>>> >\n>>>> > On 2020/09/30 20:21, Amit Kapila wrote:\n>>>> > > On Tue, Sep 29, 2020 at 9:23 PM Fujii Masao\n>>>> > > <masao.fujii@oss.nttdata.com> wrote:\n>>>> > >>\n>>>> > >> On 2020/09/29 11:51, Masahiro Ikeda wrote:\n>>>> > >>> On 2020-09-29 11:43, Amit Kapila wrote:\n>>>> > >>>> On Tue, Sep 29, 2020 at 7:39 AM Masahiro Ikeda\n>>>> > >>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>> > >>> Thanks for your suggestion.\n>>>> > >>> I understood that the point is that WAL-related stats have just one\n>>>> > >>> counter now.\n>>>> > >>>\n>>>> > >>> Since we may add some WAL-related stats like pgWalUsage.(bytes,\n>>>> > >>> records, fpi),\n>>>> > >>> I think that the current approach is good.\n>>>> > >>\n>>>> > >> +1\n>>>> > >>\n>>>> > > Okay, it makes sense to keep it in the current form if we have a plan\n>>>> > > to extend this view with additional stats. However, why don't we\n>>>> > > expose it with a function similar to pg_stat_get_archiver() instead of\n>>>> > > providing individual functions like pg_stat_get_wal_buffers_full() and\n>>>> > > pg_stat_get_wal_stat_reset_time?\n>>>> >\n>>>> > We can adopt either of those approaches for pg_stat_wal. I think that\n>>>> > the former is a bit more flexible because we can collect only one of\n>>>> > WAL information even when pg_stat_wal will contain many information\n>>>> > in the future, by using the function. But you thought there are some\n>>>> > reasons that the latter is better for pg_stat_wal?\n>>>>\n>>>> FWIW I prefer to expose it by one SRF function rather than by\n>>>> subdivided functions. One of the reasons is the less oid consumption\n>>>> and/or reduction of definitions for intrinsic functions.\n>>>>\n>>>> Another reason is at least for me subdivided functions are not useful\n>>>> so much for on-the-fly examination on psql console. I'm often annoyed\n>>>> by realizing I can't recall the exact name of a function, say,\n>>>> pg_last_wal_receive_lsn or such but function names cannot be\n>>>> auto-completed on psql console. \"select proname from pg_proc where\n>>>> proname like.. \" is one of my friends:p On the other hand \"select *\n>>>> from pg_stat_wal\" requires no detailed memory.\n>>>>\n>>>> However subdivided functions might be useful if I wanted use just one\n>>>> number of wal-stats in a function, I think it is not a major usage and\n>>>> we can use a SQL query on the view instead.\n>>>>\n>>>> Another reason that I mildly want to object to subdivided functions is\n>>>> I was annoyed that a stats view makes many individual calls to\n>>>> functions that internally share the same statistics entry. That\n>>>> behavior required me to provide an entry-caching feature to my\n>>>> shared-memory statistics patch.\n>>>>\n>>>\n>>> All these are good reasons to expose it via one function and I think\n> \n> Understood. +1 to expose it as one function.\n> \n> \n>>> that is why most of our existing views also use one function approach.\n>>\n>> Thanks for your comments.\n>> I didn't notice there are the above disadvantages to provide individual functions.\n>>\n>> I changed the latest patch to expose it via one function.\n> \n> Thanks for updating the patch! LGTM.\n> Barring any other objection, I will commit it.\n\nI updated typedefs.list and pushed the patch. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 2 Oct 2020 10:21:02 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On 2020-10-02 10:21, Fujii Masao wrote:\n> On 2020/10/01 13:35, Fujii Masao wrote:\n>> \n>> \n>> On 2020/10/01 12:56, Masahiro Ikeda wrote:\n>>> On 2020-10-01 11:33, Amit Kapila wrote:\n>>>> On Thu, Oct 1, 2020 at 6:53 AM Kyotaro Horiguchi\n>>>> <horikyota.ntt@gmail.com> wrote:\n>>>>> \n>>>>> At Thu, 1 Oct 2020 09:05:19 +0900, Fujii Masao \n>>>>> <masao.fujii@oss.nttdata.com> wrote in\n>>>>> >\n>>>>> >\n>>>>> > On 2020/09/30 20:21, Amit Kapila wrote:\n>>>>> > > On Tue, Sep 29, 2020 at 9:23 PM Fujii Masao\n>>>>> > > <masao.fujii@oss.nttdata.com> wrote:\n>>>>> > >>\n>>>>> > >> On 2020/09/29 11:51, Masahiro Ikeda wrote:\n>>>>> > >>> On 2020-09-29 11:43, Amit Kapila wrote:\n>>>>> > >>>> On Tue, Sep 29, 2020 at 7:39 AM Masahiro Ikeda\n>>>>> > >>>> <ikedamsh@oss.nttdata.com> wrote:\n>>>>> > >>> Thanks for your suggestion.\n>>>>> > >>> I understood that the point is that WAL-related stats have just one\n>>>>> > >>> counter now.\n>>>>> > >>>\n>>>>> > >>> Since we may add some WAL-related stats like pgWalUsage.(bytes,\n>>>>> > >>> records, fpi),\n>>>>> > >>> I think that the current approach is good.\n>>>>> > >>\n>>>>> > >> +1\n>>>>> > >>\n>>>>> > > Okay, it makes sense to keep it in the current form if we have a plan\n>>>>> > > to extend this view with additional stats. However, why don't we\n>>>>> > > expose it with a function similar to pg_stat_get_archiver() instead of\n>>>>> > > providing individual functions like pg_stat_get_wal_buffers_full() and\n>>>>> > > pg_stat_get_wal_stat_reset_time?\n>>>>> >\n>>>>> > We can adopt either of those approaches for pg_stat_wal. I think that\n>>>>> > the former is a bit more flexible because we can collect only one of\n>>>>> > WAL information even when pg_stat_wal will contain many information\n>>>>> > in the future, by using the function. But you thought there are some\n>>>>> > reasons that the latter is better for pg_stat_wal?\n>>>>> \n>>>>> FWIW I prefer to expose it by one SRF function rather than by\n>>>>> subdivided functions. One of the reasons is the less oid \n>>>>> consumption\n>>>>> and/or reduction of definitions for intrinsic functions.\n>>>>> \n>>>>> Another reason is at least for me subdivided functions are not \n>>>>> useful\n>>>>> so much for on-the-fly examination on psql console. I'm often \n>>>>> annoyed\n>>>>> by realizing I can't recall the exact name of a function, say,\n>>>>> pg_last_wal_receive_lsn or such but function names cannot be\n>>>>> auto-completed on psql console. \"select proname from pg_proc where\n>>>>> proname like.. \" is one of my friends:p On the other hand \"select *\n>>>>> from pg_stat_wal\" requires no detailed memory.\n>>>>> \n>>>>> However subdivided functions might be useful if I wanted use just \n>>>>> one\n>>>>> number of wal-stats in a function, I think it is not a major usage \n>>>>> and\n>>>>> we can use a SQL query on the view instead.\n>>>>> \n>>>>> Another reason that I mildly want to object to subdivided functions \n>>>>> is\n>>>>> I was annoyed that a stats view makes many individual calls to\n>>>>> functions that internally share the same statistics entry. That\n>>>>> behavior required me to provide an entry-caching feature to my\n>>>>> shared-memory statistics patch.\n>>>>> \n>>>> \n>>>> All these are good reasons to expose it via one function and I think\n>> \n>> Understood. +1 to expose it as one function.\n>> \n>> \n>>>> that is why most of our existing views also use one function \n>>>> approach.\n>>> \n>>> Thanks for your comments.\n>>> I didn't notice there are the above disadvantages to provide \n>>> individual functions.\n>>> \n>>> I changed the latest patch to expose it via one function.\n>> \n>> Thanks for updating the patch! LGTM.\n>> Barring any other objection, I will commit it.\n> \n> I updated typedefs.list and pushed the patch. Thanks!\n\nThanks to all reviewers!\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 02 Oct 2020 12:40:58 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "Hi,\n\nI think it's better to add other WAL statistics to the pg_stat_wal view.\nI'm thinking to add the following statistics. Please let me know your \nthoughts.\n\n1. Basic wal statistics\n\n* wal_records: Total number of WAL records generated\n* wal_fpi: Total number of WAL full page images generated\n* wal_bytes: Total amount of WAL bytes generated\n\nTo understand DB's performance, first, we will check the performance \ntrends for the entire database instance.\nFor example, if the number of wal_fpi becomes higher, users may tune \n\"wal_compression\", \"checkpoint_timeout\" and so on.\n\nAlthough users can check the above statistics via EXPLAIN, auto_explain, \nautovacuum\nand pg_stat_statements now, if users want to see the performance trends \nfor the entire database,\nthey must preprocess the statistics.\n\nIs it useful to add the sum of the above statistics to the pg_stat_wal \nview?\n\n\n2. Number of when new WAL file is created and zero-filled.\n\nAs Fujii-san already commented, I think it's good for tuning.\n\n> Just idea; it may be worth exposing the number of when new WAL file is \n> created and zero-filled. This initialization may have impact on the \n> performance of write-heavy workload generating lots of WAL. If this \n> number is reported high, to reduce the number of this initialization, \n> we can tune WAL-related parameters so that more \"recycled\" WAL files \n> can be hold.\n\n\n3. Number of when to switch the WAL logfile segment.\n\nThis is similar to 2, but this counts the number of when WAL file is \nrecylcled too.\nI think it's useful for tuning \"wal_segment_size\"\nif the number is high relative to the startup time, \"wal_segment_size\" \nmust be bigger.\n\n\n4. Number of when WAL is flushed\n\nI think it's useful for tuning \"synchronous_commit\" and \"commit_delay\" \nfor query executions.\nIf the number of WAL is flushed is high, users can know \n\"synchronous_commit\" is useful for the workload.\n\nAlso, it's useful for tuning \"wal_writer_delay\" and \n\"wal_writer_flush_after\" for wal writer.\nIf the number is high, users can change the parameter for performance.\n\nI think it's better to separate this for backends and wal writer.\n\n\n5. Wait time when WAL is flushed.\n\nThis is the accumulated time when wal is flushed.\nIf the time becomes much higher, users can detect the possibility of \ndisk failure.\n\nSince users can see how much flash time occupies of the query execution \ntime,\nit may lead to query tuning and so on.\n\nSince there is the above reason, I think it's better to separate this \nfor backends and wal writer.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 06 Oct 2020 15:57:15 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On 2020-10-06 15:57, Masahiro Ikeda wrote:\n> Hi,\n> \n> I think it's better to add other WAL statistics to the pg_stat_wal \n> view.\n> I'm thinking to add the following statistics. Please let me know your \n> thoughts.\n> \n> 1. Basic wal statistics\n> \n> * wal_records: Total number of WAL records generated\n> * wal_fpi: Total number of WAL full page images generated\n> * wal_bytes: Total amount of WAL bytes generated\n> \n> To understand DB's performance, first, we will check the performance\n> trends for the entire database instance.\n> For example, if the number of wal_fpi becomes higher, users may tune\n> \"wal_compression\", \"checkpoint_timeout\" and so on.\n> \n> Although users can check the above statistics via EXPLAIN,\n> auto_explain, autovacuum\n> and pg_stat_statements now, if users want to see the performance\n> trends for the entire database,\n> they must preprocess the statistics.\n> \n> Is it useful to add the sum of the above statistics to the pg_stat_wal \n> view?\n> \n> \n> 2. Number of when new WAL file is created and zero-filled.\n> \n> As Fujii-san already commented, I think it's good for tuning.\n> \n>> Just idea; it may be worth exposing the number of when new WAL file is \n>> created and zero-filled. This initialization may have impact on the \n>> performance of write-heavy workload generating lots of WAL. If this \n>> number is reported high, to reduce the number of this initialization, \n>> we can tune WAL-related parameters so that more \"recycled\" WAL files \n>> can be hold.\n> \n> \n> 3. Number of when to switch the WAL logfile segment.\n> \n> This is similar to 2, but this counts the number of when WAL file is\n> recylcled too.\n> I think it's useful for tuning \"wal_segment_size\"\n> if the number is high relative to the startup time, \"wal_segment_size\"\n> must be bigger.\n> \n> \n> 4. Number of when WAL is flushed\n> \n> I think it's useful for tuning \"synchronous_commit\" and \"commit_delay\"\n> for query executions.\n> If the number of WAL is flushed is high, users can know\n> \"synchronous_commit\" is useful for the workload.\n> \n> Also, it's useful for tuning \"wal_writer_delay\" and\n> \"wal_writer_flush_after\" for wal writer.\n> If the number is high, users can change the parameter for performance.\n> \n> I think it's better to separate this for backends and wal writer.\n> \n> \n> 5. Wait time when WAL is flushed.\n> \n> This is the accumulated time when wal is flushed.\n> If the time becomes much higher, users can detect the possibility of\n> disk failure.\n> \n> Since users can see how much flash time occupies of the query execution \n> time,\n> it may lead to query tuning and so on.\n> \n> Since there is the above reason, I think it's better to separate this\n> for backends and wal writer.\n\nI made a patch for collecting the above statistics.\nIf you have any comments, please let me know.\n\nI think it's better to separate some statistics for backend and \nbackgrounds because\ntuning target parameters like \"synchronous_commit\", \"wal_writer_delay\" \nand so on are different.\nBut first, I want to get a consensus to collect them.\n\nBest regards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Tue, 13 Oct 2020 11:57:48 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "\n\nOn 2020/10/13 11:57, Masahiro Ikeda wrote:\n> On 2020-10-06 15:57, Masahiro Ikeda wrote:\n>> Hi,\n>>\n>> I think it's better to add other WAL statistics to the pg_stat_wal view.\n>> I'm thinking to add the following statistics. Please let me know your thoughts.\n>>\n>> 1. Basic wal statistics\n>>\n>> * wal_records: Total number of WAL records generated\n>> * wal_fpi: Total number of WAL full page images generated\n>> * wal_bytes: Total amount of WAL bytes generated\n\n+1\n\n>>\n>> To understand DB's performance, first, we will check the performance\n>> trends for the entire database instance.\n>> For example, if the number of wal_fpi becomes higher, users may tune\n>> \"wal_compression\", \"checkpoint_timeout\" and so on.\n>>\n>> Although users can check the above statistics via EXPLAIN,\n>> auto_explain, autovacuum\n>> and pg_stat_statements now, if users want to see the performance\n>> trends for the entire database,\n>> they must preprocess the statistics.\n>>\n>> Is it useful to add the sum of the above statistics to the pg_stat_wal view?\n>>\n>>\n>> 2. Number of when new WAL file is created and zero-filled.\n>>\n>> As Fujii-san already commented, I think it's good for tuning.\n>>\n>>> Just idea; it may be worth exposing the number of when new WAL file is created and zero-filled. This initialization may have impact on the performance of write-heavy workload generating lots of WAL. If this number is reported high, to reduce the number of this initialization, we can tune WAL-related parameters so that more \"recycled\" WAL files can be hold.\n\n+1\n\nBut it might be better to track the number of when new WAL file is\ncreated whether it's zero-filled or not, if file creation and sync itself\ntakes time.\n\n>>\n>>\n>> 3. Number of when to switch the WAL logfile segment.\n>>\n>> This is similar to 2, but this counts the number of when WAL file is\n>> recylcled too.\n>> I think it's useful for tuning \"wal_segment_size\"\n>> if the number is high relative to the startup time, \"wal_segment_size\"\n>> must be bigger.\n\nYou're thinking to count all the WAL file switch? That number is equal\nto the number of WAL files generated since the last reset of pg_stat_wal?\n\n>>\n>>\n>> 4. Number of when WAL is flushed\n>>\n>> I think it's useful for tuning \"synchronous_commit\" and \"commit_delay\"\n>> for query executions.\n>> If the number of WAL is flushed is high, users can know\n>> \"synchronous_commit\" is useful for the workload.\n>>\n>> Also, it's useful for tuning \"wal_writer_delay\" and\n>> \"wal_writer_flush_after\" for wal writer.\n>> If the number is high, users can change the parameter for performance.\n>>\n>> I think it's better to separate this for backends and wal writer.\n\n+1\n\n>>\n>>\n>> 5. Wait time when WAL is flushed.\n>>\n>> This is the accumulated time when wal is flushed.\n>> If the time becomes much higher, users can detect the possibility of\n>> disk failure.\n\nThis should be tracked, e.g., only when track_io_timing is enabled?\nOtherwise, tracking that may cause performance overhead.\n\n>>\n>> Since users can see how much flash time occupies of the query execution time,\n>> it may lead to query tuning and so on.\n>>\n>> Since there is the above reason, I think it's better to separate this\n>> for backends and wal writer.\n\n\nI'm afraid that this counter for a backend may be a bit confusing. Because\nwhen the counter indicates small time, we may think that walwriter almost\nwrite WAL data and a backend doesn't take time to write WAL. But a backend\nmay be just waiting for walwriter to write WAL.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 15 Oct 2020 19:49:32 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
},
{
"msg_contents": "On 2020-10-15 19:49, Fujii Masao wrote:\n> On 2020/10/13 11:57, Masahiro Ikeda wrote:\n>> On 2020-10-06 15:57, Masahiro Ikeda wrote:\n>>> 2. Number of when new WAL file is created and zero-filled.\n>>> \n>>> As Fujii-san already commented, I think it's good for tuning.\n>>> \n>>>> Just idea; it may be worth exposing the number of when new WAL file \n>>>> is created and zero-filled. This initialization may have impact on \n>>>> the performance of write-heavy workload generating lots of WAL. If \n>>>> this number is reported high, to reduce the number of this \n>>>> initialization, we can tune WAL-related parameters so that more \n>>>> \"recycled\" WAL files can be hold.\n> \n> +1\n> \n> But it might be better to track the number of when new WAL file is\n> created whether it's zero-filled or not, if file creation and sync \n> itself\n> takes time.\n\nOK, I changed to track the number of when new WAL file is created.\n\n>>> \n>>> \n>>> 3. Number of when to switch the WAL logfile segment.\n>>> \n>>> This is similar to 2, but this counts the number of when WAL file is\n>>> recylcled too.\n>>> I think it's useful for tuning \"wal_segment_size\"\n>>> if the number is high relative to the startup time, \n>>> \"wal_segment_size\"\n>>> must be bigger.\n> \n> You're thinking to count all the WAL file switch? That number is equal\n> to the number of WAL files generated since the last reset of \n> pg_stat_wal?\n\nYes. I think it might be better to count it because I think the ratio in \nwhich a new WAL file is created is important.\nTo calculate it, we need the count all the WAL file switch.\n\n\n>>> 4. Number of when WAL is flushed\n>>> \n>>> I think it's useful for tuning \"synchronous_commit\" and \n>>> \"commit_delay\"\n>>> for query executions.\n>>> If the number of WAL is flushed is high, users can know\n>>> \"synchronous_commit\" is useful for the workload.\n>>> \n>>> Also, it's useful for tuning \"wal_writer_delay\" and\n>>> \"wal_writer_flush_after\" for wal writer.\n>>> If the number is high, users can change the parameter for \n>>> performance.\n>>> \n>>> I think it's better to separate this for backends and wal writer.\n> \n> +1\n\nThanks, I separated the statistics for backends and wal writer.\nWhen checkpointer process flushes the WAL, the statistics for backends \nare counted now.\nAlthough I think its impact is not big, is it better to make statistics \nfor checkpointer?\n\n\n>>> 5. Wait time when WAL is flushed.\n>>> \n>>> This is the accumulated time when wal is flushed.\n>>> If the time becomes much higher, users can detect the possibility of\n>>> disk failure.\n> \n> This should be tracked, e.g., only when track_io_timing is enabled?\n> Otherwise, tracking that may cause performance overhead.\n\nOK, I changed the implementation.\n\n\n>>> Since users can see how much flash time occupies of the query \n>>> execution time,\n>>> it may lead to query tuning and so on.\n>>> \n>>> Since there is the above reason, I think it's better to separate this\n>>> for backends and wal writer.\n> \n> \n> I'm afraid that this counter for a backend may be a bit confusing. \n> Because\n> when the counter indicates small time, we may think that walwriter \n> almost\n> write WAL data and a backend doesn't take time to write WAL. But a \n> backend\n> may be just waiting for walwriter to write WAL.\n\nThanks for your comments. I agreed.\n\n\n\nNow, the following is the view implemented in the attached patch.\nIf you have any other comments, please let me know.\n\n```\npostgres=# SELECT * FROM pg_stat_wal;\n-[ RECORD 1 ]-------+------------------------------\nwal_records | 1000128 # Total number of WAL records \ngenerated\nwal_fpi | 1 # Total number of WAL full page \nimages generated\nwal_bytes | 124013682 #Total amount of WAL bytes generated\nwal_buffers_full | 7952 #Total number of WAL data written to \nthe disk because WAL buffers got full\nwal_file | 14 #Total number of WAL file segment created or \nopened a pre-existing one\nwal_init_file | 7 #Total number of WAL file segment created\nwal_write_backend | 7956 #Total number of WAL data written to the \ndisk by backends\nwal_write_walwriter | 27 #Total number of WAL data written to the \ndisk by walwriter\nwal_write_time | 40 # Total amount of time that has been spent \nin the portion of WAL data was written to disk by backend and walwriter, \nin milliseconds\nwal_sync_backend | 1 # Total number of WAL data synced to the disk \nby backends\nwal_sync_walwriter | 6 #Total number of WAL data synced to the disk \nby walwriter\nwal_sync_time | 0 # Total amount of time that has been spent in \nthe portion of WAL data was synced to disk by backend and walwriter, in \nmilliseconds\nstats_reset | 2020-10-16 19:41:01.892272+09\n```\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Fri, 16 Oct 2020 19:58:02 +0900",
"msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: New statistics for tuning WAL buffer size"
}
] |
[
{
"msg_contents": "Hi,\n\nprotocol.sgml describes the protocol messages received by a BASE_BACKUP\nstreaming command, but doesn't tell anything about the additional\nCopyResponse data message containing the contents of the backup\nmanifest (if requested) after having received the tar files. So i\npropose the attached to give a little more detail in this paragraph.\n\n\tThanks, Bernd",
"msg_date": "Tue, 18 Aug 2020 14:41:09 +0200",
"msg_from": "Bernd Helmle <mailings@oopsware.de>",
"msg_from_op": true,
"msg_subject": "Documentation patch for backup manifests in protocol.sgml"
},
{
"msg_contents": "On Tue, Aug 18, 2020 at 02:41:09PM +0200, Bernd Helmle wrote:\n> Hi,\n> \n> protocol.sgml describes the protocol messages received by a BASE_BACKUP\n> streaming command, but doesn't tell anything about the additional\n> CopyResponse data message containing the contents of the backup\n> manifest (if requested) after having received the tar files. So i\n> propose the attached to give a little more detail in this paragraph.\n> \n> \tThanks, Bernd\n> \n\n> diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml\n> index 8b00235a516..31918144b37 100644\n> --- a/doc/src/sgml/protocol.sgml\n> +++ b/doc/src/sgml/protocol.sgml\n> @@ -2665,8 +2665,10 @@ The commands accepted in replication mode are:\n> <quote>ustar interchange format</quote> specified in the POSIX 1003.1-2008\n> standard) dump of the tablespace contents, except that the two trailing\n> blocks of zeroes specified in the standard are omitted.\n> - After the tar data is complete, a final ordinary result set will be sent,\n> - containing the WAL end position of the backup, in the same format as\n> + After the tar data is complete, and if a backup manifest was requested,\n> + another CopyResponse result is sent, containing the manifest data for the\n> + current base backup. In any case, a final ordinary result set will be\n> + sent, containing the WAL end position of the backup, in the same format as\n> the start position.\n> </para>\n\nIf someone can confirm this, I will apply it? Magnus?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Fri, 21 Aug 2020 18:03:32 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Documentation patch for backup manifests in protocol.sgml"
},
{
"msg_contents": "On Fri, Aug 21, 2020 at 06:03:32PM -0400, Bruce Momjian wrote:\n> On Tue, Aug 18, 2020 at 02:41:09PM +0200, Bernd Helmle wrote:\n>> protocol.sgml describes the protocol messages received by a BASE_BACKUP\n>> streaming command, but doesn't tell anything about the additional\n>> CopyResponse data message containing the contents of the backup\n>> manifest (if requested) after having received the tar files. So i\n>> propose the attached to give a little more detail in this paragraph.\n> \n> If someone can confirm this, I will apply it? Magnus?\n\nThe reason why backup manifests are sent at the end of a base backup\nis that they include the start and stop positions of the backup (see\ncaller of AddWALInfoToBackupManifest() in perform_base_backup()).\nOnce this is done, an extra CopyOutResponse message is indeed sent\nwithin SendBackupManifest() in backup_manifest.c.\n\nSo confirmed.\n--\nMichael",
"msg_date": "Mon, 24 Aug 2020 16:58:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Documentation patch for backup manifests in protocol.sgml"
},
{
"msg_contents": "On Mon, Aug 24, 2020 at 04:58:34PM +0900, Michael Paquier wrote:\n> On Fri, Aug 21, 2020 at 06:03:32PM -0400, Bruce Momjian wrote:\n> > On Tue, Aug 18, 2020 at 02:41:09PM +0200, Bernd Helmle wrote:\n> >> protocol.sgml describes the protocol messages received by a BASE_BACKUP\n> >> streaming command, but doesn't tell anything about the additional\n> >> CopyResponse data message containing the contents of the backup\n> >> manifest (if requested) after having received the tar files. So i\n> >> propose the attached to give a little more detail in this paragraph.\n> > \n> > If someone can confirm this, I will apply it? Magnus?\n> \n> The reason why backup manifests are sent at the end of a base backup\n> is that they include the start and stop positions of the backup (see\n> caller of AddWALInfoToBackupManifest() in perform_base_backup()).\n> Once this is done, an extra CopyOutResponse message is indeed sent\n> within SendBackupManifest() in backup_manifest.c.\n> \n> So confirmed.\n\nPatch applied through 13.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 31 Aug 2020 18:48:53 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Documentation patch for backup manifests in protocol.sgml"
},
{
"msg_contents": "On Mon, Aug 31, 2020 at 06:48:53PM -0400, Bruce Momjian wrote:\n> Patch applied through 13.\n\nThanks.\n--\nMichael",
"msg_date": "Tue, 1 Sep 2020 12:04:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Documentation patch for backup manifests in protocol.sgml"
},
{
"msg_contents": "Am Montag, den 31.08.2020, 18:48 -0400 schrieb Bruce Momjian:\n> > So confirmed.\n> \n> \n> Patch applied through 13.\n\nThanks!\n\n\n\n\n",
"msg_date": "Tue, 01 Sep 2020 10:54:52 +0200",
"msg_from": "Bernd Helmle <mailings@oopsware.de>",
"msg_from_op": true,
"msg_subject": "Re: Documentation patch for backup manifests in protocol.sgml"
}
] |
[
{
"msg_contents": "During crash recovery, the server writes this to log:\n\n< 2020-08-16 08:46:08.601 -03 >LOG: redo done at 2299C/1EC6BA00\n< 2020-08-16 08:46:08.877 -03 >LOG: checkpoint starting: end-of-recovery immediate\n\nBut runs a checkpoint, which can take a long time, while the \"ps\" display still\nsays \"recovering NNNNNNNN\".\n\nPlease change to say \"recovery checkpoint\" or similar, as I mentioned here.\nhttps://www.postgresql.org/message-id/20200118201111.GP26045@telsasoft.com\n\n-- \nJustin",
"msg_date": "Tue, 18 Aug 2020 17:52:39 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Wednesday, August 19, 2020 7:53 AM (GMT+9), Justin Pryzby wrote: \n\nHi, \n\nAll the patches apply, although when applying them the following appears:\n (Stripping trailing CRs from patch; use --binary to disable.)\n\n> During crash recovery, the server writes this to log:\n> \n> < 2020-08-16 08:46:08.601 -03 >LOG: redo done at 2299C/1EC6BA00 <\n> 2020-08-16 08:46:08.877 -03 >LOG: checkpoint starting: end-of-recovery\n> immediate\n> \n> But runs a checkpoint, which can take a long time, while the \"ps\" display still says\n> \"recovering NNNNNNNN\".\n> \n> Please change to say \"recovery checkpoint\" or similar, as I mentioned here.\n> https://www.postgresql.org/message-id/20200118201111.GP26045@telsasoft.c\n> om\n\nYes, I agree that it is helpful to tell users about that.\n\nAbout 0003 patch, there are similar phrases in bgwriter_flush_after and \nbackend_flush_after. Should those be updated too?\n\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -3170,7 +3170,7 @@ include_dir 'conf.d'\n limit the amount of dirty data in the kernel's page cache, reducing\n the likelihood of stalls when an <function>fsync</function> is issued at the end of the\n checkpoint, or when the OS writes data back in larger batches in the\n- background. Often that will result in greatly reduced transaction\n+ background. This feature will often result in greatly reduced transaction\n latency, but there also are some cases, especially with workloads\n that are bigger than <xref linkend=\"guc-shared-buffers\"/>, but smaller\n than the OS's page cache, where performance might degrade. This\n\n\nRegards,\nKirk Jamison\n\n\n",
"msg_date": "Wed, 19 Aug 2020 00:20:50 +0000",
"msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Wed, Aug 19, 2020 at 12:20:50AM +0000, k.jamison@fujitsu.com wrote:\n> On Wednesday, August 19, 2020 7:53 AM (GMT+9), Justin Pryzby wrote: \n>> During crash recovery, the server writes this to log:\n>> Please change to say \"recovery checkpoint\" or similar, as I mentioned here.\n>> https://www.postgresql.org/message-id/20200118201111.GP26045@telsasoft.c\n>> om\n> \n> Yes, I agree that it is helpful to tell users about that.\n\nThat could be helpful. Wouldn't it be better to use \"end-of-recovery\ncheckpoint\" instead? That's the common wording in the code comments.\n\nI don't see the point of patch 0002. In the same paragraph, we\nalready know that this applies to any checkpoints.\n\n> About 0003 patch, there are similar phrases in bgwriter_flush_after and \n> backend_flush_after. Should those be updated too?\n\nYep, makes sense.\n--\nMichael",
"msg_date": "Thu, 20 Aug 2020 17:09:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Thu, Aug 20, 2020 at 05:09:05PM +0900, Michael Paquier wrote:\n> That could be helpful. Wouldn't it be better to use \"end-of-recovery\n> checkpoint\" instead? That's the common wording in the code comments.\n> \n> I don't see the point of patch 0002. In the same paragraph, we\n> already know that this applies to any checkpoints.\n\nThinking more about this.. Could it be better to just add some calls\nto set_ps_display() directly in CreateCheckPoint()? This way, both\nthe checkpointer as well as the startup process at the end of recovery\nwould benefit from the change.\n--\nMichael",
"msg_date": "Mon, 31 Aug 2020 15:52:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Mon, Aug 31, 2020 at 03:52:44PM +0900, Michael Paquier wrote:\n> On Thu, Aug 20, 2020 at 05:09:05PM +0900, Michael Paquier wrote:\n> > That could be helpful. Wouldn't it be better to use \"end-of-recovery\n> > checkpoint\" instead? That's the common wording in the code comments.\n> > \n> > I don't see the point of patch 0002. In the same paragraph, we\n> > already know that this applies to any checkpoints.\n> \n> Thinking more about this.. Could it be better to just add some calls\n> to set_ps_display() directly in CreateCheckPoint()? This way, both\n> the checkpointer as well as the startup process at the end of recovery\n> would benefit from the change.\n\nWhat would you want the checkpointer's ps to say ?\n\nNormally it just says:\npostgres 3468 3151 0 Aug27 ? 00:20:57 postgres: checkpointer \n\nOr do you mean do the same thing as now, but one layer lower, like:\n\n@@ -8728,6 +8725,9 @@ CreateCheckPoint(int flags)\n+ if (flags & CHECKPOINT_END_OF_RECOVERY)\n+ set_ps_display(\"recovery checkpoint\");\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 9 Sep 2020 21:00:50 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Wed, Sep 09, 2020 at 09:00:50PM -0500, Justin Pryzby wrote:\n> What would you want the checkpointer's ps to say ?\n> \n> Normally it just says:\n> postgres 3468 3151 0 Aug27 ? 00:20:57 postgres: checkpointer \n\nNote that CreateCheckPoint() can also be called from the startup\nprocess if the bgwriter has not been launched once recovery finishes.\n\n> Or do you mean do the same thing as now, but one layer lower, like:\n>\n> @@ -8728,6 +8725,9 @@ CreateCheckPoint(int flags)\n> + if (flags & CHECKPOINT_END_OF_RECOVERY)\n> + set_ps_display(\"recovery checkpoint\");\n\nFor the use-case discussed here, that would be fine. Now the\ndifficult point is how much information we can actually display\nwithout bloating ps while still have something meaningful. Showing\nall the information from LogCheckpointStart() would bloat the output a\nlot for example. So, thinking about that, my take would be to have ps\ndisplay the following at the beginning of CreateCheckpoint() and\nCreateRestartPoint():\n- restartpoint or checkpoint\n- shutdown\n- end-of-recovery\n\nThe output also needs to be cleared once the routines finish or if\nthere is a skip, of course.\n--\nMichael",
"msg_date": "Thu, 10 Sep 2020 13:37:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Thu, Sep 10, 2020 at 01:37:10PM +0900, Michael Paquier wrote:\n> On Wed, Sep 09, 2020 at 09:00:50PM -0500, Justin Pryzby wrote:\n> > What would you want the checkpointer's ps to say ?\n> > \n> > Normally it just says:\n> > postgres 3468 3151 0 Aug27 ? 00:20:57 postgres: checkpointer \n> \n> Note that CreateCheckPoint() can also be called from the startup\n> process if the bgwriter has not been launched once recovery finishes.\n> \n> > Or do you mean do the same thing as now, but one layer lower, like:\n> >\n> > @@ -8728,6 +8725,9 @@ CreateCheckPoint(int flags)\n> > + if (flags & CHECKPOINT_END_OF_RECOVERY)\n> > + set_ps_display(\"recovery checkpoint\");\n> \n> For the use-case discussed here, that would be fine. Now the\n> difficult point is how much information we can actually display\n> without bloating ps while still have something meaningful. Showing\n> all the information from LogCheckpointStart() would bloat the output a\n> lot for example. So, thinking about that, my take would be to have ps\n> display the following at the beginning of CreateCheckpoint() and\n> CreateRestartPoint():\n> - restartpoint or checkpoint\n> - shutdown\n> - end-of-recovery\n> \n> The output also needs to be cleared once the routines finish or if\n> there is a skip, of course.\n\nIn my initial request, I *only* care about the startup process' recovery\ncheckpoint. AFAIK, this exits when it's done, so there may be no need to\n\"revert\" to the previous \"ps\". However, one could argue that it's currently a\nbug that the \"recovering NNN\" portion isn't updated after finishing the WAL\nfiles.\n\nStartupXLOG -> xlogreader -> XLogPageRead -> WaitForWALToBecomeAvailable -> XLogFileReadAnyTLI -> XLogFileRead\n -> CreateCheckPoint\n\nMaybe it's a bad idea if the checkpointer is continuously changing its display.\nI don't see the utility in it, since log_checkpoints does more than ps could\never do. 
I'm concerned that would break things for someone using something\nlike pgrep.\n|$ ps -wwf `pgrep -f 'checkpointer *$'`\n|UID PID PPID C STIME TTY STAT TIME CMD\n|postgres 9434 9418 0 Aug20 ? Ss 214:25 postgres: checkpointer \n\n|pryzbyj 23010 23007 0 10:33 ? 00:00:00 postgres: checkpointer checkpoint\n\nI think this one is by far the most common, but somewhat confusing, since it's\nonly one word. This led me to put parenthesis around it:\n\n|pryzbyj 26810 26809 82 10:53 ? 00:00:12 postgres: startup (end-of-recovery checkpoint)\n\nRelated: I have always thought that this message meant \"recovery will complete\nReal Soon\", but I now understand it to mean \"beginning the recovery checkpoint,\nwhich is flagged CHECKPOINT_IMMEDIATE\" (and may take a long time).\n\n|2020-09-19 10:53:26.345 CDT [26810] LOG: checkpoint starting: end-of-recovery immediate\n\n-- \nJustin",
"msg_date": "Sat, 19 Sep 2020 11:00:31 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Sat, Sep 19, 2020 at 11:00:31AM -0500, Justin Pryzby wrote:\n> Maybe it's a bad idea if the checkpointer is continuously changing its display.\n> I don't see the utility in it, since log_checkpoints does more than ps could\n> ever do. I'm concerned that would break things for someone using something\n> like pgrep.\n\nAt the end of recovery, there is a code path where the startup process\ntriggers the checkpoint by itself if the bgwriter is not launched, but\nthere is also a second code path where, if the bgwriter is started and\nif the cluster not promoted, the startup process would request for an\nimmediate checkpoint and then wait for it. It is IMO equally\nimportant to update the display of the checkpointer in this case to\nshow that the checkpointer is running an end-of-recovery checkpoint.\n\n> Related: I have always thought that this message meant \"recovery will complete\n> Real Soon\", but I now understand it to mean \"beginning the recovery checkpoint,\n> which is flagged CHECKPOINT_IMMEDIATE\" (and may take a long time).\n\nYep. And at the end of crash recovery seconds feel like minutes.\n\nI agree that \"checkpointer checkpoint\" is not the best fit. Using\nparenthesis would also be inconsistent with the other usages of this\nAPI in the backend code. What about adding \"running\" then? This\nwould give \"checkpointer running end-of-recovery checkpoint\".\n\nWhile looking at this patch, I got tempted to use a StringInfo to fill\nin the string to display as that would make the addition of any extra\ninformation easier, giving the attached.\n--\nMichael",
"msg_date": "Fri, 2 Oct 2020 16:28:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Fri, Oct 02, 2020 at 04:28:14PM +0900, Michael Paquier wrote:\n> > Related: I have always thought that this message meant \"recovery will complete\n> > Real Soon\", but I now understand it to mean \"beginning the recovery checkpoint,\n> > which is flagged CHECKPOINT_IMMEDIATE\" (and may take a long time).\n> \n> Yep. And at the end of crash recovery seconds feel like minutes.\n> \n> I agree that \"checkpointer checkpoint\" is not the best fit. Using\n> parenthesis would also be inconsistent with the other usages of this\n> API in the backend code.\n\nI think maybe I got the idea for parenthesis from these:\nsrc/backend/tcop/postgres.c: set_ps_display(\"idle in transaction (aborted)\");\nsrc/backend/postmaster/postmaster.c- if (port->remote_port[0] != '\\0')\nsrc/backend/postmaster/postmaster.c- appendStringInfo(&ps_data, \"(%s)\", port->remote_port);\n\n\n> What about adding \"running\" then? This\n> would give \"checkpointer running end-of-recovery checkpoint\".\n\nWhat about one of these?\n\"checkpointer: running end-of-recovery checkpoint\"\n\"checkpointer running: end-of-recovery checkpoint\"\n\nThanks for looking.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 2 Oct 2020 04:13:54 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "Hi,\r\n\r\nI like the idea behind this patch. I wrote a new version with a\r\ncouple of changes:\r\n\r\n 1. Instead of using StringInfoData, just use a small buffer along\r\n with snprintf(). This is similar to what the WAL receiver\r\n process does.\r\n 2. If the process is the checkpointer, reset the display to \"idle\"\r\n once the checkpoint or restartpoint is finished. It's easy\r\n enough to discover the backend type, and I think adding a bit\r\n more clarity to the checkpointer display is a nice touch.\r\n 3. Instead of \"running,\" use \"creating.\" IMO that's a bit more\r\n accurate.\r\n\r\nI considered also checking that update_process_title was enabled, but\r\nI figured that these ps display updates should happen sparsely enough\r\nthat it wouldn't make much of an impact.\r\n\r\nWhat do you think?\r\n\r\nNathan",
"msg_date": "Thu, 3 Dec 2020 21:18:07 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Thu, Dec 03, 2020 at 09:18:07PM +0000, Bossart, Nathan wrote:\n> I considered also checking that update_process_title was enabled, but\n> I figured that these ps display updates should happen sparsely enough\n> that it wouldn't make much of an impact.\n\nSince bf68b79e5, update_ps_display is responsible for checking\nupdate_process_title. Its other, remaining uses are apparently just acting as\nminor optimizations to guard against useless snprintf's.\n\nSee also https://www.postgresql.org/message-id/flat/1288021.1600178478%40sss.pgh.pa.us\nin which (I just saw) Tom wrote:\n\n> Seems like a good argument, but you'd have to be careful about the\n> final state when you stop overriding update_process_title --- it can't\n> be left looking like it's still-in-progress on some random WAL file.\n\nI think that's a live problem, not just a concern for that patch.\nIt was exactly my complaint leading to this thread:\n\n> But runs a checkpoint, which can take a long time, while the \"ps\" display still\n> says \"recovering NNNNNNNN\".\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 3 Dec 2020 15:58:09 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On 12/3/20, 1:58 PM, \"Justin Pryzby\" <pryzby@telsasoft.com> wrote:\r\n> On Thu, Dec 03, 2020 at 09:18:07PM +0000, Bossart, Nathan wrote:\r\n>> I considered also checking that update_process_title was enabled, but\r\n>> I figured that these ps display updates should happen sparsely enough\r\n>> that it wouldn't make much of an impact.\r\n>\r\n> Since bf68b79e5, update_ps_display is responsible for checking\r\n> update_process_title. Its other, remaining uses are apparently just acting as\r\n> minor optimizations to guard against useless snprintf's.\r\n>\r\n> See also https://www.postgresql.org/message-id/flat/1288021.1600178478%40sss.pgh.pa.us\r\n> in which (I just saw) Tom wrote:\r\n>\r\n>> Seems like a good argument, but you'd have to be careful about the\r\n>> final state when you stop overriding update_process_title --- it can't\r\n>> be left looking like it's still-in-progress on some random WAL file.\r\n>\r\n> I think that's a live problem, not just a concern for that patch.\r\n> It was exactly my complaint leading to this thread:\r\n>\r\n>> But runs a checkpoint, which can take a long time, while the \"ps\" display still\r\n>> says \"recovering NNNNNNNN\".\r\n\r\nAh, I see. Thanks for pointing this out.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Thu, 3 Dec 2020 22:37:32 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On 12/3/20, 1:19 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\r\n> I like the idea behind this patch. I wrote a new version with a\r\n> couple of changes:\r\n\r\nI missed an #include in this patch. Here's a new one with that fixed.\r\n\r\nNathan",
"msg_date": "Fri, 4 Dec 2020 17:17:06 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "\n\nOn 2020/12/05 2:17, Bossart, Nathan wrote:\n> On 12/3/20, 1:19 PM, \"Bossart, Nathan\" <bossartn@amazon.com> wrote:\n>> I like the idea behind this patch. I wrote a new version with a\n>> couple of changes:\n> \n> I missed an #include in this patch. Here's a new one with that fixed.\n\nI agree it might be helpful to display something like \"checkpointing\" or \"waiting for checkpoint\" in PS title for the startup process.\n\nBut, at least for me, it seems strange to display \"idle\" only for checkpointer. *If* we want to monitor the current status of checkpointer, it should be shown as wait event in pg_stat_activity, instead?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 9 Dec 2020 02:00:44 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Wed, Dec 09, 2020 at 02:00:44AM +0900, Fujii Masao wrote:\n> I agree it might be helpful to display something like\n> \"checkpointing\" or \"waiting for checkpoint\" in PS title for the\n> startup process.\n\nThanks.\n\n> But, at least for me, it seems strange to display \"idle\" only for\n> checkpointer.\n\nYeah, I'd rather leave out the part of the patch where we have the\ncheckpointer say \"idle\". So I still think that what v3 did upthread,\nby just resetting the ps display in all cases, feels more natural. So\nwe should remove the parts of v5 from checkpointer.c.\n\n+ * Reset the ps status display. We set the status to \"idle\" for the\n+ * checkpointer process, and we clear it for anything else that runs this\n+ * code.\n+ */\n+ if (MyBackendType == B_CHECKPOINTER)\n+ set_ps_display(\"idle\");\n+ else\n+ set_ps_display(\"\");\nRegarding this part, this just needs a reset with an empty string.\nThe second sentence is confusing (it partially comes from v3, from\nme..). Do we lose anything by removing the second sentence of this\ncomment?\n\n+ snprintf(activitymsg, sizeof(activitymsg), \"creating %s%scheckpoint\",\n[...]\n+ snprintf(activitymsg, sizeof(activitymsg), \"creating %srestartpoint\",\nFWIW, I still find \"running\" to sound better than \"creating\" here. An\nextra term I can think of that could be adapted is \"performing\".\n\n> *If* we want to monitor the current status of\n> checkpointer, it should be shown as wait event in pg_stat_activity,\n> instead? \n\nThat would be WAIT_EVENT_CHECKPOINTER_MAIN, now there has been also on\nthis thread an argument that you would not have access to\npg_stat_activity until crash recovery completes and consistency is\nrestored.\n--\nMichael",
"msg_date": "Wed, 9 Dec 2020 15:15:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On 12/8/20, 10:16 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Wed, Dec 09, 2020 at 02:00:44AM +0900, Fujii Masao wrote:\r\n>> I agree it might be helpful to display something like\r\n>> \"checkpointing\" or \"waiting for checkpoint\" in PS title for the\r\n>> startup process.\r\n>\r\n> Thanks.\r\n>\r\n>> But, at least for me, it seems strange to display \"idle\" only for\r\n>> checkpointer.\r\n>\r\n> Yeah, I'd rather leave out the part of the patch where we have the\r\n> checkpointer say \"idle\". So I still think that what v3 did upthread,\r\n> by just resetting the ps display in all cases, feels more natural. So\r\n> we should remove the parts of v5 from checkpointer.c.\r\n\r\nThat seems fine to me. I think it is most important that the end-of-\r\nrecovery and shutdown checkpoints are shown. I'm not sure there's\r\nmuch value in updating the process title for automatic checkpoints and\r\ncheckpoints triggered via the CHECKPOINT command, so IMO we could skip\r\nthose, too. I made these changes in the new version of the patch.\r\n\r\n> + * Reset the ps status display. We set the status to \"idle\" for the\r\n> + * checkpointer process, and we clear it for anything else that runs this\r\n> + * code.\r\n> + */\r\n> + if (MyBackendType == B_CHECKPOINTER)\r\n> + set_ps_display(\"idle\");\r\n> + else\r\n> + set_ps_display(\"\");\r\n> Regarding this part, this just needs a reset with an empty string.\r\n> The second sentence is confusing (it partially comes from v3, from\r\n> me..). Do we lose anything by removing the second sentence of this\r\n> comment?\r\n\r\nI've fixed this in the new version of the patch.\r\n\r\n> + snprintf(activitymsg, sizeof(activitymsg), \"creating %s%scheckpoint\",\r\n> [...]\r\n> + snprintf(activitymsg, sizeof(activitymsg), \"creating %srestartpoint\",\r\n> FWIW, I still find \"running\" to sound better than \"creating\" here. 
An\r\n> extra term I can think of that could be adapted is \"performing\".\r\n\r\nI think I prefer \"performing\" over \"running\" because that's what the\r\ndocs use [0]. I've changed it to \"performing\" in the new version of\r\nthe patch.\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/docs/devel/wal-configuration.html",
"msg_date": "Wed, 9 Dec 2020 18:37:22 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Wed, Dec 09, 2020 at 06:37:22PM +0000, Bossart, Nathan wrote:\n> On 12/8/20, 10:16 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n>> Yeah, I'd rather leave out the part of the patch where we have the\n>> checkpointer say \"idle\". So I still think that what v3 did upthread,\n>> by just resetting the ps display in all cases, feels more natural. So\n>> we should remove the parts of v5 from checkpointer.c.\n> \n> That seems fine to me. I think it is most important that the end-of-\n> recovery and shutdown checkpoints are shown. I'm not sure there's\n> much value in updating the process title for automatic checkpoints and\n> checkpoints triggered via the CHECKPOINT command, so IMO we could skip\n> those, too. I made these changes in the new version of the patch.\n\nIt would be possible to use pg_stat_activity in most cases here, so I\nagree to settle down to the minimum we can agree on for now, and maybe\ndiscuss separately if this should be extended in some way or another in\nthe future if there is a use-case for that. So I'd say that what you\nhave here is logically fine. \n\n> > + snprintf(activitymsg, sizeof(activitymsg), \"creating %s%scheckpoint\",\n> > [...]\n> > + snprintf(activitymsg, sizeof(activitymsg), \"creating %srestartpoint\",\n> > FWIW, I still find \"running\" to sound better than \"creating\" here. An\n> > extra term I can think of that could be adapted is \"performing\".\n> \n> I think I prefer \"performing\" over \"running\" because that's what the\n> docs use [0]. 
I've changed it to \"performing\" in the new version of\n> the patch.\n\nThat's also used in the code comments, FWIW.\n\n@@ -9282,6 +9296,7 @@ CreateRestartPoint(int flags)\n XLogRecPtr endptr;\n XLogSegNo _logSegNo;\n TimestampTz xtime;\n+ bool shutdown = (flags & CHECKPOINT_IS_SHUTDOWN);\nYou are right that CHECKPOINT_END_OF_RECOVERY should not be called for\na restart point, so that's correct.\n\nHowever, I think that it would be better to have all those four code\npaths controlled by a single routine, where we pass down the\ncheckpoint flags and fill in the ps string directly depending on what\nthe caller wants to do. This way, we will avoid any inconsistent\nupdates if this stuff gets extended in the future, and there will be\nall the information at hand to extend the logic. So I have simplified\nyour patch as per the attached. Thoughts?\n--\nMichael",
"msg_date": "Fri, 11 Dec 2020 12:54:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "Isn't the sense of \"reset\" inverted ?\n\nI think maybe you mean to do set_ps_display(\"\"); in the \"if reset\".\n\nOn Fri, Dec 11, 2020 at 12:54:22PM +0900, Michael Paquier wrote:\n> +update_checkpoint_display(int flags, bool restartpoint, bool reset)\n> +{\n> +\tif (reset)\n> +\t{\n> +\t\tchar activitymsg[128];\n> +\n> +\t\tsnprintf(activitymsg, sizeof(activitymsg), \"performing %s%s%s\",\n> +\t\t\t\t (flags & CHECKPOINT_END_OF_RECOVERY) ? \"end-of-recovery \" : \"\",\n> +\t\t\t\t (flags & CHECKPOINT_IS_SHUTDOWN) ? \"shutdown \" : \"\",\n> +\t\t\t\t restartpoint ? \"restartpoint\" : \"checkpoint\");\n> +\t\tset_ps_display(activitymsg);\n> +\t}\n> +\telse\n> +\t\tset_ps_display(\"\");\n> +}\n> +\n> +\n> /*\n> * Perform a checkpoint --- either during shutdown, or on-the-fly\n> *\n> @@ -8905,6 +8937,9 @@ CreateCheckPoint(int flags)\n> \tif (log_checkpoints)\n> \t\tLogCheckpointStart(flags, false);\n> \n> +\t/* Update the process title */\n> +\tupdate_checkpoint_display(flags, false, false);\n> +\n> \tTRACE_POSTGRESQL_CHECKPOINT_START(flags);\n> \n> \t/*\n> @@ -9120,6 +9155,9 @@ CreateCheckPoint(int flags)\n> \t/* Real work is done, but log and update stats before releasing lock. */\n> \tLogCheckpointEnd(false);\n> \n> +\t/* Reset the process title */\n> +\tupdate_checkpoint_display(flags, false, true);\n> +\n> \tTRACE_POSTGRESQL_CHECKPOINT_DONE(CheckpointStats.ckpt_bufs_written,\n> \t\t\t\t\t\t\t\t\t NBuffers,\n> \t\t\t\t\t\t\t\t\t CheckpointStats.ckpt_segs_added,\n> @@ -9374,6 +9412,9 @@ CreateRestartPoint(int flags)\n> \tif (log_checkpoints)\n> \t\tLogCheckpointStart(flags, true);\n> \n> +\t/* Update the process title */\n> +\tupdate_checkpoint_display(flags, true, false);\n> +\n> \tCheckPointGuts(lastCheckPoint.redo, flags);\n> \n> \t/*\n> @@ -9492,6 +9533,9 @@ CreateRestartPoint(int flags)\n> \t/* Real work is done, but log and update before releasing lock. 
*/\n> \tLogCheckpointEnd(true);\n> \n> +\t/* Reset the process title */\n> +\tupdate_checkpoint_display(flags, true, true);\n> +\n> \txtime = GetLatestXTime();\n> \tereport((log_checkpoints ? LOG : DEBUG2),\n> \t\t\t(errmsg(\"recovery restart point at %X/%X\",\n\n\n",
"msg_date": "Thu, 10 Dec 2020 22:02:10 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Thu, Dec 10, 2020 at 10:02:10PM -0600, Justin Pryzby wrote:\n> Isn't the sense of \"reset\" inverted ?\n\nIt is ;p\n--\nMichael",
"msg_date": "Fri, 11 Dec 2020 13:17:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On 12/10/20, 7:54 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> However, I think that it would be better to have all those four code\r\n> paths controlled by a single routine, where we pass down the\r\n> checkpoint flags and fill in the ps string directly depending on what\r\n> the caller wants to do. This way, we will avoid any inconsistent\r\n> updates if this stuff gets extended in the future, and there will be\r\n> all the information at hand to extend the logic. So I have simplified\r\n> your patch as per the attached. Thoughts?\r\n\r\nThis approach seems reasonable to me. I've attached my take on it.\r\n\r\nNathan",
"msg_date": "Fri, 11 Dec 2020 18:54:42 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Fri, Dec 11, 2020 at 06:54:42PM +0000, Bossart, Nathan wrote:\n> This approach seems reasonable to me. I've attached my take on it.\n\n+ /* Reset the process title */\n+ set_ps_display(\"\");\nI would still recommend to avoid calling set_ps_display() if there is\nno need to so as we avoid useless system calls, so I think that this\nstuff had better use a common path for the set and reset logic.\n\nMy counter-proposal is like the attached, with the set/reset part not\nreversed this time, and the code indented :p\n--\nMichael",
"msg_date": "Sat, 12 Dec 2020 08:59:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On 12/11/20, 4:00 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Fri, Dec 11, 2020 at 06:54:42PM +0000, Bossart, Nathan wrote:\r\n>> This approach seems reasonable to me. I've attached my take on it.\r\n>\r\n> + /* Reset the process title */\r\n> + set_ps_display(\"\");\r\n> I would still recommend to avoid calling set_ps_display() if there is\r\n> no need to so as we avoid useless system calls, so I think that this\r\n> stuff had better use a common path for the set and reset logic.\r\n>\r\n> My counter-proposal is like the attached, with the set/reset part not\r\n> reversed this time, and the code indented :p\r\n\r\nHaha. LGTM.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Sat, 12 Dec 2020 00:41:25 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Sat, Dec 12, 2020 at 12:41:25AM +0000, Bossart, Nathan wrote:\n> On 12/11/20, 4:00 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n>> My counter-proposal is like the attached, with the set/reset part not\n>> reversed this time, and the code indented :p\n> \n> Haha. LGTM.\n\nThanks. I have applied this one, then.\n--\nMichael",
"msg_date": "Mon, 14 Dec 2020 12:01:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 12:01:33PM +0900, Michael Paquier wrote:\n> On Sat, Dec 12, 2020 at 12:41:25AM +0000, Bossart, Nathan wrote:\n> > On 12/11/20, 4:00 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n> >> My counter-proposal is like the attached, with the set/reset part not\n> >> reversed this time, and the code indented :p\n> > \n> > Haha. LGTM.\n> \n> Thanks. I have applied this one, then.\n\nThank you.\n\nI'm not sure, but we could consider backpatching something to clear the\n\"recovering NNN\" that's currently displayed during checkpoint, even though\nrecovery of NNN has already completed. Possibly just calling\nset_ps_display(\"\"); after \"Real work is done\".\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 13 Dec 2020 21:22:24 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Sun, Dec 13, 2020 at 09:22:24PM -0600, Justin Pryzby wrote:\n> I'm not sure, but we could consider backpatching something to clear the\n> \"recovering NNN\" that's currently displayed during checkpoint, even though\n> recovery of NNN has already completed. Possibly just calling\n> set_ps_display(\"\"); after \"Real work is done\".\n\nThis behavior exists for ages and there were not a lot of complaints\non this matter, so I see no reason to touch back-branches more than\nnecessary.\n--\nMichael",
"msg_date": "Mon, 14 Dec 2020 12:52:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Mon, Dec 14, 2020 at 12:01:33PM +0900, Michael Paquier wrote:\n> On Sat, Dec 12, 2020 at 12:41:25AM +0000, Bossart, Nathan wrote:\n> > On 12/11/20, 4:00 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n> >> My counter-proposal is like the attached, with the set/reset part not\n> >> reversed this time, and the code indented :p\n> > \n> > Haha. LGTM.\n> \n> Thanks. I have applied this one, then.\n\nTo refresh: commit df9274adf adds \"checkpoint\" info to \"ps\", which previously\ncontinued to say \"recovering NNNNN\" even after finishing WAL replay, and\nthroughout the checkpoint.\n\nNow, I wonder whether the startup process should also include some detail about\n\"syncing data dir\". It's possible to strace the process to see what it's\ndoing, but most DBA would probably not know that, and it's helpful to know the\nstatus of recovery and what part of recovery is slow: sync, replay, or\ncheckpoint. commit df9274adf improved the situation between replay and\nckpoint, but it's still not clear what \"postgres: startup\" means before replay\nstarts.\n\nThere's some interaction between Thomas' commit 61752afb2 and\nrecovery_init_sync_method - if we include a startup message, it should\ndistinguish between \"syncing data dirs (fsync)\" and (syncfs).\n\nPutting this into fd.c seems to assume that we can clobber \"ps\", which is fine\nwhen called by StartupXLOG(), but it's a public interface, so I'm not sure if\nit's okay to assume that's the only caller. Maybe it should check if\nMyAuxProcType == B_STARTUP.\n\n-- \nJustin",
"msg_date": "Sun, 6 Jun 2021 21:13:48 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On 6/6/21, 7:14 PM, \"Justin Pryzby\" <pryzby@telsasoft.com> wrote:\r\n> Now, I wonder whether the startup process should also include some detail about\r\n> \"syncing data dir\". It's possible to strace the process to see what it's\r\n> doing, but most DBA would probably not know that, and it's helpful to know the\r\n> status of recovery and what part of recovery is slow: sync, replay, or\r\n> checkpoint. commit df9274adf improved the situation between replay and\r\n> ckpoint, but it's still not clear what \"postgres: startup\" means before replay\r\n> starts.\r\n\r\nI've seen a few functions cause lengthy startups, including\r\nSyncDataDirectory() (for which I was grateful to see 61752afb),\r\nStartupReorderBuffer(), and RemovePgTempFiles(). I like the idea of\r\nadding additional information in the ps title, but I also think it is\r\nworth exploring additional ways to improve on these O(n) startup\r\ntasks.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 7 Jun 2021 16:02:03 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Mon, Jun 7, 2021 at 12:02 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> On 6/6/21, 7:14 PM, \"Justin Pryzby\" <pryzby@telsasoft.com> wrote:\n> > Now, I wonder whether the startup process should also include some detail about\n> > \"syncing data dir\". It's possible to strace the process to see what it's\n> > doing, but most DBA would probably not know that, and it's helpful to know the\n> > status of recovery and what part of recovery is slow: sync, replay, or\n> > checkpoint. commit df9274adf improved the situation between replay and\n> > ckpoint, but it's still not clear what \"postgres: startup\" means before replay\n> > starts.\n>\n> I've seen a few functions cause lengthy startups, including\n> SyncDataDirectory() (for which I was grateful to see 61752afb),\n> StartupReorderBuffer(), and RemovePgTempFiles(). I like the idea of\n> adding additional information in the ps title, but I also think it is\n> worth exploring additional ways to improve on these O(n) startup\n> tasks.\n\nSee also the nearby thread entitled \"when the startup process doesn't\"\nwhich touches on this same issue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Jun 2021 13:28:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Mon, Jun 07, 2021 at 01:28:06PM -0400, Robert Haas wrote:\n> On Mon, Jun 7, 2021 at 12:02 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>> I've seen a few functions cause lengthy startups, including\n>> SyncDataDirectory() (for which I was grateful to see 61752afb),\n>> StartupReorderBuffer(), and RemovePgTempFiles(). I like the idea of\n>> adding additional information in the ps title, but I also think it is\n>> worth exploring additional ways to improve on these O(n) startup\n>> tasks.\n\n+1. I also agree with doing something for the ps output of the\nstartup process when these are happening in crash recovery.\n\n> See also the nearby thread entitled \"when the startup process doesn't\"\n> which touches on this same issue.\n\nHere is a link to the thread:\nhttps://www.postgresql.org/message-id/CA+TgmoaHQrgDFOBwgY16XCoMtXxsrVGFB2jNCvb7-ubuEe1MGg@mail.gmail.com\n--\nMichael",
"msg_date": "Thu, 10 Jun 2021 10:32:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
},
{
"msg_contents": "On Sun, Jun 06, 2021 at 09:13:48PM -0500, Justin Pryzby wrote:\n> Putting this into fd.c seems to assume that we can clobber \"ps\", which is fine\n> when called by StartupXLOG(), but it's a public interface, so I'm not sure if\n> it's okay to assume that's the only caller. Maybe it should check if\n> MyAuxProcType == B_STARTUP.\n\nI would be tempted to just add that into StartupXLOG() rather than\nimplying that callers of SyncDataDirectory() are fine to get their ps\noutput enforced all the time.\n--\nMichael",
"msg_date": "Thu, 10 Jun 2021 10:35:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: please update ps display for recovery checkpoint"
}
] |
[
{
"msg_contents": "Hi,\n\nI started to hack on making pg_rewind crash-safe (see [1]), but I \nquickly got side-tracked into refactoring and tidying up up the code in \ngeneral. I ended up with this series of patches:\n\nThe first four patches are just refactoring that make the code and the \nlogic more readable. Tom Lane commented about the messy comments earlier \n(see [2]), and I hope these patches will alleviate that confusion. See \ncommit messages for details.\n\nThe last patch refactors the logic in libpq_fetch.c, so that it no \nlonger uses a temporary table in the source system. That allows using a \nhot standby server as the pg_rewind source.\n\nThis doesn't do anything about pg_rewind's crash-safety yet, but I'll \ntry work on that after these patches.\n\n[1] \nhttps://www.postgresql.org/message-id/d8dcc760-8780-5084-f066-6d663801d2e2%40iki.fi\n\n[2] https://www.postgresql.org/message-id/30255.1522711675%40sss.pgh.pa.us\n\n- Heikki",
"msg_date": "Wed, 19 Aug 2020 15:50:16 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "Hello.\n\nAt Wed, 19 Aug 2020 15:50:16 +0300, Heikki Linnakangas <hlinnaka@iki.fi> wrote in \n> Hi,\n> \n> I started to hack on making pg_rewind crash-safe (see [1]), but I\n> quickly got side-tracked into refactoring and tidying up up the code\n> in general. I ended up with this series of patches:\n\n^^;\n\n> The first four patches are just refactoring that make the code and the\n> logic more readable. Tom Lane commented about the messy comments\n> earlier (see [2]), and I hope these patches will alleviate that\n> confusion. See commit messages for details.\n\n0001: It looks fine. The new location is reasonable but adding one\n extern is a bit annoying. But I don't object it.\n\n0002: Rewording that old->target and new->source makes the meaning far\n clearer. Moving decisions core code into filemap_finalize is\n reasonable.\n\n By the way, some of the rules are remaining in\n process_source/target_file. For example, pg_wal that is a symlink,\n or tmporary directories. and excluded files. The number of\n excluded files doesn't seem so large so it doesn't seem that the\n exclusion give advantage so much. They seem to me movable to\n filemap_finalize, and we can get rid of the callbacks by doing\n so. Is there any reason that the remaining rules should be in the\n callbacks?\n\n0003: Thomas is propsing sort template. It could be used if committed.\n\n0004:\n\n The names of many of the functions gets an additional word \"local\"\n but I don't get the meaning clearly. but its about linguistic sense\n and I'm not fit to that..\n \n-rewind_copy_file_range(const char *path, off_t begin, off_t end, bool trunc)\n+local_fetch_file_range(rewind_source *source, const char *path, uint64 off,\n\n The function actually copying the soruce range to the target file. 
So\n \"fetch\" might give somewhat different meaning, but its about\n linguistic (omitted..).\n\n\n> The last patch refactors the logic in libpq_fetch.c, so that it no\n> longer uses a temporary table in the source system. That allows using\n> a hot standby server as the pg_rewind source.\n\nThat sounds nice.\n\n> This doesn't do anything about pg_rewind's crash-safety yet, but I'll\n> try work on that after these patches.\n> \n> [1]\n> https://www.postgresql.org/message-id/d8dcc760-8780-5084-f066-6d663801d2e2%40iki.fi\n> \n> [2]\n> https://www.postgresql.org/message-id/30255.1522711675%40sss.pgh.pa.us\n> \n> - Heikki\n\nI'm going to continue reviewing this later.\n\nreagards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 20 Aug 2020 17:32:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "On 20/08/2020 11:32, Kyotaro Horiguchi wrote:\n> 0002: Rewording that old->target and new->source makes the meaning far\n> clearer. Moving decisions core code into filemap_finalize is\n> reasonable.\n> \n> By the way, some of the rules are remaining in\n> process_source/target_file. For example, pg_wal that is a symlink,\n> or tmporary directories. and excluded files. The number of\n> excluded files doesn't seem so large so it doesn't seem that the\n> exclusion give advantage so much. They seem to me movable to\n> filemap_finalize, and we can get rid of the callbacks by doing\n> so. Is there any reason that the remaining rules should be in the\n> callbacks?\n\nGood idea! I changed the patch that way.\n\n> 0003: Thomas is propsing sort template. It could be used if committed.\n> \n> 0004:\n> \n> The names of many of the functions gets an additional word \"local\"\n> but I don't get the meaning clearly. but its about linguistic sense\n> and I'm not fit to that..\n> \n> -rewind_copy_file_range(const char *path, off_t begin, off_t end, bool trunc)\n> +local_fetch_file_range(rewind_source *source, const char *path, uint64 off,\n> \n> The function actually copying the soruce range to the target file. So\n> \"fetch\" might give somewhat different meaning, but its about\n> linguistic (omitted..).\n\nHmm. It is \"fetching\" the range from the source server, and writing it \nto the target. The term makes more sense with a libpq source. Perhaps \nthis function should be called \"local_copy_range\" or something, but it'd \nalso be nice to have \"fetch\" in the name because the function pointer \nit's assigned to is called \"queue_fetch_range\".\n\n> I'm going to continue reviewing this later.\n\nThanks! Attached is a new set of patches. The only meaningful change is \nin the 2nd patch, which I modified per your suggestion. Also, I moved \nthe logic to decide each file's fate into a new subroutine called \ndecide_file_action().\n\n- Heikki",
"msg_date": "Tue, 25 Aug 2020 16:32:02 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "On Tue, Aug 25, 2020 at 04:32:02PM +0300, Heikki Linnakangas wrote:\n> On 20/08/2020 11:32, Kyotaro Horiguchi wrote:\n> > 0002: Rewording that old->target and new->source makes the meaning far\n> > clearer. Moving decisions core code into filemap_finalize is\n> > reasonable.\n> > \n> > By the way, some of the rules are remaining in\n> > process_source/target_file. For example, pg_wal that is a symlink,\n> > or tmporary directories. and excluded files. The number of\n> > excluded files doesn't seem so large so it doesn't seem that the\n> > exclusion give advantage so much. They seem to me movable to\n> > filemap_finalize, and we can get rid of the callbacks by doing\n> > so. Is there any reason that the remaining rules should be in the\n> > callbacks?\n> \n> Good idea! I changed the patch that way.\n> \n> > 0003: Thomas is propsing sort template. It could be used if committed.\n> > \n> > 0004:\n> > \n> > The names of many of the functions gets an additional word \"local\"\n> > but I don't get the meaning clearly. but its about linguistic sense\n> > and I'm not fit to that..\n> > -rewind_copy_file_range(const char *path, off_t begin, off_t end, bool trunc)\n> > +local_fetch_file_range(rewind_source *source, const char *path, uint64 off,\n> > \n> > The function actually copying the soruce range to the target file. So\n> > \"fetch\" might give somewhat different meaning, but its about\n> > linguistic (omitted..).\n> \n> Hmm. It is \"fetching\" the range from the source server, and writing it to\n> the target. The term makes more sense with a libpq source. Perhaps this\n> function should be called \"local_copy_range\" or something, but it'd also be\n> nice to have \"fetch\" in the name because the function pointer it's assigned\n> to is called \"queue_fetch_range\".\n> \n> > I'm going to continue reviewing this later.\n> \n> Thanks! Attached is a new set of patches. The only meaningful change is in\n> the 2nd patch, which I modified per your suggestion. 
Also, I moved the logic\n> to decide each file's fate into a new subroutine called\n> decide_file_action().\n\nThe patch set fails to apply from 0002~, so this needs a rebase. I\nhave not looked at all that in detail, but no objections to applying\n0001 from me. It makes sense to move the sync subroutine for the\ntarget folder to file_ops.c.\n--\nMichael",
"msg_date": "Thu, 17 Sep 2020 13:58:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "Hello.\n\nIt needed rebasing. (Attached)\n\nAt Tue, 25 Aug 2020 16:32:02 +0300, Heikki Linnakangas <hlinnaka@iki.fi> wrote in \n> On 20/08/2020 11:32, Kyotaro Horiguchi wrote:\n> > 0002: Rewording that old->target and new->source makes the meaning far\n> \n> Good idea! I changed the patch that way.\n\nLooks Good.\n\n> > 0003: Thomas is propsing sort template. It could be used if committed.\n\n\n+\t * If the block is beyond the EOF in the source system, or the file doesn't\n+\t * doesn'exist in the source at all, we're going to truncate/remove it away\n\n\"the file doesn't doesn'exist\"\n\nI don't think filemap_finalize needs to iterate over filemap twice.\n\nhash_string_pointer is a copy of that of pg_verifybackup.c. Is it\nworth having in hashfn.h or .c?\n\n> --- a/src/bin/pg_rewind/pg_rewind.c\n> +++ b/src/bin/pg_rewind/pg_rewind.c\n> ...\n> +\tfilemap_t *filemap;\n> ..\n> +\tfilemap_init();\n> ...\n> +\tfilemap = filemap_finalize();\n\nI'm a bit confused by this, and realized that what filemap_init\ninitializes is *not* the filemap, but the filehash. So for example,\nthe name of the functions might should be something like this?\n\nfilehash_init()\nfilemap = filehash_finalyze()/create_filemap()\n\n\n> > 0004:\n> > The names of many of the functions gets an additional word \"local\"\n> > but I don't get the meaning clearly. but its about linguistic sense\n> > and I'm not fit to that..\n> > -rewind_copy_file_range(const char *path, off_t begin, off_t end, bool\n> > -trunc)\n> > +local_fetch_file_range(rewind_source *source, const char *path,\n> > uint64 off,\n> > The function actually copying the soruce range to the target file. So\n> > \"fetch\" might give somewhat different meaning, but its about\n> > linguistic (omitted..).\n> \n> Hmm. It is \"fetching\" the range from the source server, and writing it\n> to the target. The term makes more sense with a libpq source. 
Perhaps\n> this function should be called \"local_copy_range\" or something, but\n> it'd also be nice to have \"fetch\" in the name because the function\n> pointer it's assigned to is called \"queue_fetch_range\".\n\nThanks. Yeah, libpq_fetch_file makes sense. I agree with the name.\nThe refactoring looks good to me.\n\n> > I'm going to continue reviewing this later.\n> \n> Thanks! Attached is a new set of patches. The only meaningful change\n> is in the 2nd patch, which I modified per your suggestion. Also, I\n> moved the logic to decide each file's fate into a new subroutine\n> called decide_file_action().\n\nThat looks good.\n\n0005:\n\n+\t/*\n+\t * We don't intend do any updates. Put the connection in read-only mode\n+\t * to keep us honest.\n+\t */\n \trun_simple_command(conn, \"SET default_transaction_read_only = off\");\n\nThe comment has been wrong since it was added by 0004, but that's\nnot a problem since it was to be fixed by 0005. However, we need the\nvariable turned on in order to be really honest:p\n\n> /*\n> * Also check that full_page_writes is enabled. We can get torn pages if\n> * a page is modified while we read it with pg_read_binary_file(), and we\n> * rely on full page images to fix them.\n> */\n> str = run_simple_query(conn, \"SHOW full_page_writes\");\n> if (strcmp(str, \"on\") != 0)\n> \tpg_fatal(\"full_page_writes must be enabled in the source server\");\n> pg_free(str);\n\nThis is a part not changed by this patch set. But if we allow connecting\nto a standby, this check can be tricked by setting it off on the\nprimary and \"on\" on the standby (FWIW, though). Some protection\nmeasure might be necessary. 
(Maybe the standby should be restricted to\nhave the same value as the primary.)\n\n\n+\t\t\tthislen = Min(len, CHUNK_SIZE - prev->length);\n+\t\t\tsrc->request_queue[src->num_requests - 1].length += thislen;\n\nprev == &src->request_queue[src->num_requests - 1] here.\n\n\n+\t\tif (chunksize > rq->length)\n+\t\t{\n+\t\t\tpg_fatal(\"received more than requested for file \\\"%s\\\"\",\n+\t\t\t\t\t rq->path);\n+\t\t\t/* receiving less is OK, though */\n\nDon't we need to truncate the target file, though?\n\n\n+\t\t * Source is a local data directory. It should've shut down cleanly,\n+\t\t * and we must to the latest shutdown checkpoint.\n\n\"and we must to the\" => \"and we must replay to the\" ?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 18 Sep 2020 16:41:50 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "Hey Heikki,\n\nThanks for refactoring and making the code much easier to read!\n\nBefore getting into the code review for the patch, I wanted to know why\nwe don't use a Bitmapset for target_modified_pages?\n\nCode review:\n\n\n1. We need to update the comments for process_source_file and\nprocess_target_file. We don't decide the action on the file until later.\n\n\n2. Rename target_modified_pages to target_pages_to_overwrite?\ntarget_modified_pages can lead to confusion as to whether it includes pages\nthat were modified on the target but not even present in the source and\nthe other exclusionary cases. target_pages_to_overwrite is much clearer.\n\n\n3.\n\n> /*\n> * If this is a relation file, copy the modified blocks.\n> *\n> * This is in addition to any other changes.\n> */\n> iter = datapagemap_iterate(&entry->target_modified_pages);\n> while (datapagemap_next(iter, &blkno))\n> {\n> offset = blkno * BLCKSZ;\n>\n> source->queue_fetch_range(source, entry->path, offset, BLCKSZ);\n> }\n> pg_free(iter);\n\nCan we put this hunk into a static function overwrite_pages()?\n\n\n4.\n\n> * block that have changed in the target system. It makes note of all the\n> * changed blocks in the pagemap of the file.\n\nCan we replace the above with:\n\n> * block that has changed in the target system. It decides if the given\nblkno in the target relfile needs to be overwritten from the source.\n\n\n5.\n\n> /*\n> * Doesn't exist in either server. 
Why does it have an entry in the\n> * first place??\n> */\n> return FILE_ACTION_NONE;\n\nCan we delete the above hunk and add the following assert to the very\ntop of decide_file_action():\n\nAssert(entry->target_exists || entry->source_exists);\n\n\n6.\n\n> pg_fatal(\"unexpected page modification for directory or symbolic link \\\"%s\\\"\",\n> entry->path);\n\nCan we replace above with:\n\npg_fatal(\"unexpected page modification for non-regular file \\\"%s\\\"\",\nentry->path);\n\nThis way we can address the undefined file type.\n\n\n7. Please address the FIXME for the symlink case:\n/* FIXME: Check if it points to the same target? */\n\n\n8.\n\n* it anyway. But there's no harm in copying it now.)\n\nand\n\n* copy them here. But we don't know which scenario we're\n* dealing with, and there's no harm in copying the missing\n* blocks now, so do it now.\n\nCould you add a line or two explaining why there is \"no harm\" in these\ntwo hunks above?\n\n\n9. Can we add pg_control, /pgsql_tmp/... and .../pgsql_tmp.* and PG_VERSION\nfiles to check_file_excluded()?\n\n\n10.\n\n- * block that have changed in the target system. It makes note of all the\n+ * block that have changed in the target system. 
It makes note of all the\n\nWhitespace typo\n\n\n11.\n\n> * If the block is beyond the EOF in the source system, or the file doesn't\n> * doesn'exist\n\nTypo: Two doesnt's\n\n\n12.\n\n> /*\n> * This represents the final decisions on what to do with each file.\n> * 'actions' array contains an entry for each file, sorted in the order\n> * that their actions should executed.\n> */\n> typedef struct filemap_t\n> {\n> /* Summary information, filled by calculate_totals() */\n> uint64 total_size; /* total size of the source cluster */\n> uint64 fetch_size; /* number of bytes that needs to be copied */\n>\n> int nactions; /* size of 'actions' array */\n> file_entry_t *actions[FLEXIBLE_ARRAY_MEMBER];\n> } filemap_t;\n\nReplace nactions/actions with nentries/entries..clearer in intent as\nit is easier to reconcile the modified pages stuff to an entry rather\nthan an action. It could be:\n\n/*\n * This contains the final decisions on what to do with each file.\n * 'entries' array contains an entry for each file, sorted in the order\n * that their actions should executed.\n */\ntypedef struct filemap_t\n{\n/* Summary information, filled by calculate_totals() */\nuint64 total_size; /* total size of the source cluster */\nuint64 fetch_size; /* number of bytes that needs to be copied */\nint nentries; /* size of 'entries' array */\nfile_entry_t *entries[FLEXIBLE_ARRAY_MEMBER];\n} filemap_t;\n\n\n13.\n\n> filehash = filehash_create(1000, NULL);\n\nUse a constant for the 1000 in (FILEMAP_INITIAL_SIZE):\n\n\n14. queue_overwrite_range(), finish_overwrite() instead of\nqueue_fetch_range(), finish_fetch()? Similarly update\\\n*_fetch_file_range() and *_finish_fetch()\n\n\n15. 
Let's have local_source.c and libpq_source.c instead of *_fetch.c\n\n\n16.\n\n> conn = PQconnectdb(connstr_source);\n>\n> if (PQstatus(conn) == CONNECTION_BAD)\n> pg_fatal(\"could not connect to server: %s\",\n> PQerrorMessage(conn));\n>\n> if (showprogress)\n> pg_log_info(\"connected to server\");\n\n\nThe above hunk should be part of init_libpq_source(). Consequently,\ninit_libpq_source() should take a connection string instead of a conn.\n\n\n17.\n\n> if (conn)\n> {\n> PQfinish(conn);\n> conn = NULL;\n> }\n\nThe hunk above should be part of libpq_destroy()\n\n\n18.\n\n> /*\n> * Files are fetched max CHUNK_SIZE bytes at a time, and with a\n> * maximum of MAX_CHUNKS_PER_QUERY chunks in a single query.\n> */\n> #define CHUNK_SIZE (1024 * 1024)\n\nCan we rename CHUNK_SIZE to MAX_CHUNK_SIZE and update the comment?\n\n\n19.\n\n> typedef struct\n> {\n> const char *path; /* path relative to data directory root */\n> uint64 offset;\n> uint32 length;\n> } fetch_range_request;\n\noffset should be of type off_t\n\n20.\n\n> * Request to fetch (part of) a file in the source system, and write it\n> * the corresponding file in the target system.\n\nCan we change the above hunk to?\n\n> * Request to fetch (part of) a file in the source system, specified\n> * by an offset and length, and write it to the same offset in the\n> * corresponding target file.\n\n\n21.\n\n> * Fetche all the queued chunks and writes them to the target data directory.\n\nTypo in word \"fetch\".\n\n\nRegards,\nSoumyadeep\n\n\n",
"msg_date": "Sun, 20 Sep 2020 13:44:28 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "Thanks for the review! I'll post a new version shortly, with your \ncomments incorporated. More detailed response to a few of them below:\n\nOn 18/09/2020 10:41, Kyotaro Horiguchi wrote:\n> I don't think filemap_finalize needs to iterate over filemap twice.\n\nTrue, but I thought it's more clear that way, doing one thing at a time.\n\n> hash_string_pointer is a copy of that of pg_verifybackup.c. Is it\n> worth having in hashfn.h or .c?\n\nI think it's fine for now. Maybe in the future if more copies crop up.\n\n>> --- a/src/bin/pg_rewind/pg_rewind.c\n>> +++ b/src/bin/pg_rewind/pg_rewind.c\n>> ...\n>> +\tfilemap_t *filemap;\n>> ..\n>> +\tfilemap_init();\n>> ...\n>> +\tfilemap = filemap_finalize();\n> \n> I'm a bit confused by this, and realized that what filemap_init\n> initializes is *not* the filemap, but the filehash. So for example,\n> the name of the functions might should be something like this?\n> \n> filehash_init()\n> filemap = filehash_finalyze()/create_filemap()\n\nMy thinking was that filemap_* is the prefix for the operations in \nfilemap.c, hence filemap_init(). I can see the confusion, though, and I \nthink you're right that renaming would be good. I renamed them to \nfilehash_init(), and decide_file_actions(). I think those names make the \ncalling code in pg_rewind.c quite clear.\n\n>> /*\n>> * Also check that full_page_writes is enabled. We can get torn pages if\n>> * a page is modified while we read it with pg_read_binary_file(), and we\n>> * rely on full page images to fix them.\n>> */\n>> str = run_simple_query(conn, \"SHOW full_page_writes\");\n>> if (strcmp(str, \"on\") != 0)\n>> \tpg_fatal(\"full_page_writes must be enabled in the source server\");\n>> pg_free(str);\n> \n> This is a part not changed by this patch set. But If we allow to\n> connect to a standby, this check can be tricked by setting off on the\n> primary and \"on\" on the standby (FWIW, though). 
Some protection\n> measure might be necessary.\n\nGood point, the value in the primary is what matters. In fact, even when \nconnected to the primary, the value might change while pg_rewind is \nrunning. I'm not sure how we could tighten that up.\n\n> +\t\tif (chunksize > rq->length)\n> +\t\t{\n> +\t\t\tpg_fatal(\"received more than requested for file \\\"%s\\\"\",\n> +\t\t\t\t\t rq->path);\n> +\t\t\t/* receiving less is OK, though */\n> \n> Don't we need to truncate the target file, though?\n\nIf a file is truncated in the source while pg_rewind is running, there \nshould be a WAL record about the truncation that gets replayed when you \nstart the server after pg_rewind has finished. We could truncate the \nfile if we wanted to, but it's not necessary. I'll add a comment.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 24 Sep 2020 17:54:11 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "On 20/09/2020 23:44, Soumyadeep Chakraborty wrote:\n> Before getting into the code review for the patch, I wanted to know why\n> we don't use a Bitmapset for target_modified_pages?\n\nBitmapset is not available in client code. Perhaps it could be moved to \nsrc/common with some changes, but doesn't seem worth it until there's \nmore client code that would need it.\n\nI'm not sure that a bitmap is the best data structure for tracking the \nchanged blocks in the first place. A hash table might be better if there \nare only a few changed blocks, or something like \nsrc/backend/lib/integerset.c if there are many. But as long as the \nsimple bitmap gets the job done, let's keep it simple.\n\n> 2. Rename target_modified_pages to target_pages_to_overwrite?\n> target_modified_pages can lead to confusion as to whether it includes pages\n> that were modified on the target but not even present in the source and\n> the other exclusionary cases. target_pages_to_overwrite is much clearer.\n\nAgreed, I'll rename it.\n\nConceptually, while we're scanning source WAL, we're just making note of \nthe modified blocks. The decision on what to do with them happens only \nlater, in decide_file_action(). The difference is purely theoretical, \nthough. There is no real decision to be made, all the modified blocks \nwill be overwritten. So on the whole, I agree 'target_page_to_overwrite' \nis better.\n\n>> /*\n>> * If this is a relation file, copy the modified blocks.\n>> *\n>> * This is in addition to any other changes.\n>> */\n>> iter = datapagemap_iterate(&entry->target_modified_pages);\n>> while (datapagemap_next(iter, &blkno))\n>> {\n>> offset = blkno * BLCKSZ;\n>>\n>> source->queue_fetch_range(source, entry->path, offset, BLCKSZ);\n>> }\n>> pg_free(iter);\n> \n> Can we put this hunk into a static function overwrite_pages()?\n\nMeh, it's only about 10 lines of code, and one caller.\n\n> 4.\n> \n>> * block that have changed in the target system. 
It makes note of all the\n>> * changed blocks in the pagemap of the file.\n> \n> Can we replace the above with:\n> \n>> * block that has changed in the target system. It decides if the given\n> blkno in the target relfile needs to be overwritten from the source.\n\nOk. Again conceptually though, process_target_wal_block_change() just \ncollects information, and the decisions are made later. But you're right \nthat we do leave out truncated-away blocks here, so we are doing more \nthan just noting all the changed blocks.\n\n>> /*\n>> * Doesn't exist in either server. Why does it have an entry in the\n>> * first place??\n>> */\n>> return FILE_ACTION_NONE;\n> \n> Can we delete the above hunk and add the following assert to the very\n> top of decide_file_action():\n> \n> Assert(entry->target_exists || entry->source_exists);\n\nI would like to keep the check even when assertions are not enabled. \nI'll add an Assert(false) there.\n\n> 7. Please address the FIXME for the symlink case:\n> /* FIXME: Check if it points to the same target? */\n\nIt's not a new issue. Would be nice to fix, of course. I'm not sure what \nthe right thing to do would be. If you have e.g. replaced \npostgresql.conf with a symlink that points outside the data directory, \nwould it be appropriate to overwrite it? Or perhaps we should throw an \nerror? We also throw an error if a file is a symlink in the source but a \nregular file in the target, or vice versa.\n\n> 8.\n> \n> * it anyway. But there's no harm in copying it now.)\n> \n> and\n> \n> * copy them here. But we don't know which scenario we're\n> * dealing with, and there's no harm in copying the missing\n> * blocks now, so do it now.\n> \n> Could you add a line or two explaining why there is \"no harm\" in these\n> two hunks above?\n\nThe previous sentences explain that there's a WAL record covering them. \nSo they will be overwritten by WAL replay anyway. Does it need more \nexplanation?\n\n> 14. 
queue_overwrite_range(), finish_overwrite() instead of\n> queue_fetch_range(), finish_fetch()? Similarly update\\\n> *_fetch_file_range() and *_finish_fetch()\n> \n> \n> 15. Let's have local_source.c and libpq_source.c instead of *_fetch.c\n\nGood idea! And fetch.h -> rewind_source.h.\n\nI also moved the code in execute_file_actions() function to pg_rewind.c, \ninto a new function: perform_rewind(). It felt a bit silly to have just \nexecute_file_actions() in a file of its own. perform_rewind() now does \nall the modifications to the data directory, writing the backup file. \nExcept for writing the recovery config: that also needs to be done when \nthere's no rewind to do, so it makes sense to keep it separate. What do \nyou think?\n\n> 16.\n> \n>> conn = PQconnectdb(connstr_source);\n>>\n>> if (PQstatus(conn) == CONNECTION_BAD)\n>> pg_fatal(\"could not connect to server: %s\",\n>> PQerrorMessage(conn));\n>>\n>> if (showprogress)\n>> pg_log_info(\"connected to server\");\n> \n> \n> The above hunk should be part of init_libpq_source(). Consequently,\n> init_libpq_source() should take a connection string instead of a conn.\n\nThe libpq connection is also needed by WriteRecoveryConfig(), that's why \nit's not fully encapsulated in libpq_source.\n\n> 19.\n> \n>> typedef struct\n>> {\n>> const char *path; /* path relative to data directory root */\n>> uint64 offset;\n>> uint32 length;\n>> } fetch_range_request;\n> \n> offset should be of type off_t\n\nThe 'offset' argument to the queue_fetch_range function is uint64, and \nthe argument to the SQL-callable pg_read_binary_file() is int8, so it's \nconsistent with them. Then again, the 'len' argument to \nqueue_fetch_range() is a size_t, and to pg_read_binary_file() int8, so \nit's not fully consistent with that either. I'll try to make it more \nconsistent.\n\nThanks for the review! Attached is a new version of the patch set.\n\n- Heikki",
"msg_date": "Thu, 24 Sep 2020 20:27:22 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "On Thu, Sep 24, 2020 at 10:27 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >> /*\n> >> * If this is a relation file, copy the modified blocks.\n> >> *\n> >> * This is in addition to any other changes.\n> >> */\n> >> iter = datapagemap_iterate(&entry->target_modified_pages);\n> >> while (datapagemap_next(iter, &blkno))\n> >> {\n> >> offset = blkno * BLCKSZ;\n> >>\n> >> source->queue_fetch_range(source, entry->path, offset, BLCKSZ);\n> >> }\n> >> pg_free(iter);\n> >\n> > Can we put this hunk into a static function overwrite_pages()?\n>\n> Meh, it's only about 10 lines of code, and one caller.\n\nFair.\n\n>\n> > 7. Please address the FIXME for the symlink case:\n> > /* FIXME: Check if it points to the same target? */\n>\n> It's not a new issue. Would be nice to fix, of course. I'm not sure what\n> the right thing to do would be. If you have e.g. replaced\n> postgresql.conf with a symlink that points outside the data directory,\n> would it be appropriate to overwrite it? Or perhaps we should throw an\n> error? We also throw an error if a file is a symlink in the source but a\n> regular file in the target, or vice versa.\n>\n\nHmm, I can imagine a use case for 2 different symlink targets on the\nsource and target clusters. For example the primary's pg_wal directory\ncan have a different symlink target as compared to a standby's\n(different mount points on the same network maybe?). An end user might\nnot desire pg_rewind meddling with that setup or may desire pg_rewind to\ntreat the source as a source-of-truth with respect to this as well and\nwould want pg_rewind to overwrite the target's symlink. Maybe doing a\ncheck and emitting a warning with hint/detail is prudent here while\ntaking no action.\n\n\n> > 8.\n> >\n> > * it anyway. But there's no harm in copying it now.)\n> >\n> > and\n> >\n> > * copy them here. 
But we don't know which scenario we're\n> > * dealing with, and there's no harm in copying the missing\n> > * blocks now, so do it now.\n> >\n> > Could you add a line or two explaining why there is \"no harm\" in these\n> > two hunks above?\n>\n> The previous sentences explain that there's a WAL record covering them.\n> So they will be overwritten by WAL replay anyway. Does it need more\n> explanation?\n\nYeah you are right, that is reason enough.\n\n> > 14. queue_overwrite_range(), finish_overwrite() instead of\n> > queue_fetch_range(), finish_fetch()? Similarly update\\\n> > *_fetch_file_range() and *_finish_fetch()\n> >\n> >\n> > 15. Let's have local_source.c and libpq_source.c instead of *_fetch.c\n>\n> Good idea! And fetch.h -> rewind_source.h.\n\n+1. You might have missed the changes to rename \"fetch\" -> \"overwrite\"\nas was mentioned in 14.\n\n>\n> I also moved the code in execute_file_actions() function to pg_rewind.c,\n> into a new function: perform_rewind(). It felt a bit silly to have just\n> execute_file_actions() in a file of its own. perform_rewind() now does\n> all the modifications to the data directory, writing the backup file.\n> Except for writing the recovery config: that also needs to be when\n> there's no rewind to do, so it makes sense to keep it separate. What do\n> you think?\n\nI don't mind inlining that functionality into perform_rewind(). +1.\nHeads up: The function declaration for execute_file_actions() is still\nthere in rewind_source.h.\n\n> > 16.\n> >\n> >> conn = PQconnectdb(connstr_source);\n> >>\n> >> if (PQstatus(conn) == CONNECTION_BAD)\n> >> pg_fatal(\"could not connect to server: %s\",\n> >> PQerrorMessage(conn));\n> >>\n> >> if (showprogress)\n> >> pg_log_info(\"connected to server\");\n> >\n> >\n> > The above hunk should be part of init_libpq_source(). 
Consequently,\n> > init_libpq_source() should take a connection string instead of a conn.\n>\n> The libpq connection is also needed by WriteRecoveryConfig(), that's why\n> it's not fully encapsulated in libpq_source.\n\nAh. I find it pretty weird that we need to specify --source-server to\nhave --write-recovery-conf work. From the code, we only need the conn\nfor calling PQserverVersion(), something we can easily get by slurping\npg_controldata on the source side? Maybe we can remove this limitation?\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Thu, 24 Sep 2020 16:56:46 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "On 25/09/2020 02:56, Soumyadeep Chakraborty wrote:\n> On Thu, Sep 24, 2020 at 10:27 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>> 7. Please address the FIXME for the symlink case:\n>>> /* FIXME: Check if it points to the same target? */\n>>\n>> It's not a new issue. Would be nice to fix, of course. I'm not sure what\n>> the right thing to do would be. If you have e.g. replaced\n>> postgresql.conf with a symlink that points outside the data directory,\n>> would it be appropriate to overwrite it? Or perhaps we should throw an\n>> error? We also throw an error if a file is a symlink in the source but a\n>> regular file in the target, or vice versa.\n> \n> Hmm, I can imagine a use case for 2 different symlink targets on the\n> source and target clusters. For example the primary's pg_wal directory\n> can have a different symlink target as compared to a standby's\n> (different mount points on the same network maybe?). An end user might\n> not desire pg_rewind meddling with that setup or may desire pg_rewind to\n> treat the source as a source-of-truth with respect to this as well and\n> would want pg_rewind to overwrite the target's symlink. Maybe doing a\n> check and emitting a warning with hint/detail is prudent here while\n> taking no action.\n\nWe have special handling for 'pg_wal' to pretend that it's a regular \ndirectory (see process_source_file()), so that's taken care of. But if \nyou did a something similar with some other subdirectory, that would be \na problem.\n\n>>> 14. queue_overwrite_range(), finish_overwrite() instead of\n>>> queue_fetch_range(), finish_fetch()? Similarly update\\\n>>> *_fetch_file_range() and *_finish_fetch()\n>>>\n>>>\n>>> 15. Let's have local_source.c and libpq_source.c instead of *_fetch.c\n>>\n>> Good idea! And fetch.h -> rewind_source.h.\n> \n> +1. You might have missed the changes to rename \"fetch\" -> \"overwrite\"\n> as was mentioned in 14.\n\nI preferred the \"fetch\" nomenclature in those function names. 
They fetch \nand overwrite the file ranges, so 'fetch' still seems appropriate. \n\"fetch\" -> \"overwrite\" would make sense if you wanted to emphasize the \n\"overwrite\" part more. Or we could rename it to \"fetch_and_overwrite\". \nBut overall I think \"fetch\" is fine.\n\n>>> 16.\n>>>\n>>>> conn = PQconnectdb(connstr_source);\n>>>>\n>>>> if (PQstatus(conn) == CONNECTION_BAD)\n>>>> pg_fatal(\"could not connect to server: %s\",\n>>>> PQerrorMessage(conn));\n>>>>\n>>>> if (showprogress)\n>>>> pg_log_info(\"connected to server\");\n>>>\n>>>\n>>> The above hunk should be part of init_libpq_source(). Consequently,\n>>> init_libpq_source() should take a connection string instead of a conn.\n>>\n>> The libpq connection is also needed by WriteRecoveryConfig(), that's why\n>> it's not fully encapsulated in libpq_source.\n> \n> Ah. I find it pretty weird why we need to specify --source-server to\n> have ----write-recovery-conf work. From the code, we only need the conn\n> for calling PQserverVersion(), something we can easily get by slurping\n> pg_controldata on the source side? Maybe we can remove this limitation?\n\nYeah, perhaps. In another patch :-).\n\nI read through the patches one more time, fixed a bunch of typos and \nsuch, and pushed patches 1-4. I'm going to spend some more time on \ntesting the last patch. It allows using a standby server as the source, \nand we don't have any tests for that yet. Thanks for the review!\n\n- Heikki\n\n\n",
"msg_date": "Wed, 4 Nov 2020 11:23:58 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "On 04/11/2020 11:23, Heikki Linnakangas wrote:\n> I read through the patches one more time, fixed a bunch of typos and\n> such, and pushed patches 1-4. I'm going to spend some more time on\n> testing the last patch. It allows using a standby server as the source,\n> and we don't have any tests for that yet. Thanks for the review!\n\nDid some more testing, fixed one bug, and pushed.\n\nTo test this, I set up a cluster with one primary, a standby, and a \ncascaded standby. I launched a test workload against the primary that \ncreates tables, inserts rows, and drops tables continuously. In another \nshell, I promoted the cascaded standby, ran some updates on the promoted \nserver, and finally ran pg_rewind pointed at the standby, and started it \nagain as a cascaded standby. Repeat.\n\nAttached are the scripts I used. I edited them between test runs to test \nslightly different scenarios. I don't expect them to be very useful to \nanyone else, but the Internet is my backup.\n\nI did find one bug in the patch with that, so the time was well spent: \nthe code in process_queued_fetch_requests() got confused and errored \nout if a file was removed in the source system while pg_rewind was \nrunning. There was code to deal with that, but it was broken. Fixed that.\n\n- Heikki",
"msg_date": "Thu, 12 Nov 2020 14:58:02 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "Not sure if you noticed, but piculet has twice failed the\n007_standby_source.pl test that was added by 9c4f5192f:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2020-11-15%2006%3A00%3A11\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2020-11-13%2011%3A20%3A10\n\nand francolin failed once:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2020-11-12%2018%3A57%3A33\n\nThese failures look the same:\n\n# Failed test 'table content after rewind and insert: query result matches'\n# at t/007_standby_source.pl line 160.\n# got: 'in A\n# in A, before promotion\n# in A, after C was promoted\n# '\n# expected: 'in A\n# in A, before promotion\n# in A, after C was promoted\n# in A, after rewind\n# '\n# Looks like you failed 1 test of 3.\n[11:27:01] t/007_standby_source.pl ... \nDubious, test returned 1 (wstat 256, 0x100)\nFailed 1/3 subtests \n\nNow, I'm not sure what to make of that, but I can't help noticing that\npiculet uses --disable-atomics while francolin uses --disable-spinlocks.\nThat leads the mind towards some kind of low-level synchronization\nbug ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 15 Nov 2020 01:48:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "I wrote:\n> Not sure if you noticed, but piculet has twice failed the\n> 007_standby_source.pl test that was added by 9c4f5192f:\n> ...\n> Now, I'm not sure what to make of that, but I can't help noticing that\n> piculet uses --disable-atomics while francolin uses --disable-spinlocks.\n> That leads the mind towards some kind of low-level synchronization\n> bug ...\n\nOr, maybe it's less mysterious than that. The failure looks like we\nhave not waited long enough for the just-inserted row to get replicated\nto node C. That wait is implemented as\n\n\t$lsn = $node_a->lsn('insert');\n\t$node_b->wait_for_catchup('node_c', 'write', $lsn);\n\nwhich looks fishy ... shouldn't wait_for_catchup be told to\nwait for replay of that LSN, not just write-the-WAL?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 15 Nov 2020 02:07:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "On 15/11/2020 09:07, Tom Lane wrote:\n> I wrote:\n>> Not sure if you noticed, but piculet has twice failed the\n>> 007_standby_source.pl test that was added by 9c4f5192f:\n>> ...\n>> Now, I'm not sure what to make of that, but I can't help noticing that\n>> piculet uses --disable-atomics while francolin uses --disable-spinlocks.\n>> That leads the mind towards some kind of low-level synchronization\n>> bug ...\n> \n> Or, maybe it's less mysterious than that. The failure looks like we\n> have not waited long enough for the just-inserted row to get replicated\n> to node C. That wait is implemented as\n> \n> \t$lsn = $node_a->lsn('insert');\n> \t$node_b->wait_for_catchup('node_c', 'write', $lsn);\n> \n> which looks fishy ... shouldn't wait_for_catchup be told to\n> wait for replay of that LSN, not just write-the-WAL?\n\nYep, quite right. Fixed that way, thanks for the debugging!\n\n- Heikki\n\n\n",
"msg_date": "Sun, 15 Nov 2020 17:10:53 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-15 17:10:53 +0200, Heikki Linnakangas wrote:\n> Yep, quite right. Fixed that way, thanks for the debugging!\n\nI locally, on a heavily modified branch (AIO support), started to get\nconsistent failures in this test. I *suspect*, but am not sure, that\nit's the test's fault, not the fault of modifications.\n\nAs far as I can tell, after the pg_rewind call, there's no guarantee\nthat node_c has fully caught up to the 'in A, after C was promoted'\ninsertion on node a. Thus at the check_query() I sometimes get just 'in\nA, before promotion' back.\n\nAfter adding a wait that problem seems to be fixed. Here's what I did\n\ndiff --git i/src/bin/pg_rewind/t/007_standby_source.pl w/src/bin/pg_rewind/t/007_standby_source.pl\nindex f6abcc2d987..48898bef2f5 100644\n--- i/src/bin/pg_rewind/t/007_standby_source.pl\n+++ w/src/bin/pg_rewind/t/007_standby_source.pl\n@@ -88,6 +88,7 @@ $node_c->safe_psql('postgres', \"checkpoint\");\n # - you need to rewind.\n $node_a->safe_psql('postgres',\n \"INSERT INTO tbl1 VALUES ('in A, after C was promoted')\");\n+$lsn = $node_a->lsn('insert');\n \n # Also insert a new row in the standby, which won't be present in the\n # old primary.\n@@ -142,6 +143,8 @@ $node_primary = $node_c;\n # Run some checks to verify that C has been successfully rewound,\n # and connected back to follow B.\n \n+$node_b->wait_for_catchup('node_c', 'replay', $lsn);\n+\n check_query(\n 'SELECT * FROM tbl1',\n qq(in A\n\n\n- Andres\n\n\n",
"msg_date": "Thu, 19 Nov 2020 16:38:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "On 20/11/2020 02:38, Andres Freund wrote:\n> I locally, on a heavily modified branch (AIO support), started to get\n> consistent failures in this test. I *suspect*, but am not sure, that\n> it's the test's fault, not the fault of modifications.\n> \n> As far as I can tell, after the pg_rewind call, there's no guarantee\n> that node_c has fully caught up to the 'in A, after C was promoted'\n> insertion on node a. Thus at the check_query() I sometimes get just 'in\n> A, before promotion' back.\n> \n> After adding a wait that problem seems to be fixed. Here's what I did\n> \n> diff --git i/src/bin/pg_rewind/t/007_standby_source.pl w/src/bin/pg_rewind/t/007_standby_source.pl\n> index f6abcc2d987..48898bef2f5 100644\n> --- i/src/bin/pg_rewind/t/007_standby_source.pl\n> +++ w/src/bin/pg_rewind/t/007_standby_source.pl\n> @@ -88,6 +88,7 @@ $node_c->safe_psql('postgres', \"checkpoint\");\n> # - you need to rewind.\n> $node_a->safe_psql('postgres',\n> \"INSERT INTO tbl1 VALUES ('in A, after C was promoted')\");\n> +$lsn = $node_a->lsn('insert');\n> \n> # Also insert a new row in the standby, which won't be present in the\n> # old primary.\n> @@ -142,6 +143,8 @@ $node_primary = $node_c;\n> # Run some checks to verify that C has been successfully rewound,\n> # and connected back to follow B.\n> \n> +$node_b->wait_for_catchup('node_c', 'replay', $lsn);\n> +\n> check_query(\n> 'SELECT * FROM tbl1',\n> qq(in A\n\nYes, I was able to reproduce that by inserting a strategic sleep in the \ntest and pausing replication by attaching gdb to the walsender process.\n\nPushed a fix similar to your patch, but I put the wait_for_catchup() \nbefore running pg_rewind. The point of inserting the 'in A, after C was \npromoted' row is that it's present in B when pg_rewind runs.\n\nThanks!\n\n- Heikki",
"msg_date": "Fri, 20 Nov 2020 16:19:03 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "Hi,\n\nOn 2020-11-20 16:19:03 +0200, Heikki Linnakangas wrote:\n> Pushed a fix similar to your patch, but I put the wait_for_catchup() before\n> running pg_rewind. The point of inserting the 'in A, after C was promoted'\n> row is that it's present in B when pg_rewind runs.\n\nHm - don't we possibly need *both*? Since post pg_rewind recovery starts\nat the previous checkpoint, it's quite possible for C to get ready to\nanswer queries before that record has been replayed?\n\nThanks,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Nov 2020 09:14:20 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
},
{
"msg_contents": "On 20/11/2020 19:14, Andres Freund wrote:\n> Hi,\n> \n> On 2020-11-20 16:19:03 +0200, Heikki Linnakangas wrote:\n>> Pushed a fix similar to your patch, but I put the wait_for_catchup() before\n>> running pg_rewind. The point of inserting the 'in A, after C was promoted'\n>> row is that it's present in B when pg_rewind runs.\n> \n> Hm - don't we possibly need *both*? Since post pg_rewind recovery starts\n> at the previous checkpoint, it's quite possible for C to get ready to\n> answer queries before that record has been replayed?\n\nNo, C will not reach consistent state until all the WAL in the source \nsystem has been replayed. pg_rewind will set minRecoveryPoint to the \nminRecoveryPoint of the source system, after copying all the files. (Or \nits insert point, if it's not a standby server, but in this case it is). \nSame as when taking an online backup.\n\n- Heikki\n\n\n",
"msg_date": "Fri, 20 Nov 2020 23:09:34 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Refactor pg_rewind code and make it work against a standby"
}
] |
[
{
"msg_contents": "As a note I tried to use the deb repo today:\n\nhttps://www.postgresql.org/download/linux/debian/\n\nwith an old box on Wheezy.\nIt only seems to have binaries up to postgres 10.\n\nMight be nice to make a note on the web page so people realize some\ndistro's aren't supported fully instead of (if they're like me)\nwondering \"why don't these instructions work? It says to run apt-get\ninstall postgresql-12\" ....\n\nThanks!\n\n\n",
"msg_date": "Wed, 19 Aug 2020 11:04:27 -0600",
"msg_from": "Roger Pack <rogerdpack2@gmail.com>",
"msg_from_op": true,
"msg_subject": "deb repo doesn't have latest. or possible to update web page?"
},
{
"msg_contents": "On Wed, Aug 19, 2020 at 7:04 PM Roger Pack <rogerdpack2@gmail.com> wrote:\n\n> As a note I tried to use the deb repo today:\n>\n> https://www.postgresql.org/download/linux/debian/\n>\n> with an old box on Wheezy.\n> It only seems to have binaries up to postgres 10.\n>\n> Might be nice to make a note on the web page so people realize some\n> distro's aren't supported fully instead of (if they're like me)\n> wondering \"why don't these instructions work? It says to run apt-get\n> install postgresql-12\" ...\n>\n\nThe page lists which distros *are* supported. You can assume that anything\n*not* listed is unsupported.\n\nIn the case of wheezy, whatever was the latest when it stopped being\nsupported, is still there. Which I guess can cause some confusion if you\njust run the script without reading the note that's there. I'm unsure how\nto fix that though.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Wed, 19 Aug 2020 19:23:39 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: deb repo doesn't have latest. or possible to update web page?"
},
{
"msg_contents": "On Wed, Aug 19, 2020 at 11:23 AM Magnus Hagander <magnus@hagander.net> wrote:\n>\n>\n>\n> On Wed, Aug 19, 2020 at 7:04 PM Roger Pack <rogerdpack2@gmail.com> wrote:\n>>\n>> As a note I tried to use the deb repo today:\n>>\n>> https://www.postgresql.org/download/linux/debian/\n>>\n>> with an old box on Wheezy.\n>> It only seems to have binaries up to postgres 10.\n>>\n>> Might be nice to make a note on the web page so people realize some\n>> distro's aren't supported fully instead of (if they're like me)\n>> wondering \"why don't these instructions work? It says to run apt-get\n>> install postgresql-12\" ...\n>\n>\n> The page lists which distros *are* supported. You can assume that anything *not* listed is unsupported.\n>\n> In the case of wheezy, whatever was the latest when it stopped being supported, is still there. Which I guess can cause some confusing if you just run the script without reading the note that's there. I'm unsure how to fix that though.\n\nThe confusion in my case is I wasn't sure why my distro was named,\ntried the instructions and it...half worked.\n\nMaybe something like this?\n\nThe PostgreSQL apt repository supports the currently supported stable\nversions of Debian with the latest versions of Postgres:\n\nxxx\nxxx\n\nOlder versions of Debian may also be supported with older versions of Postgres.\n\n\n\nOr get rid of the wheezy side altogether?\n\nCheers!\n\n\n",
"msg_date": "Thu, 20 Aug 2020 14:38:38 -0600",
"msg_from": "Roger Pack <rogerdpack2@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: deb repo doesn't have latest. or possible to update web page?"
},
{
"msg_contents": "On Thu, Aug 20, 2020 at 10:38 PM Roger Pack <rogerdpack2@gmail.com> wrote:\n\n> On Wed, Aug 19, 2020 at 11:23 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >\n> >\n> >\n> > On Wed, Aug 19, 2020 at 7:04 PM Roger Pack <rogerdpack2@gmail.com>\n> wrote:\n> >>\n> >> As a note I tried to use the deb repo today:\n> >>\n> >> https://www.postgresql.org/download/linux/debian/\n> >>\n> >> with an old box on Wheezy.\n> >> It only seems to have binaries up to postgres 10.\n> >>\n> >> Might be nice to make a note on the web page so people realize some\n> >> distro's aren't supported fully instead of (if they're like me)\n> >> wondering \"why don't these instructions work? It says to run apt-get\n> >> install postgresql-12\" ...\n> >\n> >\n> > The page lists which distros *are* supported. You can assume that\n> anything *not* listed is unsupported.\n> >\n> > In the case of wheezy, whatever was the latest when it stopped being\n> supported, is still there. Which I guess can cause some confusing if you\n> just run the script without reading the note that's there. I'm unsure how\n> to fix that though.\n>\n> The confusion in my case is I wasn't sure why my distro was named,\n> tried the instructions and it...half worked.\n>\n> Maybe something like this?\n>\n> The PostgreSQL apt repository supports the currently supported stable\n> versions of Debian with the latest versions of Postgres:\n>\n> xxx\n> xxx\n>\n> Older versions of Debian may also be supported with older versions of\n> Postgres.\n>\n\nWell, they are not supported. The packages may be there, but they are not\nsupported. I think that's an important distinction. Maybe add something\nlike \"some packages may be available for older versions of Debian, but are\nnot supported\" or such?\n\n\n\nOr get rid of the wheezy side altogether?\n>\n\nOr move it to the archive. 
I'm not entirely sure why it's still there.\nChristoph?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 21 Aug 2020 19:12:49 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: deb repo doesn't have latest. or possible to update web page?"
},
{
"msg_contents": "Re: Magnus Hagander\n> > The confusion in my case is I wasn't sure why my distro was named,\n> > tried the instructions and it...half worked.\n> >\n> > Maybe something like this?\n> >\n> > The PostgreSQL apt repository supports the currently supported stable\n> > versions of Debian with the latest versions of Postgres:\n> >\n> > xxx\n> > xxx\n> >\n> > Older versions of Debian may also be supported with older versions of\n> > Postgres.\n> >\n> \n> Well, they are not supported. The packages may be there, but they are not\n> supported. I think that's an important distinction. Maybe add something\n> like \"some packages may be available for older versions of Debian, but are\n> not supported\" or such?\n\nWe have had \"Packages for older PostgreSQL versions and older\nDebian/Ubuntu distributions will continue to stay in the repository,\nbut will in most cases not be updated anymore.\" right in the first\nparagraph on the front-page since about the first revision.\n\n> Or get rid of the wheezy side altogether?\n> >\n> \n> Or move it to the archive. I'm not entirely sure why it's still there.\n> Christoph?\n\nLast time I checked there were still some docker containers using\nwheezy to pull older PG server versions. Though I guess it's time to\nput it to rest now that jessie is also EOL.\n\nChristoph\n\n\n",
"msg_date": "Mon, 24 Aug 2020 13:33:25 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: deb repo doesn't have latest. or possible to update web page?"
},
{
"msg_contents": "Re: Magnus Hagander\n> Well, they are not supported. The packages may be there, but they are not\n> supported. I think that's an important distinction. Maybe add something\n> like \"some packages may be available for older versions of Debian, but are\n> not supported\" or such?\n\nI'm talking about https://wiki.postgresql.org/wiki/Apt, which is where\nyou get redirected if you go to http://apt.postgresql.org.\n\nThe /download page should have a similar note I think.\n\nChristoph\n\n\n",
"msg_date": "Mon, 24 Aug 2020 13:34:45 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: deb repo doesn't have latest. or possible to update web page?"
},
{
"msg_contents": "On Mon, Aug 24, 2020 at 1:34 PM Christoph Berg <myon@debian.org> wrote:\n\n> Re: Magnus Hagander\n> > Well, they are not supported. The packages may be there, but they are not\n> > supported. I think that's an important distinction. Maybe add something\n> > like \"some packages may be available for older versions of Debian, but\n> are\n> > not supported\" or such?\n>\n> I'm talking about https://wiki.postgresql.org/wiki/Apt, which is where\n> you get redirected if you go to http://apt.postgresql.org.\n>\n> The /download page should have a similar note I think.\n>\n\nYeah, that's the one I was referring to, and that's the one that I think\nthe vast majority of people see.\n\nMaybe something similar to your line, but with a link, e.g. something like\n\"Note! Packages for older versions of PostgreSQL or the operating system\nmay remain in the repository, but are not supported and will in most cases\nnot be updated anymore. For details, see the apt repository wiki page.\"\n\nwith the wiki page being a link. I do like to get the word \"unsupported\" in\nthere as well, unless you think that's a bad idea?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 24 Aug 2020 14:22:10 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: deb repo doesn't have latest. or possible to update web page?"
},
{
"msg_contents": "On Mon, Aug 24, 2020 at 1:33 PM Christoph Berg <myon@debian.org> wrote:\n\n> > Or get rid of the wheezy side altogether?\n> > >\n> >\n> > Or move it to the archive. I'm not entirely sure why it's still there.\n> > Christoph?\n>\n> Last time I checked there were still some docker containers using\n> wheezy to pull older PG server versions. Though I guess it's time to\n> put it to rest now that jessie is also EOL.\n>\n\nYeah, for wheezy I think that's entirely reasonable -- there's a limit how\nmany \"old\" makes sense in oldoldoldoldoldstable :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 24 Aug 2020 14:23:30 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: deb repo doesn't have latest. or possible to update web page?"
}
]
[
{
"msg_contents": "Hi hackers,\n\nMore than month ago I have sent bug report to pgsql-bugs:\n\nhttps://www.postgresql.org/message-id/flat/5d335911-fb25-60cd-4aa7-a5bd0954aea0%40postgrespro.ru\n\nwith the proposed patch but have not received any response.\n\nI wonder if there is some other way to fix this issue and does somebody \nworking on it.\nWhile the added check itself is trivial (just one line) the total patch \nis not so small because I have added walker for\nplpgsql statements tree. It is not strictly needed in this case (it is \npossible to find some other way to determine that stored procedure\ncontains transaction control statements), but I hope such walker may be \nuseful in other cases.\n\nIn any case, I will be glad to receive any response,\nbecause this problem was reported by one of our customers and we need to \nprovide some fix.\nIt is better to include it in vanilla, rather than in our pgpro-ee fork.\n\nIf it is desirable, I can add this patch to commitfest.\n\nThanks in advance,\nKonstantin\n\n\n\n",
"msg_date": "Wed, 19 Aug 2020 20:22:50 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Problem with accessing TOAST data in stored procedures"
},
{
"msg_contents": "Hi\n\nst 19. 8. 2020 v 19:22 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n> Hi hackers,\n>\n> More than month ago I have sent bug report to pgsql-bugs:\n>\n>\n> https://www.postgresql.org/message-id/flat/5d335911-fb25-60cd-4aa7-a5bd0954aea0%40postgrespro.ru\n>\n> with the proposed patch but have not received any response.\n>\n> I wonder if there is some other way to fix this issue and does somebody\n> working on it.\n> While the added check itself is trivial (just one line) the total patch\n> is not so small because I have added walker for\n> plpgsql statements tree. It is not strictly needed in this case (it is\n> possible to find some other way to determine that stored procedure\n> contains transaction control statements), but I hope such walker may be\n> useful in other cases.\n>\n> In any case, I will be glad to receive any response,\n> because this problem was reported by one of our customers and we need to\n> provide some fix.\n> It is better to include it in vanilla, rather than in our pgpro-ee fork.\n>\n> If it is desirable, I can add this patch to commitfest.\n>\n\n\nI don't like this design. It is not effective to repeat the walker for\nevery execution. Introducing a walker just for this case looks like\noverengineering.\nPersonally I am not sure if a walker for plpgsql is a good idea (I thought\nabout it more times, when I wrote plpgsql_check). But anyway - there should\nbe good reason for introducing the walker and clean use case.\n\nIf you want to introduce stmt walker, then it should be a separate patch\nwith some benefit on plpgsql environment length.\n\nRegards\n\nPavel\n\n\n> Thanks in advance,\n> Konstantin\n>\n>",
"msg_date": "Wed, 19 Aug 2020 20:50:56 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
{
"msg_contents": "On 19.08.2020 21:50, Pavel Stehule wrote:\n> Hi\n>\n> st 19. 8. 2020 v 19:22 odesílatel Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>\n> Hi hackers,\n>\n> More than month ago I have sent bug report to pgsql-bugs:\n>\n> https://www.postgresql.org/message-id/flat/5d335911-fb25-60cd-4aa7-a5bd0954aea0%40postgrespro.ru\n>\n> with the proposed patch but have not received any response.\n>\n> I wonder if there is some other way to fix this issue and does\n> somebody\n> working on it.\n> While the added check itself is trivial (just one line) the total\n> patch\n> is not so small because I have added walker for\n> plpgsql statements tree. It is not strictly needed in this case\n> (it is\n> possible to find some other way to determine that stored procedure\n> contains transaction control statements), but I hope such walker\n> may be\n> useful in other cases.\n>\n> In any case, I will be glad to receive any response,\n> because this problem was reported by one of our customers and we\n> need to\n> provide some fix.\n> It is better to include it in vanilla, rather than in our pgpro-ee\n> fork.\n>\n> If it is desirable, I can add this patch to commitfest.\n>\n>\n>\n> I don't like this design. It is not effective to repeat the walker for \n> every execution. Introducing a walker just for this case looks like \n> overengineering.\n> Personally I am not sure if a walker for plpgsql is a good idea (I \n> thought about it more times, when I wrote plpgsql_check). But anyway - \n> there should be good reason for introducing the walker and clean use case.\n>\n> If you want to introduce stmt walker, then it should be a separate \n> patch with some benefit on plpgsql environment length.\n>\nIf you think that plpgsql statement walker is not needed, then I do not \ninsist.\nAre you going to commit your version of the patch?",
"msg_date": "Wed, 19 Aug 2020 21:59:55 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
{
"msg_contents": "st 19. 8. 2020 v 20:59 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 19.08.2020 21:50, Pavel Stehule wrote:\n>\n> Hi\n>\n> st 19. 8. 2020 v 19:22 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n>\n>> Hi hackers,\n>>\n>> More than month ago I have sent bug report to pgsql-bugs:\n>>\n>>\n>> https://www.postgresql.org/message-id/flat/5d335911-fb25-60cd-4aa7-a5bd0954aea0%40postgrespro.ru\n>>\n>> with the proposed patch but have not received any response.\n>>\n>> I wonder if there is some other way to fix this issue and does somebody\n>> working on it.\n>> While the added check itself is trivial (just one line) the total patch\n>> is not so small because I have added walker for\n>> plpgsql statements tree. It is not strictly needed in this case (it is\n>> possible to find some other way to determine that stored procedure\n>> contains transaction control statements), but I hope such walker may be\n>> useful in other cases.\n>>\n>> In any case, I will be glad to receive any response,\n>> because this problem was reported by one of our customers and we need to\n>> provide some fix.\n>> It is better to include it in vanilla, rather than in our pgpro-ee fork.\n>>\n>> If it is desirable, I can add this patch to commitfest.\n>>\n>\n>\n> I don't like this design. It is not effective to repeat the walker for\n> every execution. Introducing a walker just for this case looks like\n> overengineering.\n> Personally I am not sure if a walker for plpgsql is a good idea (I thought\n> about it more times, when I wrote plpgsql_check). 
But anyway -\n> there should be good reason for introducing the walker and clean use case.\n>\n> If you want to introduce stmt walker, then it should be a separate patch\n> with some benefit on plpgsql environment length.\n>\n> If you think that plpgsql statement walker is not needed, then I do not\n> insist.\n> Are you going to commit your version of the patch?\n>\n\nI am afraid so it needs significantly much more work :(. My version is\ncorrect just for the case that you describe, but it doesn't fix the\npossibility of the end of the transaction inside the nested CALL.\n\nSome like\n\nDO $$ DECLARE v_r record; BEGIN FOR v_r in SELECT data FROM toasted LOOP\nINSERT INTO toasted(data) VALUES(v_r.data) CALL check_and_commit();END\nLOOP;END;$$;\n\nProbably my patch (or your patch) will fix on 99%, but still there will be\na possibility of this issue. It is very similar to fixing releasing expr\nplans inside CALL statements. Current design of CALL statement is ugly\nworkaround - it is slow, and there is brutal memory leak. Fixing memory\nleak is not hard. Fixing every time replaning (and sometimes useless) needs\ndepper fix. Please check patch\nhttps://www.postgresql.org/message-id/attachment/112489/plpgsql-stmt_call-fix-2.patch\nMaybe this mechanism can be used for a clean fix of the problem mentioned\nin this thread.\n\nRegards\n\nPavel",
"msg_date": "Wed, 19 Aug 2020 21:20:51 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
{
"msg_contents": "On 19.08.2020 22:20, Pavel Stehule wrote:\n>\n>\n> st 19. 8. 2020 v 20:59 odesílatel Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>\n>\n>\n> On 19.08.2020 21:50, Pavel Stehule wrote:\n>> Hi\n>>\n>> st 19. 8. 2020 v 19:22 odesílatel Konstantin Knizhnik\n>> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>>\n>> napsal:\n>>\n>> Hi hackers,\n>>\n>> More than month ago I have sent bug report to pgsql-bugs:\n>>\n>> https://www.postgresql.org/message-id/flat/5d335911-fb25-60cd-4aa7-a5bd0954aea0%40postgrespro.ru\n>>\n>> with the proposed patch but have not received any response.\n>>\n>> I wonder if there is some other way to fix this issue and\n>> does somebody\n>> working on it.\n>> While the added check itself is trivial (just one line) the\n>> total patch\n>> is not so small because I have added walker for\n>> plpgsql statements tree. It is not strictly needed in this\n>> case (it is\n>> possible to find some other way to determine that stored\n>> procedure\n>> contains transaction control statements), but I hope such\n>> walker may be\n>> useful in other cases.\n>>\n>> In any case, I will be glad to receive any response,\n>> because this problem was reported by one of our customers and\n>> we need to\n>> provide some fix.\n>> It is better to include it in vanilla, rather than in our\n>> pgpro-ee fork.\n>>\n>> If it is desirable, I can add this patch to commitfest.\n>>\n>>\n>>\n>> I don't like this design. It is not effective to repeat the\n>> walker for every execution. Introducing a walker just for this\n>> case looks like overengineering.\n>> Personally I am not sure if a walker for plpgsql is a good idea\n>> (I thought about it more times, when I wrote plpgsql_check). 
But\n>> anyway - there should be good reason for introducing the walker\n>> and clean use case.\n>>\n>> If you want to introduce stmt walker, then it should be a\n>> separate patch with some benefit on plpgsql environment length.\n>>\n> If you think that plpgsql statement walker is not needed, then I\n> do not insist.\n> Are you going to commit your version of the patch?\n>\n>\n> I am afraid so it needs significantly much more work :(. My version is \n> correct just for the case that you describe, but it doesn't fix the \n> possibility of the end of the transaction inside the nested CALL.\n>\n> Some like\n>\n> DO $$ DECLARE v_r record; BEGIN FOR v_r in SELECT data FROM toasted \n> LOOP INSERT INTO toasted(data) VALUES(v_r.data) CALL \n> check_and_commit();END LOOP;END;$$;\n>\n> Probably my patch (or your patch) will fix on 99%, but still there \n> will be a possibility of this issue. It is very similar to fixing \n> releasing expr plans inside CALL statements. Current design of CALL \n> statement is ugly workaround - it is slow, and there is brutal memory \n> leak. Fixing memory leak is not hard. Fixing every time replaning (and \n> sometimes useless) needs depper fix. Please check patch \n> https://www.postgresql.org/message-id/attachment/112489/plpgsql-stmt_call-fix-2.patch \n> Maybe this mechanism can be used for a clean fix of the problem \n> mentioned in this thread.\n\nSorry for delay with answer.\nToday we have received another bug report from the client.\nAnd now as you warned, there was no direct call of COMMIT/ROLLBACK \nstatements in stored procedures, but instead of it it is calling some \nother pprocedures\nwhich I suspect contains some transaction control statements.\n\nI looked at the plpgsql-stmt_call-fix-2.patch \n<https://www.postgresql.org/message-id/attachment/112489/plpgsql-stmt_call-fix-2.patch>\nIt invalidates prepared plan in case of nested procedure call.\nBut here invalidation approach will not work. 
We have already prefetched \nrows and to access them we need snapshot.\nWe can not restore the same snapshot after CALL - it will be not correct.\nIn case of ATX (autonomous transactions supported by PgPro) we really \nsave/restore context after ATX. But transaction control semantic in \nprocedures is different:\nwe commit current transaction and start new one.\n\nSo I didn't find better solution than just slightly extend you patch and \nconsider any procedures containing CALLs as potentially performing \ntransaction control.\nI updated version of your patch.\nWhat do you think about it?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 18 Feb 2021 18:01:39 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
{
"msg_contents": "čt 18. 2. 2021 v 16:01 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 19.08.2020 22:20, Pavel Stehule wrote:\n>\n>\n>\n> st 19. 8. 2020 v 20:59 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n>\n>>\n>>\n>> On 19.08.2020 21:50, Pavel Stehule wrote:\n>>\n>> Hi\n>>\n>> st 19. 8. 2020 v 19:22 odesílatel Konstantin Knizhnik <\n>> k.knizhnik@postgrespro.ru> napsal:\n>>\n>>> Hi hackers,\n>>>\n>>> More than month ago I have sent bug report to pgsql-bugs:\n>>>\n>>>\n>>> https://www.postgresql.org/message-id/flat/5d335911-fb25-60cd-4aa7-a5bd0954aea0%40postgrespro.ru\n>>>\n>>> with the proposed patch but have not received any response.\n>>>\n>>> I wonder if there is some other way to fix this issue and does somebody\n>>> working on it.\n>>> While the added check itself is trivial (just one line) the total patch\n>>> is not so small because I have added walker for\n>>> plpgsql statements tree. It is not strictly needed in this case (it is\n>>> possible to find some other way to determine that stored procedure\n>>> contains transaction control statements), but I hope such walker may be\n>>> useful in other cases.\n>>>\n>>> In any case, I will be glad to receive any response,\n>>> because this problem was reported by one of our customers and we need to\n>>> provide some fix.\n>>> It is better to include it in vanilla, rather than in our pgpro-ee fork.\n>>>\n>>> If it is desirable, I can add this patch to commitfest.\n>>>\n>>\n>>\n>> I don't like this design. It is not effective to repeat the walker for\n>> every execution. Introducing a walker just for this case looks like\n>> overengineering.\n>> Personally I am not sure if a walker for plpgsql is a good idea (I\n>> thought about it more times, when I wrote plpgsql_check). 
But anyway -\n>> there should be good reason for introducing the walker and clean use case.\n>>\n>> If you want to introduce stmt walker, then it should be a separate patch\n>> with some benefit on plpgsql environment length.\n>>\n>> If you think that plpgsql statement walker is not needed, then I do not\n>> insist.\n>> Are you going to commit your version of the patch?\n>>\n>\n> I am afraid so it needs significantly much more work :(. My version is\n> correct just for the case that you describe, but it doesn't fix the\n> possibility of the end of the transaction inside the nested CALL.\n>\n> Some like\n>\n> DO $$ DECLARE v_r record; BEGIN FOR v_r in SELECT data FROM toasted LOOP\n> INSERT INTO toasted(data) VALUES(v_r.data) CALL check_and_commit();END\n> LOOP;END;$$;\n>\n> Probably my patch (or your patch) will fix on 99%, but still there will be\n> a possibility of this issue. It is very similar to fixing releasing expr\n> plans inside CALL statements. Current design of CALL statement is ugly\n> workaround - it is slow, and there is brutal memory leak. Fixing memory\n> leak is not hard. Fixing every time replaning (and sometimes useless) needs\n> depper fix. Please check patch\n> https://www.postgresql.org/message-id/attachment/112489/plpgsql-stmt_call-fix-2.patch\n> Maybe this mechanism can be used for a clean fix of the problem mentioned\n> in this thread.\n>\n>\n> Sorry for delay with answer.\n> Today we have received another bug report from the client.\n> And now as you warned, there was no direct call of COMMIT/ROLLBACK\n> statements in stored procedures, but instead of it it is calling some other\n> pprocedures\n> which I suspect contains some transaction control statements.\n>\n> I looked at the plpgsql-stmt_call-fix-2.patch\n> <https://www.postgresql.org/message-id/attachment/112489/plpgsql-stmt_call-fix-2.patch>\n> It invalidates prepared plan in case of nested procedure call.\n> But here invalidation approach will not work. 
We have already prefetched\n> rows and to access them we need snapshot.\n> We can not restore the same snapshot after CALL - it will be not correct.\n> In case of ATX (autonomous transactions supported by PgPro) we really\n> save/restore context after ATX. But transaction control semantic in\n> procedures is different:\n> we commit current transaction and start new one.\n>\n> So I didn't find better solution than just slightly extend you patch and\n> consider any procedures containing CALLs as potentially performing\n> transaction control.\n> I updated version of your patch.\n> What do you think about it?\n>\n\nThis has a negative impact on performance - and a lot of users use\nprocedures without transaction control. So it doesn't look like a good\nsolution.\n\nI am more concentrated on the Pg 14 release, where the work with SPI is\nredesigned, and I hope so this issue is fixed there. For older releases, I\ndon't know. Is this issue related to Postgres or it is related to PgPro\nonly? If it is related to community pg, then we should fix and we should\naccept not too good performance, because there is no better non invasive\nsolution. If it is PgPro issue (because there are ATX support) you can fix\nit (or you can try backport the patch\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee895a655ce4341546facd6f23e3e8f2931b96bf\n). You have more possibilities on PgPro code base.\n\nRegards\n\nPavel\n\n\n\n\n\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>",
"msg_date": "Thu, 18 Feb 2021 18:10:24 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
{
"msg_contents": "čt 18. 2. 2021 v 18:10 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> čt 18. 2. 2021 v 16:01 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n>\n>>\n>>\n>> On 19.08.2020 22:20, Pavel Stehule wrote:\n>>\n>>\n>>\n>> st 19. 8. 2020 v 20:59 odesílatel Konstantin Knizhnik <\n>> k.knizhnik@postgrespro.ru> napsal:\n>>\n>>>\n>>>\n>>> On 19.08.2020 21:50, Pavel Stehule wrote:\n>>>\n>>> Hi\n>>>\n>>> st 19. 8. 2020 v 19:22 odesílatel Konstantin Knizhnik <\n>>> k.knizhnik@postgrespro.ru> napsal:\n>>>\n>>>> Hi hackers,\n>>>>\n>>>> More than month ago I have sent bug report to pgsql-bugs:\n>>>>\n>>>>\n>>>> https://www.postgresql.org/message-id/flat/5d335911-fb25-60cd-4aa7-a5bd0954aea0%40postgrespro.ru\n>>>>\n>>>> with the proposed patch but have not received any response.\n>>>>\n>>>> I wonder if there is some other way to fix this issue and does somebody\n>>>> working on it.\n>>>> While the added check itself is trivial (just one line) the total patch\n>>>> is not so small because I have added walker for\n>>>> plpgsql statements tree. It is not strictly needed in this case (it is\n>>>> possible to find some other way to determine that stored procedure\n>>>> contains transaction control statements), but I hope such walker may be\n>>>> useful in other cases.\n>>>>\n>>>> In any case, I will be glad to receive any response,\n>>>> because this problem was reported by one of our customers and we need\n>>>> to\n>>>> provide some fix.\n>>>> It is better to include it in vanilla, rather than in our pgpro-ee fork.\n>>>>\n>>>> If it is desirable, I can add this patch to commitfest.\n>>>>\n>>>\n>>>\n>>> I don't like this design. It is not effective to repeat the walker for\n>>> every execution. Introducing a walker just for this case looks like\n>>> overengineering.\n>>> Personally I am not sure if a walker for plpgsql is a good idea (I\n>>> thought about it more times, when I wrote plpgsql_check). 
But anyway -\n>>> there should be good reason for introducing the walker and clean use case.\n>>>\n>>> If you want to introduce stmt walker, then it should be a separate patch\n>>> with some benefit on plpgsql environment length.\n>>>\n>>> If you think that plpgsql statement walker is not needed, then I do not\n>>> insist.\n>>> Are you going to commit your version of the patch?\n>>>\n>>\n>> I am afraid so it needs significantly much more work :(. My version is\n>> correct just for the case that you describe, but it doesn't fix the\n>> possibility of the end of the transaction inside the nested CALL.\n>>\n>> Some like\n>>\n>> DO $$ DECLARE v_r record; BEGIN FOR v_r in SELECT data FROM toasted LOOP\n>> INSERT INTO toasted(data) VALUES(v_r.data) CALL check_and_commit();END\n>> LOOP;END;$$;\n>>\n>> Probably my patch (or your patch) will fix on 99%, but still there will\n>> be a possibility of this issue. It is very similar to fixing releasing expr\n>> plans inside CALL statements. Current design of CALL statement is ugly\n>> workaround - it is slow, and there is brutal memory leak. Fixing memory\n>> leak is not hard. Fixing every time replaning (and sometimes useless) needs\n>> depper fix. 
Please check patch\n>> https://www.postgresql.org/message-id/attachment/112489/plpgsql-stmt_call-fix-2.patch\n>> Maybe this mechanism can be used for a clean fix of the problem mentioned\n>> in this thread.\n>>\n>>\n>> Sorry for delay with answer.\n>> Today we have received another bug report from the client.\n>> And now as you warned, there was no direct call of COMMIT/ROLLBACK\n>> statements in stored procedures, but instead of it it is calling some other\n>> pprocedures\n>> which I suspect contains some transaction control statements.\n>>\n>> I looked at the plpgsql-stmt_call-fix-2.patch\n>> <https://www.postgresql.org/message-id/attachment/112489/plpgsql-stmt_call-fix-2.patch>\n>> It invalidates prepared plan in case of nested procedure call.\n>> But here invalidation approach will not work. We have already prefetched\n>> rows and to access them we need snapshot.\n>> We can not restore the same snapshot after CALL - it will be not correct.\n>> In case of ATX (autonomous transactions supported by PgPro) we really\n>> save/restore context after ATX. But transaction control semantic in\n>> procedures is different:\n>> we commit current transaction and start new one.\n>>\n>> So I didn't find better solution than just slightly extend you patch and\n>> consider any procedures containing CALLs as potentially performing\n>> transaction control.\n>> I updated version of your patch.\n>> What do you think about it?\n>>\n>\n> This has a negative impact on performance - and a lot of users use\n> procedures without transaction control. So it doesn't look like a good\n> solution.\n>\n> I am more concentrated on the Pg 14 release, where the work with SPI is\n> redesigned, and I hope so this issue is fixed there. For older releases, I\n> don't know. Is this issue related to Postgres or it is related to PgPro\n> only? If it is related to community pg, then we should fix and we should\n> accept not too good performance, because there is no better non invasive\n> solution. 
If it is PgPro issue (because there are ATX support) you can fix\n> it (or you can try backport the patch\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee895a655ce4341546facd6f23e3e8f2931b96bf\n> ). You have more possibilities on PgPro code base.\n>\n\nI am sorry, maybe my reply was not (is not) correct - this issue was\nreported four months ago, and now I think more about your words about ATX,\nand I have no idea, how much it is related to community pg or to pgpro.\n\nI am sure so implementation of autonomous transactions is pretty hard, but\nthe described issue is related to PgPro implementation of ATX, and then it\nshould be fixed there. Disabling prefetching doesn't look like a good idea.\nYou try to fix the result, not the source of the problem - but I have not\nany idea, what is possible and what not, because I don't know how PgPro ATX\nis implemented.\n\n\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n>> --\n>> Konstantin Knizhnik\n>> Postgres Professional: http://www.postgrespro.com\n>> The Russian Postgres Company\n>>\n>>",
"msg_date": "Thu, 18 Feb 2021 18:25:53 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
{
"msg_contents": "On 18.02.2021 20:10, Pavel Stehule wrote:\n> This has a negative impact on performance - and a lot of users use \n> procedures without transaction control. So it doesn't look like a good \n> solution.\n>\n> I am more concentrated on the Pg 14 release, where the work with SPI \n> is redesigned, and I hope so this issue is fixed there. For older \n> releases, I don't know. Is this issue related to Postgres or it is \n> related to PgPro only? If it is related to community pg, then we \n> should fix and we should accept not too good performance, because \n> there is no better non invasive solution. If it is PgPro issue \n> (because there are ATX support) you can fix it (or you can try \n> backport the patch \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee895a655ce4341546facd6f23e3e8f2931b96bf \n> ). You have more possibilities on PgPro code base.\n\nSorry, it is not PgPro specific problem and recent master suffers from \nthis bug as well.\nIn the original bug report there was simple scenario of reproducing the \nproblem:\n\nCREATE TABLE toasted(id serial primary key, data text);\nINSERT INTO toasted(data) VALUES((SELECT string_agg(random()::text,':') \nFROM generate_series(1, 1000)));\nINSERT INTO toasted(data) VALUES((SELECT string_agg(random()::text,':') \nFROM generate_series(1, 1000)));\nDO $$ DECLARE v_r record; BEGIN FOR v_r in SELECT data FROM toasted LOOP \nINSERT INTO toasted(data) VALUES(v_r.data);COMMIT;END LOOP;END;$$;\n\n\n-- \n\nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 19 Feb 2021 09:51:23 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
{
"msg_contents": "pá 19. 2. 2021 v 7:51 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 18.02.2021 20:10, Pavel Stehule wrote:\n>\n> This has a negative impact on performance - and a lot of users use\n> procedures without transaction control. So it doesn't look like a good\n> solution.\n>\n> I am more concentrated on the Pg 14 release, where the work with SPI is\n> redesigned, and I hope so this issue is fixed there. For older releases, I\n> don't know. Is this issue related to Postgres or it is related to PgPro\n> only? If it is related to community pg, then we should fix and we should\n> accept not too good performance, because there is no better non invasive\n> solution. If it is PgPro issue (because there are ATX support) you can fix\n> it (or you can try backport the patch\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee895a655ce4341546facd6f23e3e8f2931b96bf\n> ). You have more possibilities on PgPro code base.\n>\n>\n> Sorry, it is not PgPro specific problem and recent master suffers from\n> this bug as well.\n> In the original bug report there was simple scenario of reproducing the\n> problem:\n>\n> CREATE TABLE toasted(id serial primary key, data text);\n> INSERT INTO toasted(data) VALUES((SELECT string_agg(random()::text,':')\n> FROM generate_series(1, 1000)));\n> INSERT INTO toasted(data) VALUES((SELECT string_agg(random()::text,':')\n> FROM generate_series(1, 1000)));\n> DO $$ DECLARE v_r record; BEGIN FOR v_r in SELECT data FROM toasted LOOP\n> INSERT INTO toasted(data) VALUES(v_r.data);COMMIT;END LOOP;END;$$;\n>\n\ncan you use new procedure_resowner?\n\nRegards\n\nPavel\n\n\n\n>\n>\n> --\n>\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>",
"msg_date": "Fri, 19 Feb 2021 08:14:17 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
{
"msg_contents": "> I am sorry, maybe my reply was not (is not) correct - this issue was \n> reported four months ago, and now I think more about your words about \n> ATX, and I have no idea, how much it is related to community pg or to \n> pgpro.\n>\n> I am sure so implementation of autonomous transactions is pretty hard, \n> but the described issue is related to PgPro implementation of ATX, and \n> then it should be fixed there. Disabling prefetching doesn't look like \n> a good idea. You try to fix the result, not the source of the problem \n> - but I have not any idea, what is possible and what not, because I \n> don't know how PgPro ATX is implemented.\n>\n\nI think there is some misunderstanding.\nSorry if my explanation was not clear.\n\nThis problem is not related with ATX and PgPro. Actually ATX correctly \nhandles this case (when iteration through query results cross transaction \ncommit).\nIt is the problem of transaction control in stored procedures in vanilla \nPostgres and it is not yet resolved.\nI refer to ATX in PgPro just as example of how this problem can be \nsolved with different transaction control model.\nBut this approach is not (IMHO) applicable to stored procedures.\n\nI do not think that this problem is so critical.\nNot so many people are using stored procedures (which were added to the \nPostgres not so long time ago),\nnot all of them are performing transaction control inside them and even \nless of them interleave loop over query results with transactions commits.\nBut there are such people and we have received corresponding bug reports.\nSo I think it should be somehow fixed.\n\nI do not know good solution of the problem.\nThere are three possibilities:\n1. Disable prefetch\n2. Keep snapshot (which seems to be incorrect)\n3. 
Materialize prefetched tuples before commit (seems to be non-trivial)\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 19 Feb 2021 10:17:25 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
{
"msg_contents": "On 19.02.2021 10:14, Pavel Stehule wrote:\n>\n>\n> pá 19. 2. 2021 v 7:51 odesílatel Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>\n>\n>\n> On 18.02.2021 20:10, Pavel Stehule wrote:\n>> This has a negative impact on performance - and a lot of users\n>> use procedures without transaction control. So it doesn't look\n>> like a good solution.\n>>\n>> I am more concentrated on the Pg 14 release, where the work with\n>> SPI is redesigned, and I hope so this issue is fixed there. For\n>> older releases, I don't know. Is this issue related to Postgres\n>> or it is related to PgPro only? If it is related to community pg,\n>> then we should fix and we should accept not too good performance,\n>> because there is no better non invasive solution. If it is PgPro\n>> issue (because there are ATX support) you can fix it (or you can\n>> try backport the patch\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee895a655ce4341546facd6f23e3e8f2931b96bf\n>> ). 
You have more possibilities on PgPro code base.\n>\n> Sorry, it is not PgPro specific problem and recent master suffers\n> from this bug as well.\n> In the original bug report there was simple scenario of\n> reproducing the problem:\n>\n> CREATE TABLE toasted(id serial primary key, data text);\n> INSERT INTO toasted(data) VALUES((SELECT\n> string_agg(random()::text,':') FROM generate_series(1, 1000)));\n> INSERT INTO toasted(data) VALUES((SELECT\n> string_agg(random()::text,':') FROM generate_series(1, 1000)));\n> DO $$ DECLARE v_r record; BEGIN FOR v_r in SELECT data FROM\n> toasted LOOP INSERT INTO toasted(data) VALUES(v_r.data);COMMIT;END\n> LOOP;END;$$;\n>\n>\n> can you use new procedure_resowner?\n>\nSorry, I do not understand your suggestion.\nHow can procedure_resowner help to solve this problem?\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 19 Feb 2021 10:39:46 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
{
"msg_contents": "pá 19. 2. 2021 v 8:17 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n> I am sorry, maybe my reply was not (is not) correct - this issue was\n> reported four months ago, and now I think more about your words about ATX,\n> and I have no idea, how much it is related to community pg or to pgpro.\n>\n> I am sure so implementation of autonomous transactions is pretty hard, but\n> the described issue is related to PgPro implementation of ATX, and then it\n> should be fixed there. Disabling prefetching doesn't look like a good idea.\n> You try to fix the result, not the source of the problem - but I have not\n> any idea, what is possible and what not, because I don't know how PgPro ATX\n> is implemented.\n>\n>\n> I think there is some misunderstanding.\n> Sorry if I my explanation was not clear.\n>\n> This problem is not related with ATX and PgPro. Actually ATX correctly\n> handle this case (when iteration through query results cross transaction\n> commit).\n> It is the problem of transaction control in stored procedures in vanilla\n> Postgres and it is not yet resolved.\n> I refer to ATX in PgPro just as example of how this problem can be solved\n> with different transaction control model.\n> But this approach is not (IMHO) applicable to stored procedures.\n>\n> I do not think that this problem is so critical.\n> Not so many people are using stored procedures (which were added to the\n> Postgres not so long time ago),\n> not all of them are performing transaction control inside them and even\n> less of them interleave loop over query results with transactions commits.\n> But there are such people and we have received correspondent bug reports.\n> So I think it should be somehow fixed.\n>\n> I do not know good solution of the problem.\n> There are three possibilities:\n> 1. Disable prefetch\n> 2. Keep snapshot (which seems to be incorrect)\n> 3. 
Materialize prefetched tuples before commit (seems to be non-trivial)\n>\n>\nI am not sure if disabling prefetch for this case is the correct solution.\nProbably not if you got a new snapshot, then the cursor will be\n\"sensitive\", but other Postgres cursors are \"insensitive\".\n\nImplementation of materialization should not be very hard - you will do\nonly copy tuples to some local buffers, but it doesn't say if the result\nwill be correct, because you mix more snapshots.\n\nSo keeping snapshots looks like a more correct solution - although there\ncan be inconsistency against current snapshot, the result is very similar\nto full materialization.\n\nRegards\n\nPavel\n\n\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\npá 19. 2. 2021 v 8:17 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru> napsal:\n\n\n\n\n\nI am sorry, maybe my reply was not (is not) correct -\n this issue was reported four months ago, and now I think\n more about your words about ATX, and I have no idea, how\n much it is related to community pg or to pgpro. \n\n\n\nI am sure so implementation of autonomous transactions is\n pretty hard, but the described issue is related to PgPro\n implementation of ATX, and then it should be fixed there.\n Disabling prefetching doesn't look like a good idea. You try\n to fix the result, not the source of the problem - but I\n have not any idea, what is possible and what not, because I\n don't know how PgPro ATX is implemented.\n\n\n\n\n\n\n I think there is some misunderstanding.\n Sorry if I my explanation was not clear.\n\n This problem is not related with ATX and PgPro. 
Actually ATX\n correctly handle this case (when iteration through query results\n cross transaction commit).\n It is the problem of transaction control in stored procedures in\n vanilla Postgres and it is not yet resolved.\n I refer to ATX in PgPro just as example of how this problem can be\n solved with different transaction control model.\n But this approach is not (IMHO) applicable to stored procedures.\n\n I do not think that this problem is so critical.\n Not so many people are using stored procedures (which were added to\n the Postgres not so long time ago),\n not all of them are performing transaction control inside them and\n even less of them interleave loop over query results with\n transactions commits.\n But there are such people and we have received correspondent bug\n reports.\n So I think it should be somehow fixed.\n\n I do not know good solution of the problem.\n There are three possibilities:\n 1. Disable prefetch \n 2. Keep snapshot (which seems to be incorrect)\n 3. Materialize prefetched tuples before commit (seems to be\n non-trivial)\nI am not sure if disabling prefetch for this case is the correct solution. Probably not if you got a new snapshot, then the cursor will be \"sensitive\", but other Postgres cursors are \"insensitive\". Implementation of materialization should not be very hard - you will do only copy tuples to some local buffers, but it doesn't say if the result will be correct, because you mix more snapshots.So keeping snapshots looks like a more correct solution - although there can be inconsistency against current snapshot, the result is very similar to full materialization.RegardsPavel \n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 19 Feb 2021 08:43:07 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
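The three options listed above can be illustrated with a toy model (plain Python for illustration only, not PostgreSQL code; `ToastStore`, `run_loop`, and the other names are invented). Prefetched rows that merely reference out-of-line storage become unreadable once a commit releases the snapshot that storage depended on, which is exactly what option 3, materializing before commit, avoids:

```python
# Toy model of the prefetch problem and of option 3
# ("materialize prefetched tuples before commit"). Illustration only.

class ToastStore:
    """Out-of-line storage readable only while a snapshot is held."""
    def __init__(self):
        self.chunks = {}
        self.snapshot_valid = True

    def put(self, oid, value):
        self.chunks[oid] = value

    def fetch(self, oid):
        if not self.snapshot_valid:
            # Analogous to the "no known snapshots" error in the report.
            raise RuntimeError("no known snapshots")
        return self.chunks[oid]

    def commit(self):
        # Committing invalidates the snapshot the prefetched pointers rely on.
        self.snapshot_valid = False


def run_loop(store, prefetched_oids, materialize):
    """Iterate prefetched rows, committing after each one."""
    results = []
    if materialize:
        # Option 3: detoast/copy everything while the snapshot is still valid.
        prefetched = [store.fetch(oid) for oid in prefetched_oids]
        for value in prefetched:
            results.append(value)
            store.commit()
    else:
        # Buggy variant: only pointers are kept; the second fetch happens
        # after a commit has already invalidated the snapshot.
        for oid in prefetched_oids:
            results.append(store.fetch(oid))
            store.commit()
    return results


store = ToastStore()
store.put(1, "row-1")
store.put(2, "row-2")
try:
    run_loop(store, [1, 2], materialize=False)
except RuntimeError as e:
    print("without materialization:", e)

store2 = ToastStore()
store2.put(1, "row-1")
store2.put(2, "row-2")
print("with materialization:", run_loop(store2, [1, 2], materialize=True))
```

The model deliberately ignores Pavel's separate objection that materialized batches taken under different snapshots can still be mutually inconsistent; it only shows why unmaterialized pointers fail outright.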
{
"msg_contents": "pá 19. 2. 2021 v 8:39 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 19.02.2021 10:14, Pavel Stehule wrote:\n>\n>\n>\n> pá 19. 2. 2021 v 7:51 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n>\n>>\n>>\n>> On 18.02.2021 20:10, Pavel Stehule wrote:\n>>\n>> This has a negative impact on performance - and a lot of users use\n>> procedures without transaction control. So it doesn't look like a good\n>> solution.\n>>\n>> I am more concentrated on the Pg 14 release, where the work with SPI is\n>> redesigned, and I hope so this issue is fixed there. For older releases, I\n>> don't know. Is this issue related to Postgres or it is related to PgPro\n>> only? If it is related to community pg, then we should fix and we should\n>> accept not too good performance, because there is no better non invasive\n>> solution. If it is PgPro issue (because there are ATX support) you can fix\n>> it (or you can try backport the patch\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee895a655ce4341546facd6f23e3e8f2931b96bf\n>> ). 
You have more possibilities on PgPro code base.\n>>\n>>\n>> Sorry, it is not PgPro specific problem and recent master suffers from\n>> this bug as well.\n>> In the original bug report there was simple scenario of reproducing the\n>> problem:\n>>\n>> CREATE TABLE toasted(id serial primary key, data text);\n>> INSERT INTO toasted(data) VALUES((SELECT string_agg(random()::text,':')\n>> FROM generate_series(1, 1000)));\n>> INSERT INTO toasted(data) VALUES((SELECT string_agg(random()::text,':')\n>> FROM generate_series(1, 1000)));\n>> DO $$ DECLARE v_r record; BEGIN FOR v_r in SELECT data FROM toasted LOOP\n>> INSERT INTO toasted(data) VALUES(v_r.data);COMMIT;END LOOP;END;$$;\n>>\n>\n> can you use new procedure_resowner?\n>\n> Sorry, I do not understanf your suggestion.\n> How procedure_resowner can help to solve this problem?\n>\n\nThis is just an idea - I think the most correct with zero performance\nimpact is keeping snapshot, and this can be stored in procedure_resowner.\n\nThe fundamental question is if we want or allow more snapshots per query.\nThe implementation is a secondary issue.\n\nPavel\n\n\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\npá 19. 2. 2021 v 8:39 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru> napsal:\n\n\n\nOn 19.02.2021 10:14, Pavel Stehule\n wrote:\n\n\n\n\n\n\n\npá 19. 2. 2021 v 7:51\n odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n napsal:\n\n\n \n\nOn 18.02.2021 20:10, Pavel Stehule wrote:\n\n\nThis has a negative impact on performance\n - and a lot of users use procedures without\n transaction control. So it doesn't look like a good\n solution.\n \n\nI am more concentrated on the\n Pg 14 release, where the work with SPI is\n redesigned, and I hope so this issue is fixed there.\n For older releases, I don't know. 
Is this issue\n related to Postgres or it is related to PgPro only?\n If it is related to community pg, then we should fix\n and we should accept not too good performance,\n because there is no better non invasive solution. If\n it is PgPro issue (because there are ATX support)\n you can fix it (or you can try backport the patch https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee895a655ce4341546facd6f23e3e8f2931b96bf\n ). You have more possibilities on PgPro code base. \n\n\n\n\n Sorry, it is not PgPro specific problem and recent master\n suffers from this bug as well.\n In the original bug report there was simple scenario of\n reproducing the problem:\n\n CREATE TABLE toasted(id serial primary key, data text);\n INSERT INTO toasted(data) VALUES((SELECT\n string_agg(random()::text,':') FROM generate_series(1,\n 1000)));\n INSERT INTO toasted(data) VALUES((SELECT\n string_agg(random()::text,':') FROM generate_series(1,\n 1000)));\n DO $$ DECLARE v_r record; BEGIN FOR v_r in SELECT data\n FROM toasted LOOP INSERT INTO toasted(data)\n VALUES(v_r.data);COMMIT;END LOOP;END;$$;\n\n\n\n\ncan you use new procedure_resowner?\n\n\n\n\n Sorry, I do not understanf your suggestion.\n How procedure_resowner can help to solve this problem?This is just an idea - I think the most correct with zero performance impact is keeping snapshot, and this can be stored in procedure_resowner. The fundamental question is if we want or allow more snapshots per query. The implementation is a secondary issue.Pavel\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 19 Feb 2021 08:47:19 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
{
"msg_contents": "On 19.02.2021 10:47, Pavel Stehule wrote:\n>\n>\n> pá 19. 2. 2021 v 8:39 odesílatel Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>\n>\n>\n> On 19.02.2021 10:14, Pavel Stehule wrote:\n>>\n>>\n>> pá 19. 2. 2021 v 7:51 odesílatel Konstantin Knizhnik\n>> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>>\n>> napsal:\n>>\n>>\n>>\n>> On 18.02.2021 20:10, Pavel Stehule wrote:\n>>> This has a negative impact on performance - and a lot of\n>>> users use procedures without transaction control. So it\n>>> doesn't look like a good solution.\n>>>\n>>> I am more concentrated on the Pg 14 release, where the work\n>>> with SPI is redesigned, and I hope so this issue is fixed\n>>> there. For older releases, I don't know. Is this issue\n>>> related to Postgres or it is related to PgPro only? If it is\n>>> related to community pg, then we should fix and we should\n>>> accept not too good performance, because there is no better\n>>> non invasive solution. If it is PgPro issue (because there\n>>> are ATX support) you can fix it (or you can try backport the\n>>> patch\n>>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee895a655ce4341546facd6f23e3e8f2931b96bf\n>>> ). 
You have more possibilities on PgPro code base.\n>>\n>> Sorry, it is not PgPro specific problem and recent master\n>> suffers from this bug as well.\n>> In the original bug report there was simple scenario of\n>> reproducing the problem:\n>>\n>> CREATE TABLE toasted(id serial primary key, data text);\n>> INSERT INTO toasted(data) VALUES((SELECT\n>> string_agg(random()::text,':') FROM generate_series(1, 1000)));\n>> INSERT INTO toasted(data) VALUES((SELECT\n>> string_agg(random()::text,':') FROM generate_series(1, 1000)));\n>> DO $$ DECLARE v_r record; BEGIN FOR v_r in SELECT data FROM\n>> toasted LOOP INSERT INTO toasted(data)\n>> VALUES(v_r.data);COMMIT;END LOOP;END;$$;\n>>\n>>\n>> can you use new procedure_resowner?\n>>\n> Sorry, I do not understanf your suggestion.\n> How procedure_resowner can help to solve this problem?\n>\n>\n> This is just an idea - I think the most correct with zero performance \n> impact is keeping snapshot, and this can be stored in procedure_resowner.\n>\n> The fundamental question is if we want or allow more snapshots per \n> query. The implementation is a secondary issue.\n\nI wonder if it is correct from logical point of view.\nIf we commit transaction in stored procedure, then we actually \nimplicitly start new transaction.\nAnd new transaction should have new snapshot. Otherwise its behavior \nwill change.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 19.02.2021 10:47, Pavel Stehule\n wrote:\n\n\n\n\n\n\n\n\npá 19. 2. 2021 v 8:39\n odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n napsal:\n\n\n \n\nOn 19.02.2021 10:14, Pavel Stehule wrote:\n\n\n\n\n\n\n\npá 19. 2. 2021\n v 7:51 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n napsal:\n\n\n \n\nOn 18.02.2021 20:10, Pavel Stehule wrote:\n\n\nThis has a negative impact on\n performance - and a lot of users use\n procedures without transaction control. 
So\n it doesn't look like a good solution.\n \n\nI am more\n concentrated on the Pg 14 release, where\n the work with SPI is redesigned, and I\n hope so this issue is fixed there. For\n older releases, I don't know. Is this\n issue related to Postgres or it is related\n to PgPro only? If it is related to\n community pg, then we should fix and we\n should accept not too good performance,\n because there is no better non invasive\n solution. If it is PgPro issue (because\n there are ATX support) you can fix it (or\n you can try backport the patch https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee895a655ce4341546facd6f23e3e8f2931b96bf\n ). You have more possibilities on PgPro\n code base. \n\n\n\n\n Sorry, it is not PgPro specific problem and\n recent master suffers from this bug as well.\n In the original bug report there was simple\n scenario of reproducing the problem:\n\n CREATE TABLE toasted(id serial primary key, data\n text);\n INSERT INTO toasted(data) VALUES((SELECT\n string_agg(random()::text,':') FROM\n generate_series(1, 1000)));\n INSERT INTO toasted(data) VALUES((SELECT\n string_agg(random()::text,':') FROM\n generate_series(1, 1000)));\n DO $$ DECLARE v_r record; BEGIN FOR v_r in\n SELECT data FROM toasted LOOP INSERT INTO\n toasted(data) VALUES(v_r.data);COMMIT;END\n LOOP;END;$$;\n\n\n\n\ncan you use new procedure_resowner?\n\n\n\n\n Sorry, I do not understanf your suggestion.\n How procedure_resowner can help to solve this problem?\n\n\n\n\nThis is just an idea - I think the most correct with zero\n performance impact is keeping snapshot, and this can be\n stored in procedure_resowner. \n\n\n\nThe fundamental question is if we want or allow more\n snapshots per query. The implementation is a secondary\n issue.\n\n\n\n\n I wonder if it is correct from logical point of view.\n If we commit transaction in stored procedure, then we actually\n implicitly start new transaction.\n And new transaction should have new snapshot. 
Otherwise its behavior\n will change.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 19 Feb 2021 11:08:10 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
{
"msg_contents": "pá 19. 2. 2021 v 9:08 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 19.02.2021 10:47, Pavel Stehule wrote:\n>\n>\n>\n> pá 19. 2. 2021 v 8:39 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n>\n>>\n>>\n>> On 19.02.2021 10:14, Pavel Stehule wrote:\n>>\n>>\n>>\n>> pá 19. 2. 2021 v 7:51 odesílatel Konstantin Knizhnik <\n>> k.knizhnik@postgrespro.ru> napsal:\n>>\n>>>\n>>>\n>>> On 18.02.2021 20:10, Pavel Stehule wrote:\n>>>\n>>> This has a negative impact on performance - and a lot of users use\n>>> procedures without transaction control. So it doesn't look like a good\n>>> solution.\n>>>\n>>> I am more concentrated on the Pg 14 release, where the work with SPI is\n>>> redesigned, and I hope so this issue is fixed there. For older releases, I\n>>> don't know. Is this issue related to Postgres or it is related to PgPro\n>>> only? If it is related to community pg, then we should fix and we should\n>>> accept not too good performance, because there is no better non invasive\n>>> solution. If it is PgPro issue (because there are ATX support) you can fix\n>>> it (or you can try backport the patch\n>>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee895a655ce4341546facd6f23e3e8f2931b96bf\n>>> ). 
You have more possibilities on PgPro code base.\n>>>\n>>>\n>>> Sorry, it is not PgPro specific problem and recent master suffers from\n>>> this bug as well.\n>>> In the original bug report there was simple scenario of reproducing the\n>>> problem:\n>>>\n>>> CREATE TABLE toasted(id serial primary key, data text);\n>>> INSERT INTO toasted(data) VALUES((SELECT string_agg(random()::text,':')\n>>> FROM generate_series(1, 1000)));\n>>> INSERT INTO toasted(data) VALUES((SELECT string_agg(random()::text,':')\n>>> FROM generate_series(1, 1000)));\n>>> DO $$ DECLARE v_r record; BEGIN FOR v_r in SELECT data FROM toasted LOOP\n>>> INSERT INTO toasted(data) VALUES(v_r.data);COMMIT;END LOOP;END;$$;\n>>>\n>>\n>> can you use new procedure_resowner?\n>>\n>> Sorry, I do not understanf your suggestion.\n>> How procedure_resowner can help to solve this problem?\n>>\n>\n> This is just an idea - I think the most correct with zero performance\n> impact is keeping snapshot, and this can be stored in procedure_resowner.\n>\n> The fundamental question is if we want or allow more snapshots per query.\n> The implementation is a secondary issue.\n>\n>\n> I wonder if it is correct from logical point of view.\n> If we commit transaction in stored procedure, then we actually implicitly\n> start new transaction.\n> And new transaction should have new snapshot. Otherwise its behavior will\n> change.\n>\n\nI have no problem with this. I have a problem with cycle implementation -\nwhen I iterate over some result, then this result should be consistent over\nall cycles. In other cases, the behaviour is not deterministic.\n\n\n\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\npá 19. 2. 2021 v 9:08 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru> napsal:\n\n\n\nOn 19.02.2021 10:47, Pavel Stehule\n wrote:\n\n\n\n\n\n\n\npá 19. 2. 
2021 v 8:39\n odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n napsal:\n\n\n \n\nOn 19.02.2021 10:14, Pavel Stehule wrote:\n\n\n\n\n\n\n\npá 19. 2. 2021\n v 7:51 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n napsal:\n\n\n \n\nOn 18.02.2021 20:10, Pavel Stehule wrote:\n\n\nThis has a negative impact on\n performance - and a lot of users use\n procedures without transaction control. So\n it doesn't look like a good solution.\n \n\nI am more\n concentrated on the Pg 14 release, where\n the work with SPI is redesigned, and I\n hope so this issue is fixed there. For\n older releases, I don't know. Is this\n issue related to Postgres or it is related\n to PgPro only? If it is related to\n community pg, then we should fix and we\n should accept not too good performance,\n because there is no better non invasive\n solution. If it is PgPro issue (because\n there are ATX support) you can fix it (or\n you can try backport the patch https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee895a655ce4341546facd6f23e3e8f2931b96bf\n ). You have more possibilities on PgPro\n code base. 
\n\n\n\n\n Sorry, it is not PgPro specific problem and\n recent master suffers from this bug as well.\n In the original bug report there was simple\n scenario of reproducing the problem:\n\n CREATE TABLE toasted(id serial primary key, data\n text);\n INSERT INTO toasted(data) VALUES((SELECT\n string_agg(random()::text,':') FROM\n generate_series(1, 1000)));\n INSERT INTO toasted(data) VALUES((SELECT\n string_agg(random()::text,':') FROM\n generate_series(1, 1000)));\n DO $$ DECLARE v_r record; BEGIN FOR v_r in\n SELECT data FROM toasted LOOP INSERT INTO\n toasted(data) VALUES(v_r.data);COMMIT;END\n LOOP;END;$$;\n\n\n\n\ncan you use new procedure_resowner?\n\n\n\n\n Sorry, I do not understanf your suggestion.\n How procedure_resowner can help to solve this problem?\n\n\n\n\nThis is just an idea - I think the most correct with zero\n performance impact is keeping snapshot, and this can be\n stored in procedure_resowner. \n\n\n\nThe fundamental question is if we want or allow more\n snapshots per query. The implementation is a secondary\n issue.\n\n\n\n\n I wonder if it is correct from logical point of view.\n If we commit transaction in stored procedure, then we actually\n implicitly start new transaction.\n And new transaction should have new snapshot. Otherwise its behavior\n will change.I have no problem with this. I have a problem with cycle implementation - when I iterate over some result, then this result should be consistent over all cycles. In other cases, the behaviour is not deterministic.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 19 Feb 2021 09:12:59 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
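Pavel's determinism concern above can be sketched with a toy model (plain Python for illustration, not PostgreSQL code; `loop_insensitive` and `loop_sensitive` are invented names). With the result set frozen at loop start (an "insensitive" cursor), the `DO` loop from the repro terminates deterministically; if a fresh snapshot after each commit made the cursor "sensitive", the loop would keep seeing its own inserts:

```python
# Toy model of cursor sensitivity across intra-procedure commits.
# The "table" is just a Python list; each append stands for
# INSERT ... ; COMMIT from the repro scenario.

def loop_insensitive(table):
    """FOR r IN SELECT ... with the result set frozen at loop start."""
    snapshot = list(table)          # result set fixed up front
    for row in snapshot:
        table.append(row)           # insert + commit; not seen by the loop
    return table

def loop_sensitive(table, safety_cap=10):
    """Re-reading the table after each commit: the loop chases its own inserts."""
    i = 0
    steps = 0
    while i < len(table):
        table.append(table[i])      # newly committed row becomes visible
        i += 1
        steps += 1
        if steps >= safety_cap:     # would otherwise never terminate
            break
    return table

print(loop_insensitive(["a", "b"]))   # terminates: 2 originals + 2 copies
print(len(loop_sensitive(["a", "b"])))  # grows until the safety cap stops it
```

The safety cap is artificial, of course; its only purpose is to make the non-terminating behaviour observable in a finite run.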
{
"msg_contents": "On 19.02.2021 11:12, Pavel Stehule wrote:\n>\n>\n> pá 19. 2. 2021 v 9:08 odesílatel Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>\n>\n>\n> On 19.02.2021 10:47, Pavel Stehule wrote:\n>>\n>>\n>> pá 19. 2. 2021 v 8:39 odesílatel Konstantin Knizhnik\n>> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>>\n>> napsal:\n>>\n>>\n>>\n>> On 19.02.2021 10:14, Pavel Stehule wrote:\n>>>\n>>>\n>>> pá 19. 2. 2021 v 7:51 odesílatel Konstantin Knizhnik\n>>> <k.knizhnik@postgrespro.ru\n>>> <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>>>\n>>>\n>>>\n>>> On 18.02.2021 20:10, Pavel Stehule wrote:\n>>>> This has a negative impact on performance - and a lot\n>>>> of users use procedures without transaction control. So\n>>>> it doesn't look like a good solution.\n>>>>\n>>>> I am more concentrated on the Pg 14 release, where the\n>>>> work with SPI is redesigned, and I hope so this issue\n>>>> is fixed there. For older releases, I don't know. Is\n>>>> this issue related to Postgres or it is related to\n>>>> PgPro only? If it is related to community pg, then we\n>>>> should fix and we should accept not too good\n>>>> performance, because there is no better non invasive\n>>>> solution. If it is PgPro issue (because there are ATX\n>>>> support) you can fix it (or you can try backport the\n>>>> patch\n>>>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee895a655ce4341546facd6f23e3e8f2931b96bf\n>>>> ). 
You have more possibilities on PgPro code base.\n>>>\n>>> Sorry, it is not PgPro specific problem and recent\n>>> master suffers from this bug as well.\n>>> In the original bug report there was simple scenario of\n>>> reproducing the problem:\n>>>\n>>> CREATE TABLE toasted(id serial primary key, data text);\n>>> INSERT INTO toasted(data) VALUES((SELECT\n>>> string_agg(random()::text,':') FROM generate_series(1,\n>>> 1000)));\n>>> INSERT INTO toasted(data) VALUES((SELECT\n>>> string_agg(random()::text,':') FROM generate_series(1,\n>>> 1000)));\n>>> DO $$ DECLARE v_r record; BEGIN FOR v_r in SELECT data\n>>> FROM toasted LOOP INSERT INTO toasted(data)\n>>> VALUES(v_r.data);COMMIT;END LOOP;END;$$;\n>>>\n>>>\n>>> can you use new procedure_resowner?\n>>>\n>> Sorry, I do not understanf your suggestion.\n>> How procedure_resowner can help to solve this problem?\n>>\n>>\n>> This is just an idea - I think the most correct with zero\n>> performance impact is keeping snapshot, and this can be stored in\n>> procedure_resowner.\n>>\n>> The fundamental question is if we want or allow more snapshots\n>> per query. The implementation is a secondary issue.\n>\n> I wonder if it is correct from logical point of view.\n> If we commit transaction in stored procedure, then we actually\n> implicitly start new transaction.\n> And new transaction should have new snapshot. Otherwise its\n> behavior will change.\n>\n>\n> I have no problem with this. I have a problem with cycle \n> implementation - when I iterate over some result, then this result \n> should be consistent over all cycles. 
In other cases, the behaviour \n> is not deterministic.\n\nI have investigated more the problem with toast data in stored \nprocedures and come to very strange conclusion:\nto fix the problem it is enough to pass expand_external=false to \nexpanded_record_set_tuple instead of !estate->atomic:\n\n {\n /* Only need to assign a new \ntuple value */\nexpanded_record_set_tuple(rec->erh, tuptab->vals[i],\n- true, !estate->atomic);\n+ true, false);\n }\n\nWhy it is correct?\nBecause in assign_simple_var we already forced detoasting for data:\n\n /*\n * In non-atomic contexts, we do not want to store TOAST pointers in\n * variables, because such pointers might become stale after a commit.\n * Forcibly detoast in such cases. We don't want to detoast (flatten)\n * expanded objects, however; those should be OK across a transaction\n * boundary since they're just memory-resident objects. (Elsewhere in\n * this module, operations on expanded records likewise need to request\n * detoasting of record fields when !estate->atomic. Expanded \narrays are\n * not a problem since all array entries are always detoasted.)\n */\n if (!estate->atomic && !isnull && var->datatype->typlen == -1 &&\n VARATT_IS_EXTERNAL_NON_EXPANDED(DatumGetPointer(newvalue)))\n {\n MemoryContext oldcxt;\n Datum detoasted;\n\n /*\n * Do the detoasting in the eval_mcontext to avoid long-term \nleakage\n * of whatever memory toast fetching might leak. 
Then we have \nto copy\n * the detoasted datum to the function's main context, which is a\n * pain, but there's little choice.\n */\n oldcxt = MemoryContextSwitchTo(get_eval_mcontext(estate));\n detoasted = PointerGetDatum(detoast_external_attr((struct \nvarlena *) DatumGetPointer(newvalue)));\n\n\nSo, there is no need to initialize TOAST snapshot and \"no known \nsnapshots\" error is false alarm.\nWhat do you think about it?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 19.02.2021 11:12, Pavel Stehule\n wrote:\n\n\n\n\n\n\n\n\npá 19. 2. 2021 v 9:08\n odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n napsal:\n\n\n \n\nOn 19.02.2021 10:47, Pavel Stehule wrote:\n\n\n\n\n\n\n\npá 19. 2. 2021\n v 8:39 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n napsal:\n\n\n \n\nOn 19.02.2021 10:14, Pavel Stehule wrote:\n\n\n\n\n\n\n\npá 19.\n 2. 2021 v 7:51 odesílatel Konstantin\n Knizhnik <k.knizhnik@postgrespro.ru>\n napsal:\n\n\n \n\nOn 18.02.2021 20:10, Pavel\n Stehule wrote:\n\n\nThis has a negative\n impact on performance - and a lot\n of users use procedures without\n transaction control. So it doesn't\n look like a good solution.\n \n\nI am more\n concentrated on the Pg 14\n release, where the work with SPI\n is redesigned, and I hope so\n this issue is fixed there. For\n older releases, I don't know. Is\n this issue related to Postgres\n or it is related to PgPro only?\n If it is related to community\n pg, then we should fix and we\n should accept not too good\n performance, because there is no\n better non invasive solution. If\n it is PgPro issue (because there\n are ATX support) you can fix it\n (or you can try backport the\n patch https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee895a655ce4341546facd6f23e3e8f2931b96bf\n ). You have more possibilities\n on PgPro code base. 
",
"msg_date": "Fri, 19 Feb 2021 18:19:00 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
},
{
"msg_contents": "pá 19. 2. 2021 v 16:19 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 19.02.2021 11:12, Pavel Stehule wrote:\n>\n>\n>\n> pá 19. 2. 2021 v 9:08 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n>\n>>\n>>\n>> On 19.02.2021 10:47, Pavel Stehule wrote:\n>>\n>>\n>>\n>> pá 19. 2. 2021 v 8:39 odesílatel Konstantin Knizhnik <\n>> k.knizhnik@postgrespro.ru> napsal:\n>>\n>>>\n>>>\n>>> On 19.02.2021 10:14, Pavel Stehule wrote:\n>>>\n>>>\n>>>\n>>> pá 19. 2. 2021 v 7:51 odesílatel Konstantin Knizhnik <\n>>> k.knizhnik@postgrespro.ru> napsal:\n>>>\n>>>>\n>>>>\n>>>> On 18.02.2021 20:10, Pavel Stehule wrote:\n>>>>\n>>>> This has a negative impact on performance - and a lot of users use\n>>>> procedures without transaction control. So it doesn't look like a good\n>>>> solution.\n>>>>\n>>>> I am more concentrated on the Pg 14 release, where the work with SPI is\n>>>> redesigned, and I hope so this issue is fixed there. For older releases, I\n>>>> don't know. Is this issue related to Postgres or it is related to PgPro\n>>>> only? If it is related to community pg, then we should fix and we should\n>>>> accept not too good performance, because there is no better non invasive\n>>>> solution. If it is PgPro issue (because there are ATX support) you can fix\n>>>> it (or you can try backport the patch\n>>>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee895a655ce4341546facd6f23e3e8f2931b96bf\n>>>> ). 
You have more possibilities on PgPro code base.\n>>>>\n>>>>\n>>>> Sorry, it is not PgPro specific problem and recent master suffers from\n>>>> this bug as well.\n>>>> In the original bug report there was simple scenario of reproducing the\n>>>> problem:\n>>>>\n>>>> CREATE TABLE toasted(id serial primary key, data text);\n>>>> INSERT INTO toasted(data) VALUES((SELECT string_agg(random()::text,':')\n>>>> FROM generate_series(1, 1000)));\n>>>> INSERT INTO toasted(data) VALUES((SELECT string_agg(random()::text,':')\n>>>> FROM generate_series(1, 1000)));\n>>>> DO $$ DECLARE v_r record; BEGIN FOR v_r in SELECT data FROM toasted\n>>>> LOOP INSERT INTO toasted(data) VALUES(v_r.data);COMMIT;END LOOP;END;$$;\n>>>>\n>>>\n>>> can you use new procedure_resowner?\n>>>\n>>> Sorry, I do not understanf your suggestion.\n>>> How procedure_resowner can help to solve this problem?\n>>>\n>>\n>> This is just an idea - I think the most correct with zero performance\n>> impact is keeping snapshot, and this can be stored in procedure_resowner.\n>>\n>> The fundamental question is if we want or allow more snapshots per query.\n>> The implementation is a secondary issue.\n>>\n>>\n>> I wonder if it is correct from logical point of view.\n>> If we commit transaction in stored procedure, then we actually implicitly\n>> start new transaction.\n>> And new transaction should have new snapshot. Otherwise its behavior will\n>> change.\n>>\n>\n> I have no problem with this. I have a problem with cycle implementation -\n> when I iterate over some result, then this result should be consistent over\n> all cycles. 
In other cases, the behaviour is not deterministic.\n>\n>\n> I have investigated more the problem with toast data in stored procedures\n> and come to very strange conclusion:\n> to fix the problem it is enough to pass expand_external=false to\n> expanded_record_set_tuple instead of !estate->atomic:\n>\n> {\n> /* Only need to assign a new tuple\n> value */\n>\n> expanded_record_set_tuple(rec->erh, tuptab->vals[i],\n> -\n> true, !estate->atomic);\n> +\n> true, false);\n> }\n>\n> Why it is correct?\n> Because in assign_simple_var we already forced detoasting for data:\n>\n> /*\n> * In non-atomic contexts, we do not want to store TOAST pointers in\n> * variables, because such pointers might become stale after a commit.\n> * Forcibly detoast in such cases. We don't want to detoast (flatten)\n> * expanded objects, however; those should be OK across a transaction\n> * boundary since they're just memory-resident objects. (Elsewhere in\n> * this module, operations on expanded records likewise need to request\n> * detoasting of record fields when !estate->atomic. Expanded arrays\n> are\n> * not a problem since all array entries are always detoasted.)\n> */\n> if (!estate->atomic && !isnull && var->datatype->typlen == -1 &&\n> VARATT_IS_EXTERNAL_NON_EXPANDED(DatumGetPointer(newvalue)))\n> {\n> MemoryContext oldcxt;\n> Datum detoasted;\n>\n> /*\n> * Do the detoasting in the eval_mcontext to avoid long-term\n> leakage\n> * of whatever memory toast fetching might leak. 
Then we have to\n> copy\n> * the detoasted datum to the function's main context, which is a\n> * pain, but there's little choice.\n> */\n> oldcxt = MemoryContextSwitchTo(get_eval_mcontext(estate));\n> detoasted = PointerGetDatum(detoast_external_attr((struct varlena\n> *) DatumGetPointer(newvalue)));\n>\n>\n> So, there is no need to initialize TOAST snapshot and \"no known snapshots\"\n> error is false alarm.\n> What do you think about it?\n>\n\nThis is Tom's code, so important is his opinion.\n\nRegards\n\nPavel\n\n\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>",
"msg_date": "Fri, 19 Feb 2021 16:28:02 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with accessing TOAST data in stored procedures"
}
] |
[
{
"msg_contents": "Hello\n\nThe REINDEX CONCURRENTLY documentation states that if a transient index\nused lingers, the fix is to drop the invalid index and perform RC again;\nand that this is to be done for \"ccnew\" indexes and also for \"ccold\"\nindexes:\n\n The recommended recovery method in such cases is to drop the invalid index\n and try again to perform <command>REINDEX CONCURRENTLY</command>. The\n concurrent index created during the processing has a name ending in the\n suffix <literal>ccnew</literal>, or <literal>ccold</literal> if it is an\n old index definition which we failed to drop. Invalid indexes can be\n dropped using <literal>DROP INDEX</literal>, including invalid toast\n indexes.\n\nBut this seems misleading to me. It is correct advice for \"ccnew\"\nindexes, of course. But if the index is named \"ccold\", then the rebuild\nof the index actually succeeded, so you can just drop the ccold index\nand not rebuild anything.\n\nIn other words I propose to reword this paragraph as follows:\n\n If the transient index created during the concurrent operation is\n suffixed <literal>ccnew</literal>, the recommended recovery method\n is to drop the invalid index using <literal>DROP INDEX</literal>,\n and try to perform <command>REINDEX CONCURRENTLY</command> again. \n If the transient index is instead suffixed <literal>ccold</literal>,\n it corresponds to the original index which we failed to drop;\n the recommended recovery method is to just drop said index, since the\n rebuild proper has been successful.\n\n(The original talks about \"the concurrent index\", which seems somewhat\nsloppy thinking. I used the term \"transient index\" instead.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Wed, 19 Aug 2020 17:13:12 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "\"ccold\" left by reindex concurrently are droppable?"
},
{
"msg_contents": "On Wed, Aug 19, 2020 at 05:13:12PM -0400, Alvaro Herrera wrote:\n> In other words I propose to reword this paragraph as follows:\n> \n> If the transient index created during the concurrent operation is\n> suffixed <literal>ccnew</literal>, the recommended recovery method\n> is to drop the invalid index using <literal>DROP INDEX</literal>,\n> and try to perform <command>REINDEX CONCURRENTLY</command> again. \n> If the transient index is instead suffixed <literal>ccold</literal>,\n> it corresponds to the original index which we failed to drop;\n> the recommended recovery method is to just drop said index, since the\n> rebuild proper has been successful.\n\nYes, that's an improvement. It would be better to backpatch that. So\n+1 from me.\n\n> (The original talks about \"the concurrent index\", which seems somewhat\n> sloppy thinking. I used the term \"transient index\" instead.)\n\nUsing transient to refer to an index aimed at being ephemeral sounds\nfine to me in this context.\n--\nMichael",
"msg_date": "Thu, 20 Aug 2020 14:17:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \"ccold\" left by reindex concurrently are droppable?"
},
{
"msg_contents": "On Thu, Aug 20, 2020 at 7:17 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Aug 19, 2020 at 05:13:12PM -0400, Alvaro Herrera wrote:\n> > In other words I propose to reword this paragraph as follows:\n> >\n> > If the transient index created during the concurrent operation is\n> > suffixed <literal>ccnew</literal>, the recommended recovery method\n> > is to drop the invalid index using <literal>DROP INDEX</literal>,\n> > and try to perform <command>REINDEX CONCURRENTLY</command> again.\n> > If the transient index is instead suffixed <literal>ccold</literal>,\n> > it corresponds to the original index which we failed to drop;\n> > the recommended recovery method is to just drop said index, since the\n> > rebuild proper has been successful.\n>\n> Yes, that's an improvement. It would be better to backpatch that. So\n> +1 from me.\n\n+1, that's an improvement and should be backpatched.\n\n>\n> > (The original talks about \"the concurrent index\", which seems somewhat\n> > sloppy thinking. I used the term \"transient index\" instead.)\n>\n> Using transient to refer to an index aimed at being ephemeral sounds\n> fine to me in this context.\n\nAgreed.\n\n\n",
"msg_date": "Thu, 20 Aug 2020 10:18:07 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ccold\" left by reindex concurrently are droppable?"
},
{
"msg_contents": "Thanks, Michael and Julien! Pushed to 12-master, with a slight\nrewording to use the passive voice, hopefully matching the surrounding\ntext. I also changed \"temporary\" to \"transient\" in another line, for\nconsistency.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 20 Aug 2020 13:54:02 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ccold\" left by reindex concurrently are droppable?"
}
] |
[
{
"msg_contents": "Hello\n\nI have a publisher instance, cpu 32 core, ram 64GB, with SSD and installed\nPostgreSQL 10 on it. I want to use logical replication with it , I\ncreate publication on it, add about 20 tables in the pub, each tables will\nhave about 1 million line data,.\n\nI wonder how many subscriber instance the publisher can affort, what if i\nhave 100 sub,\nwhich means there will be 100 logical replication slot and 100 wal sender\nprocess on the publisher instance, will so many sub slow down the publisher\nperformance?\n\nI have test on my own machine cpu 8core, ram 16GB as a publisher with 10\nsub (anthoer machine start 10 postgres instance, every one create a sub),\nand everything works fine. I test pgbench on my 8core, 16GB machine,\nwithout sub or with 10 sub, the tps is the same. Even if there is one\nmillion line data on the pub, and on sub side the table is empty, and when\ni create subscription on the 10 postgres sub instance, it copy data from\npub is quickly enough.\n\nI wonder how many logical replication slot or wal sender is ok for the\n32core, 64GB machine. 100 sub? 500 sub?",
"msg_date": "Thu, 20 Aug 2020 10:49:46 +0800",
"msg_from": "=?UTF-8?B?6IOh5bi46b2Q?= <huchangqiqi@gmail.com>",
"msg_from_op": true,
"msg_subject": "The number of logical replication slot or wal sender recommend"
}
] |
[
{
"msg_contents": "Hello,\r\n\r\nI am trying to handle the limit that we can't do a tuple move caused by BEFORE TRIGGER,\r\nduring which I get two doubt points:\r\n\r\nThe first issue:\r\nIn ExecBRUpdateTriggers() or ExecBRInsertTriggers() function why we need to check the\r\nresult slot after every trigger call. If we should check the result slot after all triggers are\r\ncalled.\r\n\r\nFor example, we have a table t1(i int, j int), and a BEFORE INSERT TRIGGER on t1 make i - 1, \r\nand another BEFORE INSERT TRIGGER on t1 make i + 1. If the first trigger causes a partition\r\nmove, then the insert query will be interrupted. However, it will not change partition after\r\nall triggers are called.\r\n\r\nThe second issue:\r\nI read the code for partition move caused by an update, it deletes tuple in an old partition\r\nand inserts a new tuple in a partition. But during the insert, it triggers the trigger on the new\r\npartition, so the result value may be changed again, I want to know if it's intended way? In\r\nmy mind, if an insert produced by partition move should not trigger before trigger again.\r\n\r\n\r\nI make an initial patch as my thought, sorry if I missing some of the historical discussion.\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca",
"msg_date": "Thu, 20 Aug 2020 17:23:05 +0800",
"msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Small doubt on update a partition when some rows need to move among\n partition"
},
{
"msg_contents": "On Thu, Aug 20, 2020 at 5:22 PM movead.li@highgo.ca <movead.li@highgo.ca> wrote:\n>\n> Hello,\n>\n> I am trying to handle the limit that we can't do a tuple move caused by BEFORE TRIGGER,\n> during which I get two doubt points:\n>\n> The first issue:\n> In ExecBRUpdateTriggers() or ExecBRInsertTriggers() function why we need to check the\n> result slot after every trigger call. If we should check the result slot after all triggers are\n> called.\n>\n> For example, we have a table t1(i int, j int), and a BEFORE INSERT TRIGGER on t1 make i - 1,\n> and another BEFORE INSERT TRIGGER on t1 make i + 1. If the first trigger causes a partition\n> move, then the insert query will be interrupted. However, it will not change partition after\n> all triggers are called.\n\nThis was discussed at\nhttps://www.postgresql.org/message-id/20200318210213.GA9781@alvherre.pgsql.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 20 Aug 2020 18:27:50 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small doubt on update a partition when some rows need to move\n among partition"
}
] |
[
{
"msg_contents": "Hi,\n\nI was just looking over the JIT code and noticed a few comment and\ndocumentation typos. The attached fixes them.\n\nI'll push this in my UTC+12 morning if nobody objects to any of the\nchanges before then.\n\nUnsure if it'll be worth backpatching or not.\n\nDavid",
"msg_date": "Thu, 20 Aug 2020 22:19:49 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix a couple of typos in JIT"
},
{
"msg_contents": "At 2020-08-20 22:19:49 +1200, dgrowleyml@gmail.com wrote:\n>\n> I was just looking over the JIT code and noticed a few comment and\n> documentation typos. The attached fixes them.\n\nThe first change does not seem to be correct:\n\n-That this is done at query execution time, possibly even only in cases\n-where the relevant task is done a number of times, makes it JIT,\n-rather than ahead-of-time (AOT). Given the way JIT compilation is used\n-in PostgreSQL, the lines between interpretation, AOT and JIT are\n-somewhat blurry.\n+This is done at query execution time, possibly even only in cases where\n+the relevant task is done a number of times, makes it JIT, rather than\n+ahead-of-time (AOT). Given the way JIT compilation is used in PostgreSQL,\n+the lines between interpretation, AOT and JIT are somewhat blurry.\n\nThe original sentence may not be the most shining example of\nsentence-ry, but it is correct, and removing the \"That\" breaks it.\n\n-- Abhijit\n\n\n",
"msg_date": "Thu, 20 Aug 2020 15:59:26 +0530",
"msg_from": "Abhijit Menon-Sen <ams@toroid.org>",
"msg_from_op": false,
"msg_subject": "Re: Fix a couple of typos in JIT"
},
{
"msg_contents": "On Thu, 20 Aug 2020 at 22:29, Abhijit Menon-Sen <ams@toroid.org> wrote:\n>\n> At 2020-08-20 22:19:49 +1200, dgrowleyml@gmail.com wrote:\n> >\n> > I was just looking over the JIT code and noticed a few comment and\n> > documentation typos. The attached fixes them.\n>\n> The first change does not seem to be correct:\n>\n> -That this is done at query execution time, possibly even only in cases\n> -where the relevant task is done a number of times, makes it JIT,\n> -rather than ahead-of-time (AOT). Given the way JIT compilation is used\n> -in PostgreSQL, the lines between interpretation, AOT and JIT are\n> -somewhat blurry.\n> +This is done at query execution time, possibly even only in cases where\n> +the relevant task is done a number of times, makes it JIT, rather than\n> +ahead-of-time (AOT). Given the way JIT compilation is used in PostgreSQL,\n> +the lines between interpretation, AOT and JIT are somewhat blurry.\n>\n> The original sentence may not be the most shining example of\n> sentence-ry, but it is correct, and removing the \"That\" breaks it.\n\nOh, I see. I missed that. Perhaps it would be better changed to \"The\nfact that this\"\n\nDavid\n\n\n",
"msg_date": "Thu, 20 Aug 2020 22:51:41 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix a couple of typos in JIT"
},
{
"msg_contents": "At 2020-08-20 22:51:41 +1200, dgrowleyml@gmail.com wrote:\n>\n> > +This is done at query execution time, possibly even only in cases where\n> > +the relevant task is done a number of times, makes it JIT, rather than\n> > +ahead-of-time (AOT). Given the way JIT compilation is used in PostgreSQL,\n> > +the lines between interpretation, AOT and JIT are somewhat blurry.\n> > […]\n> \n> Oh, I see. I missed that. Perhaps it would be better changed to \"The\n> fact that this\"\n\nOr maybe even:\n\n This is JIT, rather than ahead-of-time (AOT) compilation, because it\n is done at query execution time, and perhaps only in cases where the\n relevant task is repeated a number of times. Given the way …\n\n-- Abhijit\n\n\n",
"msg_date": "Thu, 20 Aug 2020 16:56:53 +0530",
"msg_from": "Abhijit Menon-Sen <ams@toroid.org>",
"msg_from_op": false,
"msg_subject": "Re: Fix a couple of typos in JIT"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-20 15:59:26 +0530, Abhijit Menon-Sen wrote:\n> The original sentence may not be the most shining example of\n> sentence-ry, but it is correct, and removing the \"That\" breaks it.\n\nThat made me laugh ;)\n\nDavid, sounds good, after adapting to Abhijit's concerns.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 20 Aug 2020 07:25:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Fix a couple of typos in JIT"
},
{
"msg_contents": "On Fri, 21 Aug 2020 at 02:25, Andres Freund <andres@anarazel.de> wrote:\n> David, sounds good, after adapting to Abhijit's concerns.\n\nThank you both for having a look. Now pushed.\n\nDavid\n\n\n",
"msg_date": "Fri, 21 Aug 2020 09:38:01 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix a couple of typos in JIT"
}
] |
[
{
"msg_contents": "While trying to make sense of Adam Sjøgren's problem [1], I found\nmyself staring at ReplicationSlotsComputeRequiredXmin() in slot.c.\nIt seems to me that that is very shaky code, on two different\ngrounds:\n\n1. Sometimes it's called with ProcArrayLock already held exclusively.\nThis means that any delay in acquiring the ReplicationSlotControlLock\ntranslates directly into a hold on ProcArrayLock; in other words,\nevery acquisition of the ReplicationSlotControlLock is just as bad\nfor concurrency as an acquisition of ProcArrayLock. While I didn't\nsee any places that were doing really obviously slow things while\nholding ReplicationSlotControlLock, this is disturbing. Do we really\nneed it to be like that?\n\n2. On the other hand, the code is *releasing* the\nReplicationSlotControlLock before it calls\nProcArraySetReplicationSlotXmin, and that seems like a flat out\nconcurrency bug. How can we be sure that the values we're storing\ninto the shared xmin fields aren't stale by the time we acquire\nthe ProcArrayLock (in the case where we didn't hold it already)?\nI'm concerned that in the worst case this code could make the\nshared xmin fields go backwards.\n\nBoth of these issues could be solved, I think, if we got rid of\nthe provision for calling with ProcArrayLock already held and\nmoved the ProcArraySetReplicationSlotXmin call inside the hold\nof ReplicationSlotControlLock. But maybe I'm missing something\nabout why that would be worse.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/87364kdsim.fsf%40tullinup.koldfront.dk\n\n\n",
"msg_date": "Thu, 20 Aug 2020 12:58:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "ReplicationSlotsComputeRequiredXmin seems pretty questionable"
},
{
"msg_contents": "On Thu, Aug 20, 2020 at 10:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> While trying to make sense of Adam Sjøgren's problem [1], I found\n> myself staring at ReplicationSlotsComputeRequiredXmin() in slot.c.\n> It seems to me that that is very shaky code, on two different\n> grounds:\n>\n> 1. Sometimes it's called with ProcArrayLock already held exclusively.\n> This means that any delay in acquiring the ReplicationSlotControlLock\n> translates directly into a hold on ProcArrayLock; in other words,\n> every acquisition of the ReplicationSlotControlLock is just as bad\n> for concurrency as an acquisition of ProcArrayLock. While I didn't\n> see any places that were doing really obviously slow things while\n> holding ReplicationSlotControlLock, this is disturbing. Do we really\n> need it to be like that?\n>\n> 2. On the other hand, the code is *releasing* the\n> ReplicationSlotControlLock before it calls\n> ProcArraySetReplicationSlotXmin, and that seems like a flat out\n> concurrency bug. How can we be sure that the values we're storing\n> into the shared xmin fields aren't stale by the time we acquire\n> the ProcArrayLock (in the case where we didn't hold it already)?\n> I'm concerned that in the worst case this code could make the\n> shared xmin fields go backwards.\n>\n\nIt is not clear to me how those values can go backward. Basically, we\ninstall those values in slots after getting it from\nGetOldestSafeDecodingTransactionId() and then those always seem to get\nadvanced. And GetOldestSafeDecodingTransactionId() takes into account\nthe already stored shared values of replication_slot_xmin and\nreplication_slot_catalog_xmin for computing the xmin_horizon for slot.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 22 Aug 2020 16:16:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ReplicationSlotsComputeRequiredXmin seems pretty questionable"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Thu, Aug 20, 2020 at 10:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> 2. On the other hand, the code is *releasing* the\n>> ReplicationSlotControlLock before it calls\n>> ProcArraySetReplicationSlotXmin, and that seems like a flat out\n>> concurrency bug.\n\n> It is not clear to me how those values can go backward.\n\nAfter releasing ReplicationSlotControlLock, that code is holding no\nlock at all (in the already_locked=false case I'm concerned about).\nThus the scenario to consider is:\n\n1. Walsender A runs ReplicationSlotsComputeRequiredXmin, computes\nsome perfectly-valid xmins, releases ReplicationSlotControlLock,\namd then gets swapped out to Timbuktu.\n\n2. Time passes and the \"true values\" of those xmins advance thanks\nto other walsender activity.\n\n3. Walsender B runs ReplicationSlotsComputeRequiredXmin, computes\nsome perfectly-valid xmins, and successfully stores them in the\nprocarray.\n\n4. Walsender A returns from never-never land, and stores its now\nquite stale results in the procarray, causing the globally visible\nxmins to go backwards from where they were after step 3.\n\nI see no mechanism in the code that prevents this scenario.\nOn reflection I'm not even very sure that the code change\nI'm suggesting would prevent it. It'd prevent walsenders\nfrom entering or exiting until we've updated the procarray,\nbut there's nothing to stop the furthest-back walsender from\nadvancing its values.\n\nThere may be some argument why this can't lead to a problem,\nbut I don't see any comments making such an argument, either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 22 Aug 2020 11:03:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: ReplicationSlotsComputeRequiredXmin seems pretty questionable"
},
{
"msg_contents": "On Sat, Aug 22, 2020 at 8:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Thu, Aug 20, 2020 at 10:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> 2. On the other hand, the code is *releasing* the\n> >> ReplicationSlotControlLock before it calls\n> >> ProcArraySetReplicationSlotXmin, and that seems like a flat out\n> >> concurrency bug.\n>\n> > It is not clear to me how those values can go backward.\n>\n> After releasing ReplicationSlotControlLock, that code is holding no\n> lock at all (in the already_locked=false case I'm concerned about).\n> Thus the scenario to consider is:\n>\n> 1. Walsender A runs ReplicationSlotsComputeRequiredXmin, computes\n> some perfectly-valid xmins, releases ReplicationSlotControlLock,\n> amd then gets swapped out to Timbuktu.\n>\n> 2. Time passes and the \"true values\" of those xmins advance thanks\n> to other walsender activity.\n>\n> 3. Walsender B runs ReplicationSlotsComputeRequiredXmin, computes\n> some perfectly-valid xmins, and successfully stores them in the\n> procarray.\n>\n> 4. Walsender A returns from never-never land, and stores its now\n> quite stale results in the procarray, causing the globally visible\n> xmins to go backwards from where they were after step 3.\n>\n> I see no mechanism in the code that prevents this scenario.\n> On reflection I'm not even very sure that the code change\n> I'm suggesting would prevent it. It'd prevent walsenders\n> from entering or exiting until we've updated the procarray,\n> but there's nothing to stop the furthest-back walsender from\n> advancing its values.\n>\n\nI think we can prevent that if we allow\nProcArraySetReplicationSlotXmin to update the shared values only when\nnew xmin values follows the shared values. I am not very sure if it is\nsafe but I am not able to think of a problem with it. 
The other way\ncould be to always acquire ProcArrayLock before calling\nReplicationSlotsComputeRequiredXmin or before acquiring\nReplicationSlotControlLock but that seems too restrictive.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 Aug 2020 11:43:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ReplicationSlotsComputeRequiredXmin seems pretty questionable"
}
] |
[
{
"msg_contents": "While working with Nathan Bossart on an extension, we came across an\ninteresting quirk and possible inconsistency in the PostgreSQL code\naround infomask flags. I'd like to know if there's something I'm\nmisunderstanding here or if this really is a correctness/robustness gap\nin the code.\n\nAt the root of it is the relationship between the XMAX_LOCK_ONLY and\nXMAX_COMMITTED infomask bits.\n\nOne of the things in that all-important foreign key patch from 2013\n(0ac5ad51) was to tweak the UpdateXmaxHintBits() function to always set\nthe INVALID bit if the transaction was a locker only (even if the\nlocking transaction committed).\n\nhttps://github.com/postgres/postgres/blob/9168793d7275b4b318c153d607fba55d14098c19/src/backend/access/heap/heapam.c#L1748\n\nHowever it seems pretty clear from pretty much all of the visibility\ncode that while it may not be the usual case, it is considered a valid\nstate to have the XMAX_LOCK_ONLY and XMAX_COMMITTED bits set at the same\ntime. 
This combination is handled correctly throughout heapam_visibility.c:\n\nhttps://github.com/postgres/postgres/blob/7559d8ebfa11d98728e816f6b655582ce41150f3/src/backend/access/heap/heapam_visibility.c#L273\nhttps://github.com/postgres/postgres/blob/7559d8ebfa11d98728e816f6b655582ce41150f3/src/backend/access/heap/heapam_visibility.c#L606\nhttps://github.com/postgres/postgres/blob/7559d8ebfa11d98728e816f6b655582ce41150f3/src/backend/access/heap/heapam_visibility.c#L871\nhttps://github.com/postgres/postgres/blob/7559d8ebfa11d98728e816f6b655582ce41150f3/src/backend/access/heap/heapam_visibility.c#L1271\nhttps://github.com/postgres/postgres/blob/7559d8ebfa11d98728e816f6b655582ce41150f3/src/backend/access/heap/heapam_visibility.c#L1447\n\nBut then there's one place in heapam.c where an assumption appears that\nthis state will never happen - the compute_new_xmax_infomask() function:\n\nhttps://github.com/postgres/postgres/blob/9168793d7275b4b318c153d607fba55d14098c19/src/backend/access/heap/heapam.c#L4800\n\n else if (old_infomask & HEAP_XMAX_COMMITTED)\n {\n ...\n if (old_infomask2 & HEAP_KEYS_UPDATED)\n status = MultiXactStatusUpdate;\n else\n status = MultiXactStatusNoKeyUpdate;\n new_status = get_mxact_status_for_lock(mode, is_update);\n ...\n new_xmax = MultiXactIdCreate(xmax, status, add_to_xmax, new_status);\n\nWhen that code sees XMAX_COMMITTED, it assumes the xmax can't possibly\nbe LOCK_ONLY and it sets the status to something sufficiently high\nenough to guarantee that ISUPDATE_from_mxstatus() returns true. 
That\nmeans that when you try to update this tuple and\ncompute_new_xmax_infomask() is called with an \"update\" status, two\n\"update\" statuses are sent to MultiXactIdCreate() which then fails\n(thank goodness) with the error \"new multixact has more than one\nupdating member\".\n\nhttps://github.com/postgres/postgres/blob/cd5e82256de5895595cdd99ecb03aea15b346f71/src/backend/access/transam/multixact.c#L784\n\nThe UpdateXmaxHintBits() code to always set the INVALID bit wasn't in\nany patches on the mailing list but it was committed and it seems to\nhave worked very well. As a result it seems nearly impossible to get\ninto the state where you have both XMAX_LOCK_ONLY and XMAX_COMMITTED\nbits set; thus it seems we've avoided problems in\ncompute_new_xmax_infomask().\n\nBut nonetheless it seems worth making the code more robust by having the\ncompute_new_xmax_infomask() function handle this state correctly just as\nthe visibility code does. It should only require a simple patch like\nthis (credit to Nathan Bossart):\n\ndiff --git a/src/backend/access/heap/heapam.c\nb/src/backend/access/heap/heapam.c\nindex d881f4cd46..371e3e4f61 100644\n--- a/src/backend/access/heap/heapam.c\n+++ b/src/backend/access/heap/heapam.c\n@@ -4695,7 +4695,9 @@ compute_new_xmax_infomask(TransactionId xmax,\nuint16 old_infomask,\n l5:\n new_infomask = 0;\n new_infomask2 = 0;\n- if (old_infomask & HEAP_XMAX_INVALID)\n+ if (old_infomask & HEAP_XMAX_INVALID ||\n+ (old_infomask & HEAP_XMAX_COMMITTED &&\n+ HEAP_XMAX_IS_LOCKED_ONLY(old_infomask)))\n {\n /*\n * No previous locker; we just insert our own TransactionId.\n\nAlternatively, if we don't want to take this approach, then I'd argue\nthat we should update README.tuplock to explicitly state that\nXMAX_LOCK_ONLY and XMAX_COMMITTED are incompatible (just as it already\nstates for HEAP_XMAX_IS_MULTI and HEAP_XMAX_COMMITTED) and clean up the\ncode in heapam_visibility.c for consistency.\n\nMight be worth adding a note to README.tuplock either way 
about\nvalid/invalid combinations of infomask flags. Might help avoid some\nconfusion as people approach the code base.\n\nWhat do others think about this?\n\nThanks,\nJeremy\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Thu, 20 Aug 2020 16:30:47 -0700",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": true,
"msg_subject": "XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)"
},
{
"msg_contents": "On 2020-Aug-20, Jeremy Schneider wrote:\n\n> While working with Nathan Bossart on an extension, we came across an\n> interesting quirk and possible inconsistency in the PostgreSQL code\n> around infomask flags. I'd like to know if there's something I'm\n> misunderstanding here or if this really is a correctness/robustness gap\n> in the code.\n> \n> At the root of it is the relationship between the XMAX_LOCK_ONLY and\n> XMAX_COMMITTED infomask bits.\n\nI spent a lot of time wondering about XMAX_COMMITTED. It seemed to me\nthat it would make no sense to have xacts that were lock-only yet have\nXMAX_COMMITTED set; so I looked hard to make sure no place would ever\ncause such a situation to arise. However, I still made my best effort\nto make the code cope with such a combination correctly if it ever\nshowed up.\n\nAnd I may have missed assumptions such as this one, even if they seem\nobvious in retrospect, such as in compute_new_xmax_infomask (which, as\nyou'll notice, changed considerably from what was initially committed):\n\n> But then there's one place in heapam.c where an assumption appears that\n> this state will never happen - the compute_new_xmax_infomask() function:\n> \n> https://github.com/postgres/postgres/blob/9168793d7275b4b318c153d607fba55d14098c19/src/backend/access/heap/heapam.c#L4800\n> \n> else if (old_infomask & HEAP_XMAX_COMMITTED)\n> {\n> ...\n> if (old_infomask2 & HEAP_KEYS_UPDATED)\n> status = MultiXactStatusUpdate;\n> else\n> status = MultiXactStatusNoKeyUpdate;\n> new_status = get_mxact_status_for_lock(mode, is_update);\n> ...\n> new_xmax = MultiXactIdCreate(xmax, status, add_to_xmax, new_status);\n> \n> When that code sees XMAX_COMMITTED, it assumes the xmax can't possibly\n> be LOCK_ONLY and it sets the status to something sufficiently high\n> enough to guarantee that ISUPDATE_from_mxstatus() returns true. 
That\n> means that when you try to update this tuple and\n> compute_new_xmax_infomask() is called with an \"update\" status, two\n> \"update\" statuses are sent to MultiXactIdCreate() which then fails\n> (thank goodness) with the error \"new multixact has more than one\n> updating member\".\n> \n> https://github.com/postgres/postgres/blob/cd5e82256de5895595cdd99ecb03aea15b346f71/src/backend/access/transam/multixact.c#L784\n\nHave you ever observed this error case hit? I've never seen a report of\nthat.\n\n> The UpdateXmaxHintBits() code to always set the INVALID bit wasn't in\n> any patches on the mailing list but it was committed and it seems to\n> have worked very well. As a result it seems nearly impossible to get\n> into the state where you have both XMAX_LOCK_ONLY and XMAX_COMMITTED\n> bits set; thus it seems we've avoided problems in\n> compute_new_xmax_infomask().\n\nPhew.\n\n(I guess I should fully expect that somebody, given sufficient time,\nwould carefully inspect the committed code against submitted patches ...\nespecially given that I do such inspections myself.)\n\n> But nonetheless it seems worth making the code more robust by having the\n> compute_new_xmax_infomask() function handle this state correctly just as\n> the visibility code does. 
It should only require a simple patch like\n> this (credit to Nathan Bossart):\n> \n> diff --git a/src/backend/access/heap/heapam.c\n> b/src/backend/access/heap/heapam.c\n> index d881f4cd46..371e3e4f61 100644\n> --- a/src/backend/access/heap/heapam.c\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -4695,7 +4695,9 @@ compute_new_xmax_infomask(TransactionId xmax,\n> uint16 old_infomask,\n> l5:\n> new_infomask = 0;\n> new_infomask2 = 0;\n> - if (old_infomask & HEAP_XMAX_INVALID)\n> + if (old_infomask & HEAP_XMAX_INVALID ||\n> + (old_infomask & HEAP_XMAX_COMMITTED &&\n> + HEAP_XMAX_IS_LOCKED_ONLY(old_infomask)))\n> {\n> /*\n> * No previous locker; we just insert our own TransactionId.\n\nWe could do this in stable branches, if there were any reports that\nthis inconsistency is happening in real world databases.\n\n> Alternatively, if we don't want to take this approach, then I'd argue\n> that we should update README.tuplock to explicitly state that\n> XMAX_LOCK_ONLY and XMAX_COMMITTED are incompatible (just as it already\n> states for HEAP_XMAX_IS_MULTI and HEAP_XMAX_COMMITTED) and clean up the\n> code in heapam_visibility.c for consistency.\n\nYeah, I like this approach better for the master branch; not just clean\nup as in remove the cases that handle it, but also actively elog(ERROR)\nif the condition ever occurs (hopefully with some known way to fix the\nproblem; maybe by \"WITH tup AS (DELETE FROM tab WHERE .. RETURNING *)\nINSERT * INTO tab FROM tup\" or similar.)\n\n> Might be worth adding a note to README.tuplock either way about\n> valid/invalid combinations of infomask flags. Might help avoid some\n> confusion as people approach the code base.\n\nAbsolutely.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 26 Aug 2020 15:15:51 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)"
},
{
"msg_contents": "On 8/26/20, 12:16 PM, \"Alvaro Herrera\" <alvherre@2ndquadrant.com> wrote:\r\n> On 2020-Aug-20, Jeremy Schneider wrote:\r\n>> Alternatively, if we don't want to take this approach, then I'd argue\r\n>> that we should update README.tuplock to explicitly state that\r\n>> XMAX_LOCK_ONLY and XMAX_COMMITTED are incompatible (just as it already\r\n>> states for HEAP_XMAX_IS_MULTI and HEAP_XMAX_COMMITTED) and clean up the\r\n>> code in heapam_visibility.c for consistency.\r\n>\r\n> Yeah, I like this approach better for the master branch; not just clean\r\n> up as in remove the cases that handle it, but also actively elog(ERROR)\r\n> if the condition ever occurs (hopefully with some known way to fix the\r\n> problem; maybe by \"WITH tup AS (DELETE FROM tab WHERE .. RETURNING *)\r\n> INSERT * INTO tab FROM tup\" or similar.)\r\n\r\n+1. I wouldn't mind picking this up, but it might be some time before\r\nI can get to it.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Wed, 26 Aug 2020 21:12:36 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)"
},
{
"msg_contents": "On Wed, Aug 26, 2020 at 12:16 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> We could do this in stable branches, if there were any reports that\n> this inconsistency is happening in real world databases.\n\nI hope that the new heapam amcheck stuff eventually leads to our\nhaving total (or near total) certainty about what correct on-disk\nstates are possible, regardless of the exact pg_upgrade + minor\nversion paths. We should take a strict line on this stuff where\npossible. If that turns out to be wrong in some detail, then it's\nrelatively easy to fix as a bug in amcheck itself.\n\nThere is a high cost to allowing ambiguity about what heapam states\nare truly legal/possible. It makes future development projects harder.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 27 Aug 2020 16:47:11 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)"
},
{
"msg_contents": "\n\n> On Aug 27, 2020, at 4:47 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Wed, Aug 26, 2020 at 12:16 PM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n>> We could do this in stable branches, if there were any reports that\n>> this inconsistency is happening in real world databases.\n> \n> I hope that the new heapam amcheck stuff eventually leads to our\n> having total (or near total) certainty about what correct on-disk\n> states are possible, regardless of the exact pg_upgrade + minor\n> version paths. We should take a strict line on this stuff where\n> possible. If that turns out to be wrong in some detail, then it's\n> relatively easy to fix as a bug in amcheck itself.\n> \n> There is a high cost to allowing ambiguity about what heapam states\n> are truly legal/possible. It makes future development projects harder.\n\nThe amcheck patch has Asserts in hio.c that purport to disallow writing invalid header bits to disk. The combination being discussed here is not disallowed, but if there is consensus that it is an illegal combination, we could certainly add it:\n\ndiff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c\nindex aa3f14c019..ca357410a2 100644\n--- a/src/backend/access/heap/hio.c\n+++ b/src/backend/access/heap/hio.c\n@@ -47,6 +47,17 @@ RelationPutHeapTuple(Relation relation,\n */\n Assert(!token || HeapTupleHeaderIsSpeculative(tuple->t_data));\n \n+ /*\n+ * Do not allow tuples with invalid combinations of hint bits to be placed\n+ * on a page. 
These combinations are detected as corruption by the\n+ * contrib/amcheck logic, so if you disable one or both of these\n+ * assertions, make corresponding changes there.\n+ */\n+ Assert(!((tuple->t_data->t_infomask & HEAP_XMAX_LOCK_ONLY) &&\n+ (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)));\n+ Assert(!((tuple->t_data->t_infomask & HEAP_XMAX_COMMITTED) &&\n+ (tuple->t_data->t_infomask & HEAP_XMAX_IS_MULTI)));\n+\n /* Add the tuple to the page */\n pageHeader = BufferGetPage(buffer);\n \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 27 Aug 2020 16:57:15 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 4:57 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The amcheck patch has Asserts in hio.c that purport to disallow writing invalid header bits to disk.\n\nCan it also be a runtime check for the verification process? I think\nthat we can easily afford to be fairly exhaustive about stuff like\nthis.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 27 Aug 2020 16:58:59 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)"
},
{
"msg_contents": "\n\n> On Aug 27, 2020, at 4:58 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Thu, Aug 27, 2020 at 4:57 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> The amcheck patch has Asserts in hio.c that purport to disallow writing invalid header bits to disk.\n> \n> Can it also be a runtime check for the verification process? I think\n> that we can easily afford to be fairly exhaustive about stuff like\n> this.\n\nThese two are both checked in verify_heapam.c. The point is that the system will also refuse to write out pages that have this corruption. The Asserts could be converted to panics or whatever, but that has other more serious consequences. Did you have something else in mind?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 27 Aug 2020 17:06:23 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 5:06 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> These two are both checked in verify_heapam.c. The point is that the system will also refuse to write out pages that have this corruption. The Asserts could be converted to panics or whatever, but that has other more serious consequences. Did you have something else in mind?\n\nOh, I see -- you propose to add both an assert to hio.c, as well as a\ncheck to amcheck itself. That seems like the right thing to do.\n\nThanks\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 27 Aug 2020 17:18:04 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: XMAX_LOCK_ONLY and XMAX_COMMITTED (fk/multixact code)"
}
] |
[
{
"msg_contents": "I'm concerned about how the FSM gives out pages to heapam. Disabling\nthe FSM entirely helps TPC-C/BenchmarkSQL, which uses non-default heap\nfillfactors for most tables [1]. Testing has shown that this actually\nincreases throughput for the benchmark (as measured in TPM) by 5% -\n9%, even though my approach is totally naive and stupid. My approach\nmakes one or two small tables much bigger, but that doesn't have that\nmuch downside for the workload in practice. My approach helps by\naccidentally improving temporal locality -- related records are more\nconsistently located on the same block, which in practice reduces the\nnumber of pages dirtied and the number of FPIs generated. TPC-C has a\ntendency to insert a set of related tuples together (e.g., order lines\nfrom an order), while later updating all of those records together.\n\nInterestingly, the problems start before we even begin the benchmark\nproper, and can be observed directly using simple ad-hoc queries (I\ndeveloped some simple SQL queries involving ctid for this).\nBenchmarkSQL's initial bulk loading is performed by concurrent workers\nthat insert related groups of tuples into tables, so that we start out\nwith a realistic amount of old orders to refer back to, etc. I can\nclearly observe that various natural groupings (e.g., grouping order\nlines by order number, grouping customers by district + warehouse)\nactually get initially inserted in a way that leaves tuples in a\ngrouping spread around an excessive number of heap blocks. For\nexample, while most order lines do fit on one block, there is a\nsignificant minority of orderlines that span two or more blocks for\nthe master branch case. Whereas with the FSM artificially disabled,\nthe heap relation looks more \"pristine\" in that related tuples are\nlocated on the same blocks (or at least on neighboring blocks). It's\npossible that one orderline will span two neighboring blocks here, but\nit will never span three or more blocks. 
Each order has 5 - 15 order\nlines, and so I was surprised to see that a small minority or order\nline tuples end up occupying as many as 5 or 7 heap pages on the\nmaster branch (i.e. with the current FSM intact during bulk loading).\n\nThe underlying cause of this \"bulk inserts are surprisingly\nindifferent to locality\" issue seems to be that heap am likes to\nremember small amounts of space from earlier pages when the backend\ncouldn't fit one last tuple on an earlier target page (before\nallocating a new page that became the new relcache target page in\nturn). This is penny wise and pound foolish, because we're eagerly\nusing a little bit more space in a case where we are applying a heap\nfill factor anyway. I think that we also have a number of related\nproblems.\n\nIt seems to me that we don't make enough effort to store related heap\ntuples together -- both during bulk inserts like this, but also during\nsubsequent updates that cannot fit successor tuples on the same heap\npage. The current design of the FSM seems to assume that it doesn't\nmatter where the free space comes from, as long as you get it from\nsomewhere and as long as fill factor isn't violated -- it cares about\nthe letter of the fill factor law without caring about its spirit or\nintent.\n\nIf the FSM tried to get free space from a close-by block, then we\nmight at least see related updates that cannot fit a successor tuple\non the same block behave in a coordinated fashion. We might at least\nhave both updates relocate the successor tuple to the same\nmostly-empty block -- they both notice the same nearby free block, so\nboth sets of successor tuples end up going on the same most-empty\nblock. 
The two updating backends don't actually coordinate, of course\n-- this just happens as a consequence of looking for nearby free\nspace.\n\nThe FSM should probably be taught to treat pages as free space\ncandidates (candidates to give out free space from) based on more\nsophisticated, locality-aware criteria. The FSM should care about the\n*rate of change* for a block over time. Suppose you have heap fill\nfactor set to 90. Once a heap block reaches fill factor% full, it\nought to not be used to insert new tuples unless and until the used\nspace on the block shrinks *significantly* -- the free space is now\nsupposed to be reserved. It should not be enough for the space used on\nthe page to shrink by just 1% (to 89%). Maybe it should have to reach\nas low as 50% or less before we flip it back to \"free to take space\nfrom for new unrelated tuples\". The idea is that fill factor space is\ntruly reserved for updates -- that should be \"sticky\" for all of this\nto work well.\n\nWhat's the point in having the concept of a heap fill factor at all if\nwe don't make any real effort to enforce that the extra free space\nleft behind is used as intended, for updates of tuples located on the\nsame heap page?\n\nThoughts?\n\n[1] https://github.com/petergeoghegan/benchmarksql/commit/3ef4fe71077b40f56b91286d4b874a15835c241e\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 20 Aug 2020 16:47:53 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Problems with the FSM, heap fillfactor, and temporal locality"
},
{
"msg_contents": "On Fri, Aug 21, 2020 at 2:48 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> I'm concerned about how the FSM gives out pages to heapam. Disabling\n> the FSM entirely helps TPC-C/BenchmarkSQL, which uses non-default heap\n> fillfactors for most tables [1].\n\nHi Peter,\n\nInteresting stuff. Is lower-than-default fillfactor important for the\nbehavior you see?\n\n> located on the same blocks (or at least on neighboring blocks). It's\n> possible that one orderline will span two neighboring blocks here, but\n> it will never span three or more blocks. Each order has 5 - 15 order\n> lines, and so I was surprised to see that a small minority or order\n> line tuples end up occupying as many as 5 or 7 heap pages on the\n> master branch (i.e. with the current FSM intact during bulk loading).\n\nHmm. You focus on FSM, but I'm thinking also multiple VM bits\npotentially got cleared (maybe not this benchmark but other\nscenarios).\n\n> If the FSM tried to get free space from a close-by block, then we\n> might at least see related updates that cannot fit a successor tuple\n> on the same block behave in a coordinated fashion. We might at least\n> have both updates relocate the successor tuple to the same\n> mostly-empty block -- they both notice the same nearby free block, so\n> both sets of successor tuples end up going on the same most-empty\n> block. The two updating backends don't actually coordinate, of course\n> -- this just happens as a consequence of looking for nearby free\n> space.\n\nI'm not sure I follow the \"close-by\" criterion. It doesn't seem\nimmediately relevant to what I understand as the main problem you\nfound, of free space. In other words, if we currently write to 5\nblocks, but with smarter FSM logic we can find a single block, it\nseems that should be preferred over close-by blocks each with less\nspace? Or are you envisioning scoring by both free space and distance\nfrom the original block?\n\n> supposed to be reserved. 
It should not be enough for the space used on\n> the page to shrink by just 1% (to 89%). Maybe it should have to reach\n> as low as 50% or less before we flip it back to \"free to take space\n> from for new unrelated tuples\". The idea is that fill factor space is\n> truly reserved for updates -- that should be \"sticky\" for all of this\n> to work well.\n\nMakes sense. If we go this route, I wonder if we should keep the\ncurrent behavior and use any free space if the fillfactor is 100%,\nsince that's in line with the intention. Also, the 50% (or whatever)\nfigure could be scaled depending on fillfactor. As in, check if\nfreespace > (100% - fillfactor * 0.6), or something.\n\nI'm not sure how to distinguish blocks that have never reached\nfillfactor vs. ones that did but haven't gained back enough free space\nto be considered again. Naively, we could start by assuming the last\nblock can always be filled up to fillfactor, but earlier blocks must\nuse the stricter rule. That's easy since we already try the last block\nanyway before extending the relation.\n\nThe flip side of this is: Why should vacuum be in a hurry to dirty a\npage, emit WAL, and update the FSM if it only removes one dead tuple?\nThis presentation [1] (pages 35-43) from Masahiko Sawada had the idea\nof a \"garbage map\", which keeps track of which parts of the heap have\nthe most dead space, and focus I/O efforts on those blocks first. It\nmay or may not be worth the extra complexity by itself, but it seems\nit would work synergistically with your proposal: Updating related\ntuples would concentrate dead tuples on fewer pages, and vacuums would\nmore quickly free up space that can actually be used to store multiple\nnew tuples.\n\n[1] https://www.slideshare.net/masahikosawada98/vacuum-more-efficient-than-ever-99916671\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 21 Aug 2020 13:09:59 +0300",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problems with the FSM, heap fillfactor, and temporal locality"
},
{
"msg_contents": "Hi John,\n\nOn Fri, Aug 21, 2020 at 3:10 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> Interesting stuff. Is lower-than-default fillfactor important for the\n> behavior you see?\n\nIt's hard to say. It's definitely not important as far as the initial\nbulk loading behavior is concerned (the behavior where related tuples\nget inserted onto multiple disparate tuples all too often). That will\nhappen with any fill factor, including the default/100.\n\nI'm concerned about heap fill factor in particular because I suspect\nthat that doesn't really work sensibly.\n\nTo give you some concrete idea of the benefits, I present a\npg_stat_database from the master branch after 6 hours of BenchmarkSQL\nwith a rate limit:\n\n-[ RECORD 1 ]---------+------------------------------\ndatid | 13,619\ndatname | postgres\nnumbackends | 3\nxact_commit | 45,902,031\nxact_rollback | 200,131\nblks_read | 662,677,079\nblks_hit | 24,790,989,538\ntup_returned | 30,054,930,608\ntup_fetched | 13,722,542,025\ntup_inserted | 859,165,629\ntup_updated | 520,611,959\ntup_deleted | 20,074,897\nconflicts | 0\ntemp_files | 88\ntemp_bytes | 18,849,890,304\ndeadlocks | 0\nchecksum_failures |\nchecksum_last_failure |\nblk_read_time | 124,233,831.492\nblk_write_time | 8,588,876.871\nstats_reset | 2020-08-20 13:51:08.351036-07\n\nHere is equivalent output for my simple patch that naively disables the FSM:\n\n-[ RECORD 1 ]---------+------------------------------\ndatid | 13,619\ndatname | postgres\nnumbackends | 3\nxact_commit | 46,369,235\nxact_rollback | 201,919\nblks_read | 658,267,665\nblks_hit | 19,980,524,578\ntup_returned | 30,804,856,896\ntup_fetched | 11,839,865,936\ntup_inserted | 861,798,911\ntup_updated | 525,895,435\ntup_deleted | 20,277,618\nconflicts | 0\ntemp_files | 88\ntemp_bytes | 18,849,439,744\ndeadlocks | 0\nchecksum_failures |\nchecksum_last_failure |\nblk_read_time | 117,167,612.616\nblk_write_time | 7,873,922.175\nstats_reset | 2020-08-20 
13:50:51.72056-07\n\nNote that there is a ~20% reduction in blks_hit here, even though the\npatch does ~1% more transactions (the rate limiting doesn't work\nperfectly). There is also a ~5.5% reduction in aggregate\nblk_read_time, and a ~9% reduction in blk_write_time. I'd say that\nthat's pretty significant.\n\nI also see small but significant improvements in transaction latency,\nparticularly with 90th percentile latency.\n\n> Hmm. You focus on FSM, but I'm thinking also multiple VM bits\n> potentially got cleared (maybe not this benchmark but other\n> scenarios).\n\nI'm focussed on how heapam interacts with the FSM, and its effects on\nlocality. It's definitely true that this could go on to affect how the\nvisibility map gets set -- we could set fewer bits unnecessarily. And\nit probably has a number of other undesirable consequences that are\nhard to quantify. Clearly there are many reasons why making the\nphysical database layout closer to that of the logical database is a\ngood thing. I probably have missed a few.\n\n> I'm not sure I follow the \"close-by\" criterion. It doesn't seem\n> immediately relevant to what I understand as the main problem you\n> found, of free space. In other words, if we currently write to 5\n> blocks, but with smarter FSM logic we can find a single block, it\n> seems that should be preferred over close-by blocks each with less\n> space? Or are you envisioning scoring by both free space and distance\n> from the original block?\n\nI am thinking of doing both at the same time.\n\nThink of a table with highly skewed access. There is a relatively\nsmall number of hot spots -- parts of the primary key's key space that\nare consistently affected by many skewed updates (with strong heap\ncorrelation to primary key order). We simply cannot ever hope to avoid\nmigrating heavily updated rows to new pages -- too much contention in\nthe hot spots for that. 
But, by 1) Not considering pages\nfree-space-reclaimable until the free space reaches ~50%, and 2)\npreferring close-by free pages, we avoid (or at least delay)\ndestroying locality of access. There is a much better chance that rows\nfrom logically related hot spots will all migrate to the same few\nblocks again and again, with whole blocks becoming free in a\nrelatively predictable, phased fashion. (I'm speculating here, but my\nguess is that this combination will help real world workloads by quite\na bit.)\n\nPreferring close-by blocks in the FSM is something that there was\ndiscussion of when the current FSM implementation went in back in 8.4.\nI am almost certain that just doing that will not help. If it helps at\nall then it will help by preserving locality as tuple turnover takes\nplace over time, and I think that you need to be clever about \"reuse\ngranularity\" in order for that to happen. We're optimizing the entire\n\"lifecycle\" of logically related tuples whose relatedness is embodied\nby their initial physical position following an initial insert (before\nmany subsequent updates take place that risk destroying locality).\n\n> Makes sense. If we go this route, I wonder if we should keep the\n> current behavior and use any free space if the fillfactor is 100%,\n> since that's in line with the intention. Also, the 50% (or whatever)\n> figure could be scaled depending on fillfactor. As in, check if\n> freespace > (100% - fillfactor * 0.6), or something.\n\nRight. Or it could be another reloption.\n\n> I'm not sure how to distinguish blocks that have never reached\n> fillfactor vs. ones that did but haven't gained back enough free space\n> to be considered again. Naively, we could start by assuming the last\n> block can always be filled up to fillfactor, but earlier blocks must\n> use the stricter rule. 
That's easy since we already try the last block\n> anyway before extending the relation.\n\nI was thinking of explicitly marking blocks as \"freeable\", meaning\nthat the FSM will advertise their free space. This isn't self-evident\nfrom the amount of free space on the page alone, since we need to\ndistinguish between at least two cases: the case where a page has yet\nto apply fill factor for the first time (which may still be close to\nfillfactor% full) versus the case where the page did reach fillfactor,\nbut then had a small amount of space freed. I think that the FSM ought\nto give out space in the former case, but not in the latter case. Even\nthough an identical amount of free space might be present in either\ncase.\n\n> The flip side of this is: Why should vacuum be in a hurry to dirty a\n> page, emit WAL, and update the FSM if it only removes one dead tuple?\n> This presentation [1] (pages 35-43) from Masahiko Sawada had the idea\n> of a \"garbage map\", which keeps track of which parts of the heap have\n> the most dead space, and focus I/O efforts on those blocks first. It\n> may or may not be worth the extra complexity by itself, but it seems\n> it would work synergistically with your proposal: Updating related\n> tuples would concentrate dead tuples on fewer pages, and vacuums would\n> more quickly free up space that can actually be used to store multiple\n> new tuples.\n\nI agree that that seems kind of related. I'm trying to concentrate\ngarbage from skewed updates in fewer blocks. (Same with the skewed\nsuccessor tuples.)\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 21 Aug 2020 10:52:52 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Problems with the FSM, heap fillfactor, and temporal locality"
},
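The page-reuse heuristics discussed in the message above (a ~50% reclaim threshold, the fillfactor-scaled variant `freespace > (100% - fillfactor * 0.6)`, and the explicit "freeable" marking) can be combined into one small decision function. This is a speculative sketch, not PostgreSQL code; the function name, parameters, and the 0.6 scaling constant are assumptions taken from the messages, with fillfactor 100 keeping today's use-any-space behavior:

```c
#include <stdbool.h>

/*
 * Sketch: should the FSM advertise this heap page's free space?
 * "reached_fillfactor" plays the role of the proposed "freeable" flag:
 * a page that has never filled to fillfactor keeps advertising its
 * space, while a page that did fill up stays hidden until enough space
 * has been reclaimed.  All names and constants are assumed.
 */
static bool
page_is_reusable(double free_frac,      /* fraction of page currently free */
                 double fillfactor,     /* heap fillfactor, 0.0 - 1.0 */
                 bool reached_fillfactor)
{
    if (!reached_fillfactor)
        return true;            /* still filling up for the first time */

    if (fillfactor >= 1.0)
        return free_frac > 0.0; /* fillfactor 100: keep current behavior */

    /* scaled threshold: require free space > 100% - fillfactor * 0.6 */
    return free_frac > (1.0 - fillfactor * 0.6);
}
```

With fillfactor 0.9, for example, a page that once reached fillfactor is only re-advertised after more than 46% of it is free again, while a freshly extended page is always usable.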
{
"msg_contents": "On Fri, Aug 21, 2020 at 8:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> Note that there is a ~20% reduction in blks_hit here, even though the\n> patch does ~1% more transactions (the rate limiting doesn't work\n> perfectly). There is also a ~5.5% reduction in aggregate\n> blk_read_time, and a ~9% reduction in blk_write_time. I'd say that\n> that's pretty significant.\n\nIndeed.\n\n> Preferring close-by blocks in the FSM is something that there was\n> discussion of when the current FSM implementation went in back in 8.4.\n\nRight, I found a pretty long one here:\n\nhttps://www.postgresql.org/message-id/flat/1253201179.9666.174.camel%40ebony.2ndQuadrant\n\n> > I'm not sure how to distinguish blocks that have never reached\n> > fillfactor vs. ones that did but haven't gained back enough free space\n> > to be considered again. Naively, we could start by assuming the last\n> > block can always be filled up to fillfactor, but earlier blocks must\n> > use the stricter rule. That's easy since we already try the last block\n> > anyway before extending the relation.\n>\n> I was thinking of explicitly marking blocks as \"freeable\", meaning\n> that the FSM will advertise their free space. This isn't self-evident\n> from the amount of free space on the page alone, since we need to\n> distinguish between at least two cases: the case where a page has yet\n> to apply fill factor for the first time (which may still be close to\n> fillfactor% full) versus the case where the page did reach fillfactor,\n> but then had a small amount of space freed. I think that the FSM ought\n> to give out space in the former case, but not in the latter case. Even\n> though an identical amount of free space might be present in either\n> case.\n\nI imagine you're suggesting to make this change in the FSM data? I'm\nthinking we could change the category byte to a signed integer, and\nreduce FSM_CATEGORIES to 128. 
(That gives us 64-byte granularity which\ndoesn't sound bad, especially if we're considering ignoring free space\nfor inserts until we get a couple kilobytes back.) The absolute value\nrepresents the actual space. A negative number would always compare as\nless, so use the negative range to mark the page as not usable. A\nfresh page will have positive-numbered categories until fill_factor is\nreached, at which point we flip the sign to negative. When the used\nspace gets below \"min_fill_factor\", flip the sign to mark it usable.\nUpdates would have to be taught to read the absolute value. Managing\nthe math might be a little tricky, but maybe that could be contained.\n\nOne possible disadvantage is that negative values would bubble up to\nhigher levels in the opposite way that positive ones do, but maybe we\ndon't care since the pages are marked unusable anyway. All we care about is\nthat all negative numbers give the correct binary comparison when we\nsearch for available space. We could also reserve the high bit as a\nflag (1 = usable), and use the lower bits for the value, but I'm not\nsure that's better.\n\nWe could also preserve the 255 categories as is, but add a second byte\nfor the flag. If we could imagine other uses for a new byte, this\nmight be good, but would make the FSM much bigger, which doesn't sound\nattractive at all.\n\nAny change of the FSM file would require pg_upgrade to rewrite the\nFSM, but it still doesn't seem like a lot of code.\n\nOther ideas?\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 24 Aug 2020 16:38:06 +0300",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problems with the FSM, heap fillfactor, and temporal locality"
},
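The signed-category encoding proposed in the message above can be sketched as follows. This is a hypothetical illustration (names and constants assumed, not the on-disk FSM format): 128 categories at 64-byte granularity, with the sign carrying the usable/not-usable state. The sketch also surfaces one wrinkle: category 0 has no negative form, so a nearly-full unusable page would need a reserved value in a real implementation.

```c
#include <stdbool.h>
#include <stdlib.h>             /* abs */

#define SKETCH_FSM_CATEGORIES 128
#define SKETCH_BLCKSZ 8192
#define SKETCH_SLOT_BYTES (SKETCH_BLCKSZ / SKETCH_FSM_CATEGORIES)   /* 64 */

typedef signed char fsm_cat;    /* one FSM slot, now signed */

/* Encode free bytes into a category; a negative value marks the page
 * as not usable for new inserts while preserving the magnitude. */
static fsm_cat
encode_cat(int free_bytes, bool usable)
{
    int cat = free_bytes / SKETCH_SLOT_BYTES;

    if (cat > SKETCH_FSM_CATEGORIES - 1)
        cat = SKETCH_FSM_CATEGORIES - 1;
    /* caveat: cat == 0 cannot be negated, so "unusable and nearly full"
     * would need special treatment in a real implementation */
    return usable ? (fsm_cat) cat : (fsm_cat) -cat;
}

/* Updates read the absolute value to recover the actual free space. */
static int
decode_free_bytes(fsm_cat cat)
{
    return abs((int) cat) * SKETCH_SLOT_BYTES;
}
```

Because any negative slot compares below any positive one, the existing greater-than-or-equal category searches would skip unusable pages without further changes.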
{
"msg_contents": "On Mon, Aug 24, 2020 at 6:38 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> On Fri, Aug 21, 2020 at 8:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Note that there is a ~20% reduction in blks_hit here, even though the\n> > patch does ~1% more transactions (the rate limiting doesn't work\n> > perfectly). There is also a ~5.5% reduction in aggregate\n> > blk_read_time, and a ~9% reduction in blk_write_time. I'd say that\n> > that's pretty significant.\n>\n> Indeed.\n\nMost of this seems to come from the new_orders table, which has heap\npages that are continually inserted into and then deleted a little\nlater on. new_orders is a bit like a queue that never gets too big. It\nis probably the component of TPC-C where we have the most room for\nimprovement, fragmentation-wise. OTOH, despite all the churn the high\nwatermark size of the new_orders table isn't all that high -- maybe\n~50MB with a 1TB database. So it's not like we'll save very many\nshared_buffers misses there.\n\n> > Preferring close-by blocks in the FSM is something that there was\n> > discussion of when the current FSM implementation went in back in 8.4.\n>\n> Right, I found a pretty long one here:\n>\n> https://www.postgresql.org/message-id/flat/1253201179.9666.174.camel%40ebony.2ndQuadrant\n\nThanks for finding that.\n\n> I imagine you're suggesting to make this change in the FSM data? I'm\n> thinking we could change the category byte to a signed integer, and\n> reduce FSM_CATEGORIES to 128.\n\nYeah, something like that. I don't think that we need very many\ndistinct FSM_CATEGORIES. Even 128 seems like way more than could ever\nbe needed.\n\n> Any change of the FSM file would require pg_upgrade to rewrite the\n> FSM, but it still doesn't seem like a lot of code.\n\nI think that the sloppy approach to locking for the\nfsmpage->fp_next_slot field in functions like fsm_search_avail() (i.e.\nnot using real atomic ops, even though we could) is one source of\nproblems here. 
That might end up necessitating fixing the on-disk\nformat, just to get the FSM to behave sensibly -- assuming that the\nvalue won't be too stale in practice is extremely dubious.\n\nThis fp_next_slot business interacts poorly with the \"extend a\nrelation by multiple blocks\" logic added by commit 719c84c1be5 --\nconcurrently inserting backends are liable to get the same heap block\nfrom the FSM, causing \"collisions\". That almost seems like a bug IMV.\nWe really shouldn't be giving out the same block twice, but that's what\nmy own custom instrumentation shows happens here. With atomic ops, it\nisn't a big deal to restart using a compare-and-swap at the end (when\nwe set/reset fp_next_slot for other backends).\n\n> Other ideas?\n\nI've been experimenting with changing the way that we enforce heap\nfill factor with calls to heap_insert() (and even heap_update()) that\nhappen to occur at a \"natural temporal boundary\". This works by\nremembering an XID alongside the target block in the relcache when the\ntarget block is set. When we have an existing target block whose XID\ndoes not match our backend's current XID (i.e. it's an old XID for the\nbackend), then that means we're at one of these boundaries. We require\nthat the page has a little more free space before we'll insert on it\nwhen at a boundary. If we barely have enough space to insert the\nincoming heap tuple, and it's the first of a few tuples the\ntransaction will ultimately insert, then we should start early on a\nnew page instead of using the last little bit of space (note that the\n\"last little bit\" of space does not include the space left behind by\nfill factor). The overall effect is that groups of related tuples are\nmuch less likely to span a heap page boundary unless and until we have\nlots of updates -- though maybe not even then. 
I think that it's very\ncommon for transactions to insert a group of 2 - 15 logically related\ntuples into a table at a time.\n\nRoughly speaking, you can think of this as the heapam equivalent of\nthe nbtree page split choice logic added by commit fab25024. We ought\nto go to at least a little bit of effort to minimize the number of\ndistinct XIDs that are present on each heap page (in the tuple\nheaders). We can probably figure out heuristics that result in\nrespecting heap fill factor on average, while giving inserts (and even\nnon-HOT updates) a little wiggle room when it comes to heap page\nboundaries.\n\nBy applying both of these techniques together (locality/page split\nthing and real atomic ops for fp_next_slot) the prototype patch I'm\nworking on mostly restores the system's current ability to reuse space\n(as measured by the final size of relations when everything is done),\nwhile maintaining most of the performance benefits of not using the\nFSM at all. The code is still pretty rough, though.\n\nI haven't decided how far to pursue this. It's not as if there are\nthat many ways to make TPC-C go 5%+ faster left; it's very\nwrite-heavy, and stresses many different parts of the system all at\nonce. I'm sure that anything like my current prototype patch will be\ncontroversial, though. Maybe it will be acceptable if we only change\nthe behavior for people that explicitly set heap fillfactor.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 24 Aug 2020 19:17:16 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Problems with the FSM, heap fillfactor, and temporal locality"
},
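The "natural temporal boundary" mechanism described in the message above can be sketched roughly like this. It is a speculative illustration of the heuristic, not the actual prototype: the struct, the function, and the amount of extra headroom demanded at a boundary (here, arbitrarily, room for three tuples) are all assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;
typedef uint32_t BlockNumber;

/* Hypothetical relcache state: the target block plus the XID that set it. */
typedef struct TargetBlockHint
{
    BlockNumber block;
    TransactionId setter_xid;
} TargetBlockHint;

/*
 * Decide whether the cached target block should be used for a new tuple.
 * When the current XID differs from the one that set the hint, we are at
 * a "natural temporal boundary": demand extra headroom, so a new
 * transaction's group of related tuples starts on a fresh page rather
 * than squeezing into the last scraps of the old one.
 */
static bool
target_block_usable(const TargetBlockHint *hint, TransactionId cur_xid,
                    int page_free_bytes, int tuple_len)
{
    int needed = tuple_len;

    if (hint->setter_xid != cur_xid)
        needed = tuple_len * 3;     /* boundary: require extra headroom */

    return page_free_bytes >= needed;
}
```

Within a single transaction the page is filled as today; only the first insert of a new transaction is held to the stricter test.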
{
"msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Mon, Aug 24, 2020 at 6:38 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> > Other ideas?\n> \n> I've been experimenting with changing the way that we enforce heap\n> fill factor with calls to heap_insert() (and even heap_update()) that\n> happen to occur at a \"natural temporal boundary\". This works by\n> remembering an XID alongside the target block in the relcache when the\n> target block is set. When we have an existing target block whose XID\n> does not match our backend's current XID (i.e. it's an old XID for the\n> backend), then that means we're at one of these boundaries. We require\n> that the page has a little more free space before we'll insert on it\n> when at a boundary. If we barely have enough space to insert the\n> incoming heap tuple, and it's the first of a few tuples the\n> transaction will ultimately insert, then we should start early on a\n> new page instead of using the last little bit of space (note that the\n> \"last little bit\" of space does not include the space left behind by\n> fill factor). The overall effect is that groups of related tuples are\n> much less likely to span a heap page boundary unless and until we have\n> lots of updates -- though maybe not even then. I think that it's very\n> common for transactions to insert a group of 2 - 15 logically related\n> tuples into a table at a time.\n\nThis all definitely sounds quite interesting and the idea to look at the\nXID to see if we're in the same transaction and therefore likely\ninserting a related tuple certainly makes some sense. 
While I get that\nit might not specifically work with TPC-C, I'm wondering about if we\ncould figure out how to make a multi-tuple INSERT use\nheap/table_multi_insert (which seems to only be used by COPY currently,\nand internally thanks to the recent work to use it for some catalog\ntables) and then consider the size of the entire set of tuples being\nINSERT'd when working to find a page, or deciding if we should extend\nthe relation.\n\n> Roughly speaking, you can think of this as the heapam equivalent of\n> the nbtree page split choice logic added by commit fab25024. We ought\n> to go to at least a little bit of effort to minimize the number of\n> distinct XIDs that are present on each heap page (in the tuple\n> headers). We can probably figure out heuristics that result in\n> respecting heap fill factor on average, while giving inserts (and even\n> non-HOT updates) a little wiggle room when it comes to heap page\n> boundaries.\n\nAgreed.\n\n> By applying both of these techniques together (locality/page split\n> thing and real atomic ops for fp_next_slot) the prototype patch I'm\n> working on mostly restores the system's current ability to reuse space\n> (as measured by the final size of relations when everything is done),\n> while maintaining most of the performance benefits of not using the\n> FSM at all. The code is still pretty rough, though.\n> \n> I haven't decided how far to pursue this. It's not as if there are\n> that many ways to make TPC-C go 5%+ faster left; it's very\n> write-heavy, and stresses many different parts of the system all at\n> once. I'm sure that anything like my current prototype patch will be\n> controversial, though. Maybe it will be acceptable if we only change\n> the behavior for people that explicitly set heap fillfactor.\n\nGetting a 5% improvement is pretty exciting, very cool and seems worth\nspending effort on.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 25 Aug 2020 09:21:48 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Problems with the FSM, heap fillfactor, and temporal locality"
},
{
"msg_contents": "On Tue, Aug 25, 2020 at 6:21 AM Stephen Frost <sfrost@snowman.net> wrote:\n> This all definitely sounds quite interesting and the idea to look at the\n> XID to see if we're in the same transaction and therefore likely\n> inserting a related tuple certainly makes some sense. While I get that\n> it might not specifically work with TPC-C, I'm wondering about if we\n> could figure out how to make a multi-tuple INSERT use\n> heap/table_multi_insert (which seems to only be used by COPY currently,\n> and internally thanks to the recent work to use it for some catalog\n> tables) and then consider the size of the entire set of tuples being\n> INSERT'd when working to find a page, or deciding if we should extend\n> the relation.\n\nThere are probably quite a variety of ways in which we can capture\nlocality, and I'm sure that I'm only scratching the surface right now.\nI agree that table_multi_insert could definitely be one of them.\n\nJohn said something about concentrating garbage in certain pages\nup-thread. I also wonder if there is some visibility + freeze map\nangle on this.\n\nWhat I see with the benchmark is that the order_line table (the\nlargest table by quite some margin, and one that grows indefinitely)\ndoes not make much use of the visibility map during VACUUM -- even\nthough it's the kind of table + workload that you'd hope and expect\nwould make good use of it if you were looking at it in a real world\nsituation. Each tuple is only inserted once and later updated once, so\nthis is something we really ought to do better on. 
The logs show that\nVACUUM/autovacuum dirties lots of pages, probably due to fragmentation\nfrom free space management (though there are undoubtedly other\nfactors).\n\nThe best \"locality orientated\" reference guide to TPC-C that I've been\nable to find is \"A Modeling Study of the TPC-C Benchmark\", which was\npublished in 1993 by NASA (shortly after the introduction of TPC-C).\nYou can get it from:\n\nhttps://apps.dtic.mil/dtic/tr/fulltext/u2/a264793.pdf (Unfortunately\nthis reproduction is a low quality photocopy -- ACM members can get a\nclear copy.)\n\nIf you think about the TPC-C workload at a high level, and Postgres\ninternals stuff at a low level, and then run the benchmark, you'll\nfind various ways in which we don't live up to our potential. The big\npicture is that the \"physical database\" is not close enough to the\n\"logical database\", especially over time and after a lot of churn.\nThis causes problems all over the place, that look like nothing in\nparticular in profiles.\n\nIt's not that TPC-C is unsympathetic to Postgres in any of the usual\nways -- there are very few UPDATEs that affect indexed columns, which\nare not really a factor at all. There are also no transactions that\nrun longer than 2 seconds (any more than ~50ms per execution is\nexceptional, in fact). We already do a significant amount of the\nnecessary garbage collection opportunistically (by pruning) --\nprobably the vast majority, in fact. In particular, HOT pruning works\nwell, since fill factor has been tuned. It just doesn't work as well\nas you'd hope, in that it cannot stem the tide of fragmentation. And\nnot just because of heapam's use of the FSM.\n\nIf we implemented a simple differential heap tuple compression scheme\nwithin HOT chains (though not across HOT chains/unrelated tuples),\nthen we'd probably do much better -- we could keep the same logical\ntuple on the same heap page much more often, maybe always. 
For\nexample, \"stock\" table is a major source of FPIs, and I bet that this\nis greatly amplified by our failure to keep versions of the same\nfrequently updated tuple together. We can only fit ~25 stock tuples on\neach heap page (with heap fill factor at 95, the BenchmarkSQL\ndefault), so individual tuples are ~320 bytes (including tuple\nheader). If we found a way to store the changed columns for successor\ntuples within a HOT chain, then we would do much better -- without\nchanging the HOT invariant (or anything else that's truly scary). If\nour scheme worked by abusing the representation that we use for NULL\nvalues in the successor/HOT tuples (which is not terribly space\nefficient), then we could still store about 6 more versions of each\nstock tuple on the same page -- the actual changed columns are\ntypically much much smaller than the unchanged columns. Our 23/24 byte\ntuple header is usually small potatoes compared to storing unchanged\nvalues several times.\n\nAs I said, the HOT optimization (and opportunistic pruning) already\nwork well with this benchmark. But I think that they'd work a lot\nbetter if we could just temporarily absorb a few extra versions on the\nheap page, so we have enough breathing room to prune before the page\n\"truly fills to capacity\". 
It could help in roughly the same way that\ndeduplication now helps in nbtree indexes with \"version churn\".\n\nI'm also reminded of the nbtree optimization I prototyped recently,\nwhich more or less prevented all unique index bloat provided there is\nno long running transaction:\n\nhttps://postgr.es/m/CAH2-Wzm+maE3apHB8NOtmM=p-DO65j2V5GzAWCOEEuy3JZgb2g@mail.gmail.com\n\n(Yes, \"preventing all unique index bloat provided there is no long\nrunning transaction\" is no exaggeration -- it really prevents all\nbloat related nbtree page splits, even with hundreds of clients, skew,\netc.)\n\nIt seems pretty obvious to me that buying another (say) 2 seconds to\nlet opportunistic pruning run \"before the real damage is done\" can be\nextremely valuable -- we only need to be able to delay a page split\n(which is similar to the case where we cannot continue to store heap\ntuples on the same heap page indefinitely) for a couple of seconds at\na time. We only need to \"stay one step ahead\" of the need to split the\npage (or to migrate a logical heap tuple to a new heap page when it\ncomes to the heap) at any given time -- that alone will totally arrest\nthe problem.\n\nThis is a very important point -- the same set of principles that\nhelped in nbtree can also be effectively applied to heap pages that\nare subject to version churn. (Assuming no long running xacts.)\n\n> Getting a 5% improvement is pretty exciting, very cool and seems worth\n> spending effort on.\n\nI'm already at 5% - 7% now. I bet the differential compression of\ntuples on a HOT chain could buy a lot more than that. The biggest\nemphasis should be placed on stable performance over time, and total\nI/O over time -- that's where we're weakest right now.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 25 Aug 2020 16:41:42 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Problems with the FSM, heap fillfactor, and temporal locality"
},
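A quick back-of-envelope check of the stock-table arithmetic cited in the message above. BLCKSZ 8192 is the standard PostgreSQL page size; the calculation ignores the page header and line pointers, so it is only roughly consistent with the quoted ~320 bytes per tuple (including the tuple header):

```c
/* 8 KB page, fillfactor 95, ~25 stock tuples per page => ~311 bytes of
 * usable space per tuple, in the same ballpark as the ~320 bytes quoted
 * above (header and line-pointer overhead is ignored here). */
enum { SKETCH_BLCKSZ = 8192 };

static int
bytes_per_tuple(double fillfactor, int tuples_per_page)
{
    return (int) (SKETCH_BLCKSZ * fillfactor) / tuples_per_page;
}
```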
{
"msg_contents": "On Tue, Aug 25, 2020 at 5:17 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> I think that the sloppy approach to locking for the\n> fsmpage->fp_next_slot field in functions like fsm_search_avail() (i.e.\n> not using real atomic ops, even though we could) is one source of\n> problems here. That might end up necessitating fixing the on-disk\n> format, just to get the FSM to behave sensibly -- assuming that the\n> value won't be too stale in practice is extremely dubious.\n>\n> This fp_next_slot business interacts poorly with the \"extend a\n> relation by multiple blocks\" logic added by commit 719c84c1be5 --\n> concurrently inserting backends are liable to get the same heap block\n> from the FSM, causing \"collisions\". That almost seems like a bug IMV.\n> We really shouldn't be giving out the same block twice, but that's what\n> my own custom instrumentation shows happens here. With atomic ops, it\n> isn't a big deal to restart using a compare-and-swap at the end (when\n> we set/reset fp_next_slot for other backends).\n\nThe fact that that logic extends by 20 * numwaiters to get optimal\nperformance is a red flag that resources aren't being allocated\nefficiently. I have an idea to ignore fp_next_slot entirely if we have\nextended by multiple blocks: The backend that does the extension\nstores in the FSM root page 1) the number of blocks added and 2) the\nend-most block number. Any request for space will look for a valid\nvalue here first before doing the usual search. If there is then the\nblock to try is based on a hash of the xid. Something like:\n\ncandidate-block = prev-end-of-relation + 1 + (xid % (num-new-blocks))\n\nTo guard against collisions, then peek in the FSM at that slot and if\nit's not completely empty, then search FSM using a \"look-nearby\" API\nand increment a counter every time we collide. 
When the counter gets\nto some-value, clear the special area in the root page so that future\nbackends use the usual search.\n\nI think this would work well with your idea to be more picky if the\nxid stored with the relcache target block doesn't match the current\none.\n\nAlso num-new-blocks above could be scaled down from the actual number\nof blocks added, just to make sure writes aren't happening all over\nthe place.\n\nThere might be holes in this idea, but it may be worth trying to be\nbetter in this area without adding stricter locking.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 26 Aug 2020 11:45:54 +0300",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problems with the FSM, heap fillfactor, and temporal locality"
},
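The scheme proposed in the message above can be sketched as follows. The candidate-block formula is taken directly from the message; the struct holding the hint and the collision bookkeeping around it are assumed illustrations (including the collision limit, which stands in for "some-value"):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t BlockNumber;
typedef uint32_t TransactionId;

/* Hypothetical state stored in the FSM root page after a bulk extension. */
typedef struct BulkExtendHint
{
    BlockNumber prev_end;       /* last block before the extension */
    unsigned num_new_blocks;    /* how many blocks were added */
    unsigned collisions;        /* hint is cleared once this hits a limit */
} BulkExtendHint;

#define COLLISION_LIMIT 8       /* "some-value" from the message; assumed */

/* candidate-block = prev-end-of-relation + 1 + (xid % (num-new-blocks)) */
static BlockNumber
candidate_block(const BulkExtendHint *hint, TransactionId xid)
{
    return hint->prev_end + 1 + (BlockNumber) (xid % hint->num_new_blocks);
}

/* Record a collision; returns true once the hint should be cleared so
 * that future backends fall back to the conventional FSM search. */
static bool
note_collision(BulkExtendHint *hint)
{
    return ++hint->collisions >= COLLISION_LIMIT;
}
```

Spreading backends by XID means concurrent inserters land on different new blocks without touching fp_next_slot, at the cost of occasionally probing a taken slot.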
{
"msg_contents": "On Wed, Aug 26, 2020 at 1:46 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> The fact that that logic extends by 20 * numwaiters to get optimal\n> performance is a red flag that resources aren't being allocated\n> efficiently.\n\nI agree that that's pretty suspicious.\n\n> I have an idea to ignore fp_next_slot entirely if we have\n> extended by multiple blocks: The backend that does the extension\n> stores in the FSM root page 1) the number of blocks added and 2) the\n> end-most block number. Any request for space will look for a valid\n> value here first before doing the usual search. If there is then the\n> block to try is based on a hash of the xid. Something like:\n>\n> candidate-block = prev-end-of-relation + 1 + (xid % (num-new-blocks))\n\nI was thinking of doing something in shared memory, and not using the\nFSM here at all. If we're really giving 20 pages out to each backend,\nwe will probably benefit from explicitly assigning contiguous ranges\nof pages to each backend, and making some effort to respect that they\nown the blocks in some general sense. Hopefully without also losing\naccess to the free space in corner cases (e.g. one of the backends\nhas an error shortly after receiving its contiguous range of blocks).\n\n> To guard against collisions, then peek in the FSM at that slot and if\n> it's not completely empty, then search FSM using a \"look-nearby\" API\n> and increment a counter every time we collide. When the counter gets\n> to some-value, clear the special area in the root page so that future\n> backends use the usual search.\n\nThe backends already use a look nearby API, sort of --\nRecordAndGetPageWithFreeSpace() already behaves that way. 
I'm not sure\nexactly how well it works in practice, but it definitely works to some\ndegree.\n\n> Also num-new-blocks above could be scaled down from the actual number\n> of blocks added, just to make sure writes aren't happening all over\n> the place.\n\nOr scaled up, perhaps.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 1 Sep 2020 15:56:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Problems with the FSM, heap fillfactor, and temporal locality"
},
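The per-backend contiguous-range idea in the message above might look like the following sketch. Everything here is assumed for illustration (the real proposal would live in shared memory); the point is that a backend consumes its range locally and the unused tail can be handed back, e.g. after an error, so the free space is not lost:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t BlockNumber;

/* Hypothetical per-backend allocation of a contiguous range of newly
 * extended blocks. */
typedef struct BlockRange
{
    BlockNumber next;   /* next block this backend may use */
    BlockNumber end;    /* one past the last block in the range */
} BlockRange;

/* Take the next block from the backend's private range; false once the
 * range is exhausted and the backend must fall back to the FSM. */
static bool
range_next_block(BlockRange *r, BlockNumber *blk)
{
    if (r->next >= r->end)
        return false;
    *blk = r->next++;
    return true;
}

/* Blocks that would need returning to a shared pool on error/exit. */
static unsigned
range_unused(const BlockRange *r)
{
    return r->end - r->next;
}
```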
{
"msg_contents": "On Wed, Sep 2, 2020 at 1:57 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Aug 26, 2020 at 1:46 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> > The fact that that logic extends by 20 * numwaiters to get optimal\n> > performance is a red flag that resources aren't being allocated\n> > efficiently.\n>\n> I agree that that's pretty suspicious.\n\nPer Simon off-list some time ago (if I understood him correctly),\ncounting the lock waiters makes the lock contention worse. I haven't\ntried to measure this, but I just had an idea instead to keep track of\nhow many times the table has previously been extended by multiple\nblocks, and extend by a number calculated from that. Something like\n(pow2(2 + num-times-ext-mult-blocks)), with some ceiling perhaps much\nsmaller than 512. Maybe a bit off topic, but the general problem you\nbrought up has many moving parts, as you've mentioned.\n\n> > I have an idea to ignore fp_next_slot entirely if we have\n> > extended by multiple blocks: The backend that does the extension\n> > stores in the FSM root page 1) the number of blocks added and 2) the\n> > end-most block number. Any request for space will look for a valid\n> > value here first before doing the usual search. If there is then the\n> > block to try is based on a hash of the xid. Something like:\n> >\n> > candidate-block = prev-end-of-relation + 1 + (xid % (num-new-blocks))\n>\n> I was thinking of doing something in shared memory, and not using the\n> FSM here at all. If we're really giving 20 pages out to each backend,\n> we will probably benefit from explicitly assigning contiguous ranges\n> of pages to each backend, and making some effort to respect that they\n> own the blocks in some general sense.\n\nThat would give us flexibility and precise control. 
I suspect it would\nalso have more cognitive and maintenance cost, by having more than one\nsource of info.\n\n> Hopefully without also losing\n> access to the free space in corner cases (e.g. one of the backends\n> has an error shortly after receiving its contiguous range of blocks).\n\nRight, you'd need some way of resetting or retiring the shared memory\ninfo when it is no longer useful. That was my thinking with the\ncollision counter -- go back to using the FSM when we no longer have a\nreasonable chance of getting a fresh block.\n\n> > Also num-new-blocks above could be scaled down from the actual number\n> > of blocks added, just to make sure writes aren't happening all over\n> > the place.\n>\n> Or scaled up, perhaps.\n\nI don't think I explained this part well, so here's a concrete\nexample. Let's say 128 blocks were added at once. Then xid % 128 would\ngive a number to be added to the previous last block in the relation,\nso new target block allocations could be anywhere in this 128. By\n\"scale down\", I mean compute (say) xid % 32. That would limit deviance\nof spatial locality for those backends that were waiting on extension.\nDoing the opposite, like xid % 256, would give you blocks past the end\nof the relation. Further thinking out loud, after detecting enough\ncollisions in the first 32, we could iterate through the other ranges\nand finally revert to conventional FSM search.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 2 Sep 2020 16:55:48 +0300",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problems with the FSM, heap fillfactor, and temporal locality"
}
]
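The pow2-based extension heuristic floated in the final message of the thread above — extend by pow2(2 + num-times-ext-mult-blocks) with a ceiling "much smaller than 512" — might look like this sketch; the cap of 64 and the clamp that avoids overshooting it are assumptions:

```c
/* Extend by 2^(2 + n) blocks, where n counts prior multi-block
 * extensions of this table, capped at an assumed ceiling of 64. */
static unsigned
blocks_to_extend(unsigned times_extended_multi)
{
    const unsigned ceiling = 64;
    unsigned n;

    if (times_extended_multi > 8)       /* avoid shifting past the cap */
        times_extended_multi = 8;
    n = 1u << (2 + times_extended_multi);   /* 4, 8, 16, ... */
    return n > ceiling ? ceiling : n;
}
```

The growth depends only on the table's own extension history, avoiding the contended count of lock waiters that the message worries about.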
[
{
"msg_contents": "Hi,\n\nI've attached the patch for $subject.\n\ns/replications lots/replication slots/\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 21 Aug 2020 10:58:27 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Fix typo in procarrary.c"
},
{
"msg_contents": "\n\nOn 2020/08/21 10:58, Masahiko Sawada wrote:\n> Hi,\n> \n> I've attached the patch for $subject.\n> \n> s/replications lots/replication slots/\n\nThanks for the patch!\n\nAlso it's better to s/replications slots/replication slots/ ?\n\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -198,7 +198,7 @@ typedef struct ComputeXidHorizonsResult\n * be removed.\n *\n * This likely should only be needed to determine whether pg_subtrans can\n- * be truncated. It currently includes the effects of replications slots,\n+ * be truncated. It currently includes the effects of replication slots,\n * for historical reasons. But that could likely be changed.\n */\n TransactionId oldest_considered_running;\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 21 Aug 2020 11:17:58 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in procarrary.c"
},
{
"msg_contents": "On Fri, 21 Aug 2020 at 11:18, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/08/21 10:58, Masahiko Sawada wrote:\n> > Hi,\n> >\n> > I've attached the patch for $subject.\n> >\n> > s/replications lots/replication slots/\n>\n> Thanks for the patch!\n>\n> Also it's better to s/replications slots/replication slots/ ?\n>\n> --- a/src/backend/storage/ipc/procarray.c\n> +++ b/src/backend/storage/ipc/procarray.c\n> @@ -198,7 +198,7 @@ typedef struct ComputeXidHorizonsResult\n> * be removed.\n> *\n> * This likely should only be needed to determine whether pg_subtrans can\n> - * be truncated. It currently includes the effects of replications slots,\n> + * be truncated. It currently includes the effects of replication slots,\n> * for historical reasons. But that could likely be changed.\n> */\n> TransactionId oldest_considered_running;\n>\n\nIndeed. I agree with you.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 21 Aug 2020 12:29:22 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typo in procarrary.c"
},
{
"msg_contents": "\n\nOn 2020/08/21 12:29, Masahiko Sawada wrote:\n> On Fri, 21 Aug 2020 at 11:18, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/08/21 10:58, Masahiko Sawada wrote:\n>>> Hi,\n>>>\n>>> I've attached the patch for $subject.\n>>>\n>>> s/replications lots/replication slots/\n>>\n>> Thanks for the patch!\n>>\n>> Also it's better to s/replications slots/replication slots/ ?\n>>\n>> --- a/src/backend/storage/ipc/procarray.c\n>> +++ b/src/backend/storage/ipc/procarray.c\n>> @@ -198,7 +198,7 @@ typedef struct ComputeXidHorizonsResult\n>> * be removed.\n>> *\n>> * This likely should only be needed to determine whether pg_subtrans can\n>> - * be truncated. It currently includes the effects of replications slots,\n>> + * be truncated. It currently includes the effects of replication slots,\n>> * for historical reasons. But that could likely be changed.\n>> */\n>> TransactionId oldest_considered_running;\n>>\n> \n> Indeed. I agree with you.\n\nThanks! So I pushed both fixes.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 21 Aug 2020 12:39:37 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in procarrary.c"
},
{
"msg_contents": "On Fri, 21 Aug 2020 at 12:39, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/08/21 12:29, Masahiko Sawada wrote:\n> > On Fri, 21 Aug 2020 at 11:18, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/08/21 10:58, Masahiko Sawada wrote:\n> >>> Hi,\n> >>>\n> >>> I've attached the patch for $subject.\n> >>>\n> >>> s/replications lots/replication slots/\n> >>\n> >> Thanks for the patch!\n> >>\n> >> Also it's better to s/replications slots/replication slots/ ?\n> >>\n> >> --- a/src/backend/storage/ipc/procarray.c\n> >> +++ b/src/backend/storage/ipc/procarray.c\n> >> @@ -198,7 +198,7 @@ typedef struct ComputeXidHorizonsResult\n> >> * be removed.\n> >> *\n> >> * This likely should only be needed to determine whether pg_subtrans can\n> >> - * be truncated. It currently includes the effects of replications slots,\n> >> + * be truncated. It currently includes the effects of replication slots,\n> >> * for historical reasons. But that could likely be changed.\n> >> */\n> >> TransactionId oldest_considered_running;\n> >>\n> >\n> > Indeed. I agree with you.\n>\n> Thanks! So I pushed both fixes.\n>\n\nThanks!\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 21 Aug 2020 15:16:44 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typo in procarrary.c"
}
] |
[
{
"msg_contents": "Hello hackers,\r\n\r\nCurrently, if BEFORE TRIGGER causes a partition change, it reports an error 'moving row\r\nto another partition during a BEFORE FOR EACH ROW trigger is not supported' and fails\r\nto execute. I want to try to address this limitation and have made an initial patch to get\r\nfeedback from other hackers.\r\n\r\nThe implemented approach is similar to when a change partition caused by an UPDATE\r\nstatement. If it's a BEFORE INSERT TRIGGER then we just need to insert the row produced\r\nby a trigger to the new partition, and if it's a BEFORE UPDATE TRIGGER we need to delete\r\nthe old tuple and insert the row produced by the trigger to the new partition.\r\n\r\nIn current BEFORE TRIGGER implementation, it reports an error once a trigger result out\r\nof current partition, but I think it should check it after finish all triggers call, and you can\r\nsee the discussion in [1][2]. In the attached patch I have changed this rule, I check the\r\npartition constraint only once after all BEFORE TRIGGERS have been called. If you do not\r\nagree with this way, I can change the implementation.\r\n\r\nAnd another point is that when inserting to new partition caused by BEFORE TRIGGER,\r\nthen it will not trigger the BEFORE TRIGGER on a new partition. I think it's the right way,\r\nwhat's more, I think the UPDATE approach cause partition change should not trigger the\r\nBEFORE TRIGGERS too, you can see discussed on [1].\r\n\r\n[1]https://www.postgresql.org/message-id/2020082017164661079648%40highgo.ca\r\n[2]https://www.postgresql.org/message-id/20200318210213.GA9781@alvherre.pgsql\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca",
"msg_date": "Fri, 21 Aug 2020 15:57:42 +0800",
"msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>",
"msg_from_op": true,
"msg_subject": "[POC]Enable tuple change partition caused by BEFORE TRIGGER"
},
{
"msg_contents": "On Fri, Aug 21, 2020 at 1:28 PM movead.li@highgo.ca <movead.li@highgo.ca> wrote:\n>\n> Hello hackers,\n>\n> Currently, if BEFORE TRIGGER causes a partition change, it reports an error 'moving row\n> to another partition during a BEFORE FOR EACH ROW trigger is not supported' and fails\n> to execute. I want to try to address this limitation and have made an initial patch to get\n> feedback from other hackers.\n\nI am not opposed to removing that limitation, it would be good to know\nthe usecase we will solve. Trying to change a partition key in a\nbefore trigger on a partition looks dubious to me. If at all it should\nbe done by a partitioned table level trigger and not a partition level\ntrigger.\n\n>\n>\n> The implemented approach is similar to when a change partition caused by an UPDATE\n>\n> statement. If it's a BEFORE INSERT TRIGGER then we just need to insert the row produced\n>\n> by a trigger to the new partition, and if it's a BEFORE UPDATE TRIGGER we need to delete\n>\n> the old tuple and insert the row produced by the trigger to the new partition.\n\nIf the triggers are not written carefully, this could have ping-pong\neffect, where the row keeps on bouncing from one partition to the\nother. Obviously it will be user who must be blamed for this but with\nthousands of partitions it's not exactly easy to keep track of the\ntrigger's effects. If we prohibited the row movement because of before\ntrigger, users don't need to worry about it at all.\n\n>\n>\n> In current BEFORE TRIGGER implementation, it reports an error once a trigger result out\n>\n> of current partition, but I think it should check it after finish all triggers call, and you can\n>\n> see the discussion in [1][2]. In the attached patch I have changed this rule, I check the\n>\n> partition constraint only once after all BEFORE TRIGGERS have been called. If you do not\n>\n> agree with this way, I can change the implementation.\n\nI think this change may be good irrespective of the row movement change.\n\n>\n>\n> And another point is that when inserting to new partition caused by BEFORE TRIGGER,\n>\n> then it will not trigger the BEFORE TRIGGER on a new partition. I think it's the right way,\n>\n> what's more, I think the UPDATE approach cause partition change should not trigger the\n>\n> BEFORE TRIGGERS too, you can see discussed on [1].\n>\n\nThat looks dubious to me. Before row triggers may be used in several\ndifferent ways, for auditing, for verification of inserted data or to\nchange some other data based on this change and so on. If we don't\nexecute before row trigger on the partition where the row gets moved,\nall this expected work won't happen. This also needs some background\nabout the usecase which requires this change.\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 21 Aug 2020 17:17:31 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [POC]Enable tuple change partition caused by BEFORE TRIGGER"
},
{
"msg_contents": "On 2020-Aug-21, Ashutosh Bapat wrote:\n\n> On Fri, Aug 21, 2020 at 1:28 PM movead.li@highgo.ca <movead.li@highgo.ca> wrote:\n\n> > In current BEFORE TRIGGER implementation, it reports an error once a\n> > trigger result out of current partition, but I think it should check\n> > it after finish all triggers call, and you can see the discussion in\n> > [1][2]. In the attached patch I have changed this rule, I check the\n> > partition constraint only once after all BEFORE TRIGGERS have been\n> > called. If you do not agree with this way, I can change the\n> > implementation.\n> \n> I think this change may be good irrespective of the row movement change.\n\nYeah, it makes sense to delay the complaint about partition movement\nuntil all triggers have been executed ... although that makes it harder\nto report *which* trigger caused the problem. (It seems pretty bad that\nthe error message that you're changing is not covered in regression\ntests -- mea culpa.)\n\n> > And another point is that when inserting to new partition caused by\n> > BEFORE TRIGGER, then it will not trigger the BEFORE TRIGGER on a new\n> > partition. I think it's the right way, what's more, I think the\n> > UPDATE approach cause partition change should not trigger the BEFORE\n> > TRIGGERS too, you can see discussed on [1].\n> \n> That looks dubious to me.\n\nYeah ...\n\n> Before row triggers may be used in several different ways, for\n> auditing, for verification of inserted data or to change some other\n> data based on this change and so on.\n\nAdmittedly, these things should be done by AFTER triggers, not BEFORE\ntriggers, precisely because you want to do them with the final form of\neach row -- not a form of the row that could still be changed by some\nhypothetical BEFORE trigger that will fire next.\n\nWhat this is saying to me is that we'd need to make sure to run the\nfinal target partition's AFTER triggers, not the original target\npartition. But I'm not 100% about running the BEFORE triggers. Maybe\none way to address this is to check whether the BEFORE triggers in the\nnew target partition are clones; if so then they would have run in the\noriginal target partition and so must not be run, but otherwise they\nhave to.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 26 Aug 2020 13:17:10 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [POC]Enable tuple change partition caused by BEFORE TRIGGER"
},
{
"msg_contents": "On Wed, 26 Aug 2020 at 22:47, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n>\n> What this is saying to me is that we'd need to make sure to run the\n> final target partition's AFTER triggers, not the original target\n> partition.\n\n\nAgreed.\n\n\n> But I'm not 100% about running the BEFORE triggers. Maybe\n> one way to address this is to check whether the BEFORE triggers in the\n> new target partition are clones; if so then they would have run in the\n> original target partition and so must not be run, but otherwise they\n> have to.\n>\n\nThis will work as long as the two BEFORE ROW triggers have the same effect.\nConsider two situations resulting in inserting identical rows 1. row that\nthe before row trigger has redirected to a new partition, say part2 2. a\nrow inserted directly into the part2 - if both these rows are identical\nbefore the BEFORE ROW triggers have been applied, they should remain\nidentical while inserting into part2. Any divergence might be problematic\nfor the application.\n\n-- \nBest Wishes,\nAshutosh",
"msg_date": "Thu, 27 Aug 2020 11:34:45 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [POC]Enable tuple change partition caused by BEFORE TRIGGER"
},
{
"msg_contents": "On 2020-Aug-27, Ashutosh Bapat wrote:\n\n> On Wed, 26 Aug 2020 at 22:47, Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n\n> > But I'm not 100% about running the BEFORE triggers. Maybe\n> > one way to address this is to check whether the BEFORE triggers in the\n> > new target partition are clones; if so then they would have run in the\n> > original target partition and so must not be run, but otherwise they\n> > have to.\n> \n> This will work as long as the two BEFORE ROW triggers have the same effect.\n> Consider two situations resulting in inserting identical rows 1. row that\n> the before row trigger has redirected to a new partition, say part2 2. a\n> row inserted directly into the part2 - if both these rows are identical\n> before the BEFORE ROW triggers have been applied, they should remain\n> identical while inserting into part2. Any divergence might be problematic\n> for the application.\n\nWell, that's why I talk about the trigger being \"clones\" -- with that\nterm, I mean that their definitions have been inherited from a\ndefinition in some ancestor partitioned table, and so they must be\nidentical in the partitions.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 27 Aug 2020 12:09:55 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [POC]Enable tuple change partition caused by BEFORE TRIGGER"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI am sorry for the question which may be already discussed multiple times.\nBut I have not found answer for it neither in internet neither in \npgsql-hackers archieve.\nUPSERT (INSERT ... IN CONFLICT...) clause was added to the Postgres a \nlong time ago.\nAs far as I remember there was long discussions about its syntax and \nfunctionality.\nBut today I found that there is still no way to perform one of the most \nfrequently needed operation:\nlocate record by key and return its autogenerated ID or insert new \nrecord if key is absent.\n\nSomething like this:\n\n create table jsonb_schemas(id serial, schema bytea primary key);\n create index on jsonb_schemas(id);\n insert into jsonb_schemas (schema) values (?) on conflict(schema) do \nnothing returning id;\n\nBut it doesn't work because in case of conflict no value is returned.\nIt is possible to do something like this:\n\n with ins as (insert into jsonb_schemas (schema) values (obj_schema) \non conflict(schema) do nothing returning id) select coalesce((select id \nfrom ins),(select id from jsonb_schemas where schema=obj_schema));\n\nbut it requires extra lookup.\nOr perform update:\n\n insert into jsonb_schemas (schema) values (?) on conflict(schema) do \nupdate set schema=excluded.schema returning id;\n\nBut it is even worse because we have to perform useless update and \nproduce new version.\n\nMay be I missing something, but according to stackoverflow:\nhttps://stackoverflow.com/questions/34708509/how-to-use-returning-with-on-conflict-in-postgresql\nthere is no better solution.\n\nI wonder how it can happen that such popular use case ia not covered by \nPostgresql UPSERT?\nAre there some principle problems with it?\nWhy it is not possible to add one more on-conflict action: SELECT, \nmaking it possible to return data when key is found?\n\nThanks in advance,\nKonstantin\n\n\n\n\n\n\n",
"msg_date": "Sat, 22 Aug 2020 10:16:28 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "INSERT ON CONFLICT and RETURNING"
},
{
"msg_contents": "On Sat, 22 Aug 2020 at 08:16, Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> It is possible to do something like this:\n>\n> with ins as (insert into jsonb_schemas (schema) values (obj_schema)\n> on conflict(schema) do nothing returning id) select coalesce((select id\n> from ins),(select id from jsonb_schemas where schema=obj_schema));\n>\n> but it requires extra lookup.\n\nBut if\n\nINSERT INTO jsonb_schemas (schema) VALUES (obj_schema)\n ON CONFLICT (schema) DO NOTHING RETURNING id\n\nwere to work then that would _also_ require a second lookup, since\n\"id\" is not part of the conflict key that will be used to perform the\nexistence test, so the only difference is it's hidden by the syntax.\n\nGeoff\n\n\n",
"msg_date": "Mon, 24 Aug 2020 11:37:40 +0100",
"msg_from": "Geoff Winkless <pgsqladmin@geoff.dj>",
"msg_from_op": false,
"msg_subject": "Re: INSERT ON CONFLICT and RETURNING"
},
{
"msg_contents": "\n\nOn 24.08.2020 13:37, Geoff Winkless wrote:\n> On Sat, 22 Aug 2020 at 08:16, Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> It is possible to do something like this:\n>>\n>> with ins as (insert into jsonb_schemas (schema) values (obj_schema)\n>> on conflict(schema) do nothing returning id) select coalesce((select id\n>> from ins),(select id from jsonb_schemas where schema=obj_schema));\n>>\n>> but it requires extra lookup.\n> But if\n>\n> INSERT INTO jsonb_schemas (schema) VALUES (obj_schema)\n> ON CONFLICT (schema) DO NOTHING RETURNING id\n>\n> were to work then that would _also_ require a second lookup, since\n> \"id\" is not part of the conflict key that will be used to perform the\n> existence test, so the only difference is it's hidden by the syntax.\n>\n> Geoff\nSorry, I didn't quite understand it.\nIf we are doing such query:\n\nINSERT INTO jsonb_schemas (schema) VALUES (obj_schema)\n ON CONFLICT (schema) DO UPDATE schema=jsonb_schemas.schema RETURNING id\n\n\nThen as far as I understand no extra lookup is used to return ID:\n\n Insert on jsonb_schemas (cost=0.00..0.01 rows=1 width=36) (actual \ntime=0.035..0.036 rows=0 loops=1)\n Conflict Resolution: UPDATE\n Conflict Arbiter Indexes:jsonb_schemas_schema_key\n Conflict Filter: false\n Rows Removed by Conflict Filter: 1\n Tuples Inserted: 0\n Conflicting Tuples: 1\n -> Result (cost=0.00..0.01 rows=1 width=36) (actual \ntime=0.002..0.002 rows=1 loops=1)\n Planning Time: 0.034 ms\n Execution Time: 0.065 ms\n(10 rows)\n\nSo if we are able to efficienty execute query above, why we can not \nwrite query:\n\nINSERT INTO jsonb_schemas (schema) VALUES (obj_schema)\n ON CONFLICT (schema) DO SELECT ID RETURNING id\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 31 Aug 2020 16:53:44 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: INSERT ON CONFLICT and RETURNING"
},
{
"msg_contents": "On 22.08.2020 10:16, Konstantin Knizhnik wrote:\n> Hi hackers,\n>\n> I am sorry for the question which may be already discussed multiple \n> times.\n> But I have not found answer for it neither in internet neither in \n> pgsql-hackers archieve.\n> UPSERT (INSERT ... IN CONFLICT...) clause was added to the Postgres a \n> long time ago.\n> As far as I remember there was long discussions about its syntax and \n> functionality.\n> But today I found that there is still no way to perform one of the \n> most frequently needed operation:\n> locate record by key and return its autogenerated ID or insert new \n> record if key is absent.\n>\n> Something like this:\n>\n> create table jsonb_schemas(id serial, schema bytea primary key);\n> create index on jsonb_schemas(id);\n> insert into jsonb_schemas (schema) values (?) on conflict(schema) do \n> nothing returning id;\n>\n> But it doesn't work because in case of conflict no value is returned.\n> It is possible to do something like this:\n>\n> with ins as (insert into jsonb_schemas (schema) values (obj_schema) \n> on conflict(schema) do nothing returning id) select coalesce((select \n> id from ins),(select id from jsonb_schemas where schema=obj_schema));\n>\n> but it requires extra lookup.\n> Or perform update:\n>\n> insert into jsonb_schemas (schema) values (?) 
on conflict(schema) do \n> update set schema=excluded.schema returning id;\n>\n> But it is even worse because we have to perform useless update and \n> produce new version.\n>\n> May be I missing something, but according to stackoverflow:\n> https://stackoverflow.com/questions/34708509/how-to-use-returning-with-on-conflict-in-postgresql \n>\n> there is no better solution.\n>\n> I wonder how it can happen that such popular use case ia not covered \n> by Postgresql UPSERT?\n> Are there some principle problems with it?\n> Why it is not possible to add one more on-conflict action: SELECT, \n> making it possible to return data when key is found?\n>\n> Thanks in advance,\n> Konstantin\n\nI'm sorry for being intrusive.\nBut can somebody familiar with Postgres upsert mechanism explain me why \ncurrent implementation doesn't support very popular use case:\nlocate record by some unique key and return its primary \n(autogenerated) key if found otherwise insert new tuple.\nI have explained the possible workarounds of the problem above.\nBut all of them looks awful or inefficient.\n\nWhat I am suggesting is just add ON CONFLICT DO SELECT clause:\n\ninsert into jsonb_schemas (schema) values ('one') on conflict(schema) do \nselect returning id;\n\nI attached small patch with prototype implementation of this construction.\nIt seems to be very trivial. 
What's wrong with it?\nAre there some fundamental problems which I do not understand?\n\nBelow is small illustration of how this patch is working:\n\npostgres=# create table jsonb_schemas(id serial, schema bytea primary key);\nCREATE TABLE\npostgres=# create index on jsonb_schemas(id);\nCREATE INDEX\npostgres=# insert into jsonb_schemas (schema) values ('some') on \nconflict(schema) do nothing returning id;\n id\n----\n 1\n(1 row)\n\nINSERT 0 1\npostgres=# insert into jsonb_schemas (schema) values ('some') on \nconflict(schema) do nothing returning id;\n id\n----\n(0 rows)\n\nINSERT 0 0\npostgres=# insert into jsonb_schemas (schema) values ('some') on \nconflict(schema) do select returning id;\n id\n----\n 1\n(1 row)\n\nINSERT 0 1\n\n\nThanks in advance,\nKonstantin",
"msg_date": "Thu, 3 Sep 2020 19:16:14 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: INSERT ON CONFLICT and RETURNING"
},
{
"msg_contents": "There's prior art on this: https://commitfest.postgresql.org/15/1241/\n\n\n.m",
"msg_date": "Thu, 3 Sep 2020 19:30:24 +0300",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": false,
"msg_subject": "Re: INSERT ON CONFLICT and RETURNING"
},
{
"msg_contents": "\n\nOn 03.09.2020 19:30, Marko Tiikkaja wrote:\n> There's prior art on this: https://commitfest.postgresql.org/15/1241/\n>\n>\n> .m\nOoops:(\nThank you.\nI missed it.\n\nBut frankly speaking I still didn't find answer for my question in this \nthread: what are the dangerous scenarios with ON CONFLICT DO NOTHING/SELECT.\nYes, record is not exclusively locked. But I just want to obtain value \nof some column which is not a source of conflict. I do not understand \nwhat can be wrong if some\nother transaction changed this column.\n\nAnd I certainly can't agree with Peter's statement:\n > Whereas here, with ON CONFLICT DO SELECT,\n > I see a somewhat greater risk, and a much, much smaller benefit. A\n > benefit that might actually be indistinguishable from zero.\n\n From my point of view it is quite common use case when we need to \nconvert some long key to small autogenerated record identifier.\nWithout UPSERT we have to perform two queries instead of just one . And \neven with current implementation of INSERT ON CONFLICT...\nwe have to either perform extra lookup, either produce new (useless) \ntuple version.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Thu, 3 Sep 2020 19:52:05 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: INSERT ON CONFLICT and RETURNING"
},
{
"msg_contents": "On Mon, 31 Aug 2020 at 14:53, Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> If we are doing such query:\n>\n> INSERT INTO jsonb_schemas (schema) VALUES (obj_schema)\n> ON CONFLICT (schema) DO UPDATE schema=jsonb_schemas.schema RETURNING id\n>\n>\n> Then as far as I understand no extra lookup is used to return ID:\n\nThe conflict resolution checks the unique index on (schema) and\ndecides whether or not a conflict will exist. For DO NOTHING it\ndoesn't have to get the actual row from the table; however in order\nfor it to return the ID it would have to go and get the existing row\nfrom the table. That's the \"extra lookup\", as you term it. The only\ndifference from doing it with RETURNING id versus WITH... COALESCE()\nas you described is the simpler syntax.\n\nI'm not saying the simpler syntax isn't nice, mind you. I was just\npointing out that it's not inherently any less efficient.\n\nGeoff\n\n\n",
"msg_date": "Thu, 3 Sep 2020 17:56:31 +0100",
"msg_from": "Geoff Winkless <pgsqladmin@geoff.dj>",
"msg_from_op": false,
"msg_subject": "Re: INSERT ON CONFLICT and RETURNING"
},
{
"msg_contents": "On Thu, Sep 3, 2020 at 7:56 PM Geoff Winkless <pgsqladmin@geoff.dj> wrote:\n>\n> On Mon, 31 Aug 2020 at 14:53, Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n> > If we are doing such query:\n> >\n> > INSERT INTO jsonb_schemas (schema) VALUES (obj_schema)\n> > ON CONFLICT (schema) DO UPDATE schema=jsonb_schemas.schema RETURNING id\n> >\n> >\n> > Then as far as I understand no extra lookup is used to return ID:\n>\n> The conflict resolution checks the unique index on (schema) and\n> decides whether or not a conflict will exist. For DO NOTHING it\n> doesn't have to get the actual row from the table; however in order\n> for it to return the ID it would have to go and get the existing row\n> from the table. That's the \"extra lookup\", as you term it. The only\n> difference from doing it with RETURNING id versus WITH... COALESCE()\n> as you described is the simpler syntax.\n\nAs I know, conflict resolution still has to fetch heap tuples, see\n_bt_check_unique(). As I understand it, the issues are as follows.\n1) Conflict resolution uses the dirty snapshot. It's unclear whether\nwe can return this tuple to the user, because the query has a\ndifferent snapshot. Note, that CTE query by Konstantin at thread start\ndoesn't handle all the cases correctly, it can return no rows on\nconflict. We probably should do the trick similar to the EPQ mechanism\nfor UPDATE. For instance, UPDATE ... RETURNING old.* can return the\ntuple, which doesn't match the query snapshot. But INSERT ON CONFLICT\nmight have other caveats in this area, it needs careful analysis.\n2) Checking unique conflicts inside the index am is already the\nencapsulation-breaking hack. Returning the heap tuple for index am\nwould be even worse hack. We probably should refactor this whole area\nbefore.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 3 Sep 2020 20:59:11 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: INSERT ON CONFLICT and RETURNING"
},
{
"msg_contents": "On 9/3/20 6:52 PM, Konstantin Knizhnik wrote:\n> But frankly speaking I still didn't find answer for my question in this \n> thread: what are the dangerous scenarios with ON CONFLICT DO \n> NOTHING/SELECT.\n> Yes, record is not exclusively locked. But I just want to obtain value \n> of some column which is not a source of conflict. I do not understand \n> what can be wrong if some\n> other transaction changed this column.\n> \n> And I certainly can't agree with Peter's statement:\n> > Whereas here, with ON CONFLICT DO SELECT,\n> > I see a somewhat greater risk, and a much, much smaller benefit. A\n> > benefit that might actually be indistinguishable from zero.\n> \n> From my point of view it is quite common use case when we need to \n> convert some long key to small autogenerated record identifier.\n> Without UPSERT we have to perform two queries instead of just one . And \n> even with current implementation of INSERT ON CONFLICT...\n> we have to either perform extra lookup, either produce new (useless) \n> tuple version.\n\nI have no idea about the potential risks here since I am not very \nfamiliar with the ON CONFLICT code, but I will chime in and agree that \nthis is indeed a common use case. Selecting and taking a SHARE lock \nwould also be a nice feature.\n\nAndreas\n\n\n\n",
"msg_date": "Thu, 3 Sep 2020 21:59:55 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: INSERT ON CONFLICT and RETURNING"
},
{
"msg_contents": "\n\nOn 03.09.2020 19:56, Geoff Winkless wrote:\n> On Mon, 31 Aug 2020 at 14:53, Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> If we are doing such query:\n>>\n>> INSERT INTO jsonb_schemas (schema) VALUES (obj_schema)\n>> ON CONFLICT (schema) DO UPDATE schema=jsonb_schemas.schema RETURNING id\n>>\n>>\n>> Then as far as I understand no extra lookup is used to return ID:\n> The conflict resolution checks the unique index on (schema) and\n> decides whether or not a conflict will exist. For DO NOTHING it\n> doesn't have to get the actual row from the table; however in order\n> for it to return the ID it would have to go and get the existing row\n> from the table. That's the \"extra lookup\", as you term it. The only\n> difference from doing it with RETURNING id versus WITH... COALESCE()\n> as you described is the simpler syntax.\nSorry, but there is no exrta lookup in this case.\nBy \"lookup\" I mean index search.\nWhat we are doing in case ON CONFLICT SELECT is just fetching tuple from \nthe buffer.\nSo we are not even loading any data from the disk.\n\nBy in case\n\n with ins as (insert into jsonb_schemas (schema) values (obj_schema) \non conflict(schema) do nothing returning id)\n select coalesce((select id from ins),(select id from jsonb_schemas \nwhere schema=obj_schema));\n\nwe actually execute extra subquery: select id from jsonb_schemas where \nschema=obj_schema:\n\nexplain with ins as (insert into jsonb_schemas (schema) values ('some') \non conflict(schema) do nothing returning id) select coalesce((select id \nfrom ins),(select id from jsonb_schemas where schema='some'));\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Result (cost=8.21..8.21 rows=1 width=4)\n CTE ins\n -> Insert on jsonb_schemas (cost=0.00..0.01 rows=1 width=36)\n Conflict Resolution: NOTHING\n Conflict Arbiter Indexes: jsonb_schemas_pkey\n -> Result (cost=0.00..0.01 rows=1 
width=36)\n InitPlan 2 (returns $2)\n -> CTE Scan on ins (cost=0.00..0.02 rows=1 width=4)\n InitPlan 3 (returns $3)\n -> Index Scan using jsonb_schemas_pkey on jsonb_schemas \njsonb_schemas_1 (cost=0.15..8.17 rows=1 width=4)\n Index Cond: (schema = '\\x736f6d65'::bytea)\n\nIs it critical?\nAt my system average time of executing this query is 104 usec, and with \nON CONFLICT SELECT fix - 82 usec.\nThe difference is no so large, because we in any case insert speculative \ntuple.\nBut it is incorrect to say that \"it's not inherently any less efficient.\"\n\n> I'm not saying the simpler syntax isn't nice, mind you. I was just\n> pointing out that it's not inherently any less efficient.\n>\n> Geoff\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Fri, 4 Sep 2020 12:29:18 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: INSERT ON CONFLICT and RETURNING"
},
{
"msg_contents": "I have performed comparison of different ways of implementing UPSERT in \nPostgres.\nMay be it will be interesting not only for me, so I share my results:\n\nSo first of all initialization step:\n\n create table jsonb_schemas(id serial, schema bytea primary key);\n create unique index on jsonb_schemas(id);\n insert into jsonb_schemas (schema) values ('some') on \nconflict(schema) do nothing returning id;\n\nThen I test performance of getting ID of exitsed schema:\n\n1. Use plpgsql script to avoid unneeded database modifications:\n\ncreate function upsert(obj_schema bytea) returns integer as $$\ndeclare\n obj_id integer;\nbegin\n select id from jsonb_schemas where schema=obj_schema into obj_id;\n if obj_id is null then\n insert into jsonb_schemas (schema) values (obj_schema) on \nconflict(schema) do nothing returning id into obj_id;\n if obj_id is null then\n select id from jsonb_schemas where schema=obj_schema into obj_id;\n end if;\n end if;\n return obj_id;\nend;\n$$ language plpgsql;\n\n------------------------\nupsert-plpgsql.sql:\nselect upsert('some');\n------------------------\npgbench -n -T 100 -M prepared -f upsert-plpgsql.sql postgres\ntps = 45092.241350\n\n2. Use ON CONFLICT DO UPDATE:\n\nupsert-update.sql:\ninsert into jsonb_schemas (schema) values ('some') on conflict(schema) \ndo update set schema='some' returning id;\n------------------------\npgbench -n -T 100 -M prepared -f upsert-update.sql postgres\ntps = 9222.344890\n\n\n3. Use ON CONFLICT DO NOTHING + COALESCE:\n\nupsert-coalecsce.sql:\nwith ins as (insert into jsonb_schemas (schema) values ('some') on \nconflict(schema) do nothing returning id) select coalesce((select id \nfrom ins),(select id from jsonb_schemas where schema='some'));\n------------------------\npgbench -n -T 100 -M prepared -f upsert-coalesce.sql postgres\ntps = 28929.353732\n\n\n4. 
Use ON CONFLICT DO SELECT\n\nupsert-select.sql:\ninsert into jsonb_schemas (schema) values ('some') on conflict(schema) \ndo select returning id;\n------------------------\npgbench -n -T 100 -M prepared -f upsert-select.sql postgres\nps = 35788.362302\n\n\n\nSo, as you can see PLpgSQL version, which doesn't modify database if key \nis found is signficantly faster than others.\nAnd version which always do update is almost five times slower!\nProposed version of upsert with ON CONFLICT DO SELECT is slower than \nPLpgSQL version (because it has to insert speculative tuple),\nbut faster than \"user-unfriendly\" version with COALESCE:\n\nUpsert implementation\n\tTPS\nPLpgSQL\n\t45092\nON CONFLICT DO UPDATE \t9222\nON CONFLICT DO NOTHING \t28929\nON CONFLICT DO SELECT \t35788\n\n\n\nSlightly modified version of my ON CONFLICT DO SELECT patch is attached \nto this mail.\n\n-- \n\nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 8 Sep 2020 12:06:41 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: INSERT ON CONFLICT and RETURNING"
},
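[Editor's note] The three-step lookup in the plpgsql `upsert` function above (select, insert with ON CONFLICT DO NOTHING, re-check) can be modeled outside the database. The following Python sketch is purely illustrative - it is not from the thread and does not touch PostgreSQL; the dictionary and counter simply stand in for the `jsonb_schemas` table and its serial column - but it shows the fast path that makes the plpgsql variant cheap for an append-only mapping:

```python
import itertools

# In-memory stand-in for the jsonb_schemas table: schema -> serial id.
_serial = itertools.count(1)

def upsert(schemas, obj_schema):
    # Step 1: "select id from jsonb_schemas where schema = obj_schema"
    obj_id = schemas.get(obj_schema)
    if obj_id is None:
        # Steps 2+3: "insert ... on conflict (schema) do nothing" plus the
        # re-check; dict.setdefault stores the new id only if the key is
        # still absent and returns whichever id ended up stored.
        obj_id = schemas.setdefault(obj_schema, next(_serial))
    return obj_id

schemas = {}
first = upsert(schemas, b"some")    # takes the insert path
second = upsert(schemas, b"some")   # takes the fast lookup path, no write
other = upsert(schemas, b"other")
```

Repeated calls with the same key write only once; every later call is a pure lookup, which is the property the benchmark exploits.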
{
"msg_contents": "út 8. 9. 2020 v 11:06 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n> I have performed comparison of different ways of implementing UPSERT in\n> Postgres.\n> May be it will be interesting not only for me, so I share my results:\n>\n> So first of all initialization step:\n>\n> create table jsonb_schemas(id serial, schema bytea primary key);\n> create unique index on jsonb_schemas(id);\n> insert into jsonb_schemas (schema) values ('some') on conflict(schema)\n> do nothing returning id;\n>\n> Then I test performance of getting ID of exitsed schema:\n>\n> 1. Use plpgsql script to avoid unneeded database modifications:\n>\n> create function upsert(obj_schema bytea) returns integer as $$\n> declare\n> obj_id integer;\n> begin\n> select id from jsonb_schemas where schema=obj_schema into obj_id;\n> if obj_id is null then\n> insert into jsonb_schemas (schema) values (obj_schema) on\n> conflict(schema) do nothing returning id into obj_id;\n> if obj_id is null then\n> select id from jsonb_schemas where schema=obj_schema into obj_id;\n> end if;\n> end if;\n> return obj_id;\n> end;\n> $$ language plpgsql;\n>\n\nIn parallel execution the plpgsql variant can fail. The possible raise\nconditions are not handled.\n\nSo maybe this is the reason why this is really fast.\n\nRegards\n\nPavel\n\n\n>\n> ------------------------\n> upsert-plpgsql.sql:\n> select upsert('some');\n> ------------------------\n> pgbench -n -T 100 -M prepared -f upsert-plpgsql.sql postgres\n> tps = 45092.241350\n>\n> 2. Use ON CONFLICT DO UPDATE:\n>\n> upsert-update.sql:\n> insert into jsonb_schemas (schema) values ('some') on conflict(schema) do\n> update set schema='some' returning id;\n> ------------------------\n> pgbench -n -T 100 -M prepared -f upsert-update.sql postgres\n> tps = 9222.344890\n>\n>\n> 3. 
Use ON CONFLICT DO NOTHING + COALESCE:\n>\n> upsert-coalecsce.sql:\n> with ins as (insert into jsonb_schemas (schema) values ('some') on\n> conflict(schema) do nothing returning id) select coalesce((select id from\n> ins),(select id from jsonb_schemas where schema='some'));\n> ------------------------\n> pgbench -n -T 100 -M prepared -f upsert-coalesce.sql postgres\n> tps = 28929.353732\n>\n>\n> 4. Use ON CONFLICT DO SELECT\n>\n> upsert-select.sql:\n> insert into jsonb_schemas (schema) values ('some') on conflict(schema) do\n> select returning id;\n> ------------------------\n> pgbench -n -T 100 -M prepared -f upsert-select.sql postgres\n> ps = 35788.362302\n>\n>\n>\n> So, as you can see PLpgSQL version, which doesn't modify database if key\n> is found is signficantly faster than others.\n> And version which always do update is almost five times slower!\n> Proposed version of upsert with ON CONFLICT DO SELECT is slower than\n> PLpgSQL version (because it has to insert speculative tuple),\n> but faster than \"user-unfriendly\" version with COALESCE:\n>\n> Upsert implementation\n> TPS\n> PLpgSQL\n> 45092\n> ON CONFLICT DO UPDATE 9222\n> ON CONFLICT DO NOTHING 28929\n> ON CONFLICT DO SELECT 35788\n>\n> Slightly modified version of my ON CONFLICT DO SELECT patch is attached to\n> this mail.\n>\n> --\n>\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\nút 8. 9. 
2020 v 11:06 odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru> napsal:\n\n I have performed comparison of different ways of implementing UPSERT\n in Postgres.\n May be it will be interesting not only for me, so I share my\n results:\n\n So first of all initialization step:\n\n create table jsonb_schemas(id serial, schema bytea primary key);\n \n create unique index on jsonb_schemas(id);\n \n insert into jsonb_schemas (schema) values ('some') on\n conflict(schema) do nothing returning id;\n \n\n Then I test performance of getting ID of exitsed schema:\n\n 1. Use plpgsql script to avoid unneeded database modifications:\n\n create function upsert(obj_schema bytea) returns integer as $$\n declare\n obj_id integer;\n begin\n select id from jsonb_schemas where schema=obj_schema into obj_id;\n if obj_id is null then\n insert into jsonb_schemas (schema) values (obj_schema) on\n conflict(schema) do nothing returning id into obj_id;\n if obj_id is null then\n select id from jsonb_schemas where schema=obj_schema into\n obj_id;\n end if;\n end if;\n return obj_id;\n end;\n $$ language plpgsql;In parallel execution the plpgsql variant can fail. The possible raise conditions are not handled.So maybe this is the reason why this is really fast.RegardsPavel \n\n ------------------------\n upsert-plpgsql.sql:\n select upsert('some');\n ------------------------\n pgbench -n -T 100 -M prepared -f upsert-plpgsql.sql postgres\n tps = 45092.241350\n\n 2. Use ON CONFLICT DO UPDATE:\n\n upsert-update.sql:\n insert into jsonb_schemas (schema) values ('some') on\n conflict(schema) do update set schema='some' returning id;\n ------------------------\n pgbench -n -T 100 -M prepared -f upsert-update.sql postgres\n tps = 9222.344890\n\n\n 3. 
Use ON CONFLICT DO NOTHING + COALESCE:\n\n upsert-coalecsce.sql:\n with ins as (insert into jsonb_schemas (schema) values ('some') on\n conflict(schema) do nothing returning id) select coalesce((select id\n from ins),(select id from jsonb_schemas where schema='some'));\n ------------------------\n pgbench -n -T 100 -M prepared -f upsert-coalesce.sql postgres\n tps = 28929.353732\n\n\n 4. Use ON CONFLICT DO SELECT\n\n upsert-select.sql:\n insert into jsonb_schemas (schema) values ('some') on\n conflict(schema) do select returning id; \n ------------------------\n pgbench -n -T 100 -M prepared -f upsert-select.sql postgres\n ps = 35788.362302\n\n\n\n So, as you can see PLpgSQL version, which doesn't modify database if\n key is found is signficantly faster than others.\n And version which always do update is almost five times slower!\n Proposed version of upsert with ON CONFLICT DO SELECT is slower than\n PLpgSQL version (because it has to insert speculative tuple),\n but faster than \"user-unfriendly\" version with COALESCE:\n\n\n\n\nUpsert implementation\n\nTPS\n\n\n\nPLpgSQL\n\n45092\n\n\nON CONFLICT DO UPDATE\n9222\n\n\nON CONFLICT DO NOTHING \n28929\n\n\nON CONFLICT DO SELECT\n35788\n\n\n\n\n\n Slightly modified version of my ON CONFLICT DO SELECT patch is\n attached to this mail.\n\n --\n Konstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 8 Sep 2020 11:34:08 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: INSERT ON CONFLICT and RETURNING"
},
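[Editor's note] The relative costs claimed in the benchmark above ("almost five times slower") follow directly from the quoted TPS numbers. As a quick illustrative check - plain arithmetic, not part of the thread:

```python
# TPS figures reported in the thread for the four upsert variants.
tps = {
    "plpgsql": 45092,
    "do update": 9222,
    "do nothing + coalesce": 28929,
    "do select": 35788,
}

# Slowdown of each variant relative to the fastest (plpgsql) one.
slowdown = {name: tps["plpgsql"] / value for name, value in tps.items()}
print(round(slowdown["do update"], 1))  # -> 4.9, i.e. "almost five times slower"
```

The same numbers also confirm the ordering claimed for the other variants: DO SELECT sits between the plpgsql fast path and the COALESCE workaround.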
{
"msg_contents": "On 08.09.2020 12:34, Pavel Stehule wrote:\n>\n>\n> út 8. 9. 2020 v 11:06 odesílatel Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> napsal:\n>\n> I have performed comparison of different ways of implementing\n> UPSERT in Postgres.\n> May be it will be interesting not only for me, so I share my results:\n>\n> So first of all initialization step:\n>\n> create table jsonb_schemas(id serial, schema bytea primary key);\n> create unique index on jsonb_schemas(id);\n> insert into jsonb_schemas (schema) values ('some') on\n> conflict(schema) do nothing returning id;\n>\n> Then I test performance of getting ID of exitsed schema:\n>\n> 1. Use plpgsql script to avoid unneeded database modifications:\n>\n> create function upsert(obj_schema bytea) returns integer as $$\n> declare\n> obj_id integer;\n> begin\n> select id from jsonb_schemas where schema=obj_schema into obj_id;\n> if obj_id is null then\n> insert into jsonb_schemas (schema) values (obj_schema) on\n> conflict(schema) do nothing returning id into obj_id;\n> if obj_id is null then\n> select id from jsonb_schemas where schema=obj_schema into\n> obj_id;\n> end if;\n> end if;\n> return obj_id;\n> end;\n> $$ language plpgsql;\n>\n>\n> In parallel execution the plpgsql variant can fail. The possible raise \n> conditions are not handled.\n>\n> So maybe this is the reason why this is really fast.\n\nWith this example I model real use case, where we need to map long key \n(json schema in this case) to short identifier (serial column in this \ncase).\nRows of jsonb_schemas are never updated: it is append-only dictionary.\nIn this assumption no race condition can happen with this PLpgSQL \nimplementation (and other implementations of UPSERT as well).\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 08.09.2020 12:34, Pavel Stehule\n wrote:\n\n\n\n\n\n\n\n\nút 8. 9. 
2020 v 11:06\n odesílatel Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n napsal:\n\n\n I have performed comparison of\n different ways of implementing UPSERT in Postgres.\n May be it will be interesting not only for me, so I share\n my results:\n\n So first of all initialization step:\n\n create table jsonb_schemas(id serial, schema bytea\n primary key); \n create unique index on jsonb_schemas(id); \n insert into jsonb_schemas (schema) values ('some') on\n conflict(schema) do nothing returning id; \n\n Then I test performance of getting ID of exitsed schema:\n\n 1. Use plpgsql script to avoid unneeded database\n modifications:\n\n create function upsert(obj_schema bytea) returns integer\n as $$\n declare\n obj_id integer;\n begin\n select id from jsonb_schemas where schema=obj_schema\n into obj_id;\n if obj_id is null then\n insert into jsonb_schemas (schema) values (obj_schema)\n on conflict(schema) do nothing returning id into obj_id;\n if obj_id is null then\n select id from jsonb_schemas where schema=obj_schema\n into obj_id;\n end if;\n end if;\n return obj_id;\n end;\n $$ language plpgsql;\n\n\n\n\nIn parallel execution the plpgsql variant can fail. The\n possible raise conditions are not handled.\n\n\nSo maybe this is the reason why this is really fast.\n\n\n\n\n With this example I model real use case, where we need to map long\n key (json schema in this case) to short identifier (serial column\n in this case).\n Rows of jsonb_schemas are never updated: it is append-only\n dictionary.\n In this assumption no race condition can happen with this PLpgSQL\n implementation (and other implementations of UPSERT as well).\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 8 Sep 2020 13:34:43 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: INSERT ON CONFLICT and RETURNING"
},
{
"msg_contents": "út 8. 9. 2020 v 12:34 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 08.09.2020 12:34, Pavel Stehule wrote:\n>\n>\n>\n> út 8. 9. 2020 v 11:06 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n>\n>> I have performed comparison of different ways of implementing UPSERT in\n>> Postgres.\n>> May be it will be interesting not only for me, so I share my results:\n>>\n>> So first of all initialization step:\n>>\n>> create table jsonb_schemas(id serial, schema bytea primary key);\n>> create unique index on jsonb_schemas(id);\n>> insert into jsonb_schemas (schema) values ('some') on conflict(schema)\n>> do nothing returning id;\n>>\n>> Then I test performance of getting ID of exitsed schema:\n>>\n>> 1. Use plpgsql script to avoid unneeded database modifications:\n>>\n>> create function upsert(obj_schema bytea) returns integer as $$\n>> declare\n>> obj_id integer;\n>> begin\n>> select id from jsonb_schemas where schema=obj_schema into obj_id;\n>> if obj_id is null then\n>> insert into jsonb_schemas (schema) values (obj_schema) on\n>> conflict(schema) do nothing returning id into obj_id;\n>> if obj_id is null then\n>> select id from jsonb_schemas where schema=obj_schema into obj_id;\n>> end if;\n>> end if;\n>> return obj_id;\n>> end;\n>> $$ language plpgsql;\n>>\n>\n> In parallel execution the plpgsql variant can fail. 
The possible race\n> conditions are not handled.\n>\n> So maybe this is the reason why this is really fast.\n>\n>\n> With this example I model real use case, where we need to map long key\n> (json schema in this case) to short identifier (serial column in this\n> case).\n> Rows of jsonb_schemas are never updated: it is append-only dictionary.\n> In this assumption no race condition can happen with this PLpgSQL\n> implementation (and other implementations of UPSERT as well).\n>\n\nYes, the performance depends on what is possible - on whether you can implement\noptimistic or pessimistic locking (or whether you know that there is no\npossibility of a race condition).\n\nPavel\n\n\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>",
"msg_date": "Tue, 8 Sep 2020 12:57:12 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: INSERT ON CONFLICT and RETURNING"
},
{
"msg_contents": "út 8. 9. 2020 v 12:34 odesílatel Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> napsal:\n\n>\n>\n> On 08.09.2020 12:34, Pavel Stehule wrote:\n>\n>\n>\n> út 8. 9. 2020 v 11:06 odesílatel Konstantin Knizhnik <\n> k.knizhnik@postgrespro.ru> napsal:\n>\n>> I have performed comparison of different ways of implementing UPSERT in\n>> Postgres.\n>> May be it will be interesting not only for me, so I share my results:\n>>\n>> So first of all initialization step:\n>>\n>> create table jsonb_schemas(id serial, schema bytea primary key);\n>> create unique index on jsonb_schemas(id);\n>> insert into jsonb_schemas (schema) values ('some') on conflict(schema)\n>> do nothing returning id;\n>>\n>> Then I test performance of getting ID of exitsed schema:\n>>\n>> 1. Use plpgsql script to avoid unneeded database modifications:\n>>\n>> create function upsert(obj_schema bytea) returns integer as $$\n>> declare\n>> obj_id integer;\n>> begin\n>> select id from jsonb_schemas where schema=obj_schema into obj_id;\n>> if obj_id is null then\n>> insert into jsonb_schemas (schema) values (obj_schema) on\n>> conflict(schema) do nothing returning id into obj_id;\n>> if obj_id is null then\n>> select id from jsonb_schemas where schema=obj_schema into obj_id;\n>> end if;\n>> end if;\n>> return obj_id;\n>> end;\n>> $$ language plpgsql;\n>>\n>\n> In parallel execution the plpgsql variant can fail. 
The possible race\n> conditions are not handled.\n>\n> So maybe this is the reason why this is really fast.\n>\n>\n> With this example I model real use case, where we need to map long key\n> (json schema in this case) to short identifier (serial column in this\n> case).\n> Rows of jsonb_schemas are never updated: it is append-only dictionary.\n> In this assumption no race condition can happen with this PLpgSQL\n> implementation (and other implementations of UPSERT as well).\n>\n\nI am not sure, but I think this should be the design and behavior of the MERGE\nstatement - it is designed for OLAP (and speed). Unfortunately, this\nfeature stalled (and your benchmarks show that there is a clear performance\nbenefit).\n\nRegards\n\nPavel\n\n>\n>\n> --\n> Konstantin Knizhnik\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>",
"msg_date": "Tue, 8 Sep 2020 21:15:02 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: INSERT ON CONFLICT and RETURNING"
}
] |
[
{
"msg_contents": "We've seen repeated failures in the tests added by commit 43e084197:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2020-08-23%2005%3A46%3A17\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2020-08-04%2001%3A05%3A30\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dory&dt=2020-03-14%2019%3A35%3A31\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2020-04-01%2004%3A10%3A51\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=komodoensis&dt=2020-03-10%2003%3A14%3A13\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2020-03-10%2011%3A01%3A49\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2020-03-09%2010%3A59%3A43\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2020-03-09%2015%3A52%3A50\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-09%2005%3A20%3A07\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2020-03-09%2003%3A00%3A15\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2020-03-09%2015%3A52%3A53\n\nI dug into this a bit today, and found that I can reproduce the failure\nreliably by adding a short delay in the right place, as attached.\n\nHowever, after studying the test awhile I have to admit that I do not\nunderstand why all these failures look the same, because it seems to\nme that this test is a house of cards. It *repeatedly* expects that\nissuing a command to session X will result in session Y reporting\nsome notice before X's command terminates, even though X's command will\nnever meet the conditions for isolationtester to think it's blocked.\nAFAICS that is nothing but wishful thinking. Why is it that only one of\nthose places has failed so far?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 23 Aug 2020 22:53:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Continuing instability in insert-conflict-specconflict test"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-23 22:53:18 -0400, Tom Lane wrote:\n> We've seen repeated failures in the tests added by commit 43e084197:\n> ...\n> I dug into this a bit today, and found that I can reproduce the failure\n> reliably by adding a short delay in the right place, as attached.\n> \n> However, after studying the test awhile I have to admit that I do not\n> understand why all these failures look the same, because it seems to\n> me that this test is a house of cards. It *repeatedly* expects that\n> issuing a command to session X will result in session Y reporting\n> some notice before X's command terminates, even though X's command will\n> never meet the conditions for isolationtester to think it's blocked.\n>\n> AFAICS that is nothing but wishful thinking. Why is it that only one of\n> those places has failed so far?\n\nAre there really that many places expecting that? I've not gone through\nthis again exhaustively by any means, but most places seem to print\nsomething only before waiting for a lock.\n\nThis test is really hairy, which isn't great. But until we have a proper\nframework for controlling server side execution, I am not sure how we\nbetter can achieve test coverage for this stuff. And there've been bugs,\nso it's worth testing.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Aug 2020 13:42:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Continuing instability in insert-conflict-specconflict test"
},
{
"msg_contents": "On 2020-08-24 13:42:35 -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2020-08-23 22:53:18 -0400, Tom Lane wrote:\n> > We've seen repeated failures in the tests added by commit 43e084197:\n> > ...\n> > I dug into this a bit today, and found that I can reproduce the failure\n> > reliably by adding a short delay in the right place, as attached.\n> > \n> > However, after studying the test awhile I have to admit that I do not\n> > understand why all these failures look the same, because it seems to\n> > me that this test is a house of cards. It *repeatedly* expects that\n> > issuing a command to session X will result in session Y reporting\n> > some notice before X's command terminates, even though X's command will\n> > never meet the conditions for isolationtester to think it's blocked.\n\n> > AFAICS that is nothing but wishful thinking. Why is it that only one of\n> > those places has failed so far?\n> \n> Are there really that many places expecting that? I've not gone through\n> this again exhaustively by any means, but most places seem to print\n> something only before waiting for a lock.\n\nISTM the issue at hand isn't so much that X expects something to be\nprinted by Y before it terminates, but that it expects the next step to\nnot be executed before Y unlocks. If I understand the wrong output\ncorrectly, what happens is that \"controller_print_speculative_locks\" is\nexecuted, even though s1 hasn't yet acquired the next lock. Note how the\n+s1: NOTICE: blurt_and_lock_123() called for k1 in session 1\n+s1: NOTICE: acquiring advisory lock on 2\nis *after* \"step controller_print_speculative_locks\", not just after\n\"step s2_upsert: <... completed>\"\n\nTo be clear, there'd still be an issue about whether the NOTICE is\nprinted before/after the \"s2_upsert: <... completed>\", but it looks to\nme the issue is bigger than that. 
It's easy enough to add another wait\nin s2_upsert, but that doesn't help if the entire scheduling just\ncontinues regardless of there not really being a waiter.\n\nAm I missing something here?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Aug 2020 14:21:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Continuing instability in insert-conflict-specconflict test"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> ISTM the issue at hand isn't so much that X expects something to be\n> printed by Y before it terminates, but that it expects the next step to\n> not be executed before Y unlocks. If I understand the wrong output\n> correctly, what happens is that \"controller_print_speculative_locks\" is\n> executed, even though s1 hasn't yet acquired the next lock.\n\nThat's one way to look at it perhaps.\n\nI've spent the day fooling around with a re-implementation of\nisolationtester that waits for all its controlled sessions to quiesce\n(either wait for client input, or block on a lock held by another\nsession) before moving on to the next step. That was not a feasible\napproach before we had the wait_event infrastructure, but it's\nseeming like it might be workable now. Still have a few issues to\nsort out though ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Aug 2020 21:34:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Continuing instability in insert-conflict-specconflict test"
},
{
"msg_contents": "I wrote:\n> I've spent the day fooling around with a re-implementation of\n> isolationtester that waits for all its controlled sessions to quiesce\n> (either wait for client input, or block on a lock held by another\n> session) before moving on to the next step. That was not a feasible\n> approach before we had the wait_event infrastructure, but it's\n> seeming like it might be workable now. Still have a few issues to\n> sort out though ...\n\nI wasted a good deal of time on this idea, and eventually concluded\nthat it's a dead end, because there is an unremovable race condition.\nNamely, that even if the isolationtester's observer backend has\nobserved that test session X has quiesced according to its\nwait_event_info, it is possible for the report of that fact to arrive\nat the isolationtester client process before test session X's output\ndoes.\n\nIt's quite obvious how that might happen if the isolationtester is\non a different machine than the PG server --- just imagine a dropped\npacket in X's output that has to be retransmitted. You might think\nit shouldn't happen within a single machine, but I'm seeing results\nthat I cannot explain any other way (on an 8-core RHEL8 box).\nIt appears to not be particularly rare, either.\n\n> Andres Freund <andres@anarazel.de> writes:\n>> ISTM the issue at hand isn't so much that X expects something to be\n>> printed by Y before it terminates, but that it expects the next step to\n>> not be executed before Y unlocks. If I understand the wrong output\n>> correctly, what happens is that \"controller_print_speculative_locks\" is\n>> executed, even though s1 hasn't yet acquired the next lock.\n\nThe problem as I'm now understanding it is that\ninsert-conflict-specconflict.spec issues multiple commands in sequence\nto its \"controller\" session, and expects that NOTICE outputs from a\ndifferent test session will arrive at a determinate point in that\nsequence. 
In practice that's not guaranteed, because (a) the other\ntest session might not send the NOTICE soon enough --- as my modified\nspecfile proves --- and (b) even if the NOTICE is timely sent, the\nkernel will not guarantee timely receipt. We could fix (a) by\nintroducing some explicit interlock between the controller and test\nsessions, but (b) is a killer.\n\nI think what we have to do to salvage this test is to get rid of the\nuse of NOTICE outputs, and instead have the test functions insert\nlog records into some table, which we can inspect after the fact\nto verify that things happened as we expect.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 Aug 2020 12:04:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Continuing instability in insert-conflict-specconflict test"
},
{
"msg_contents": "\nOn 8/24/20 4:42 PM, Andres Freund wrote:\n>\n> This test is really hairy, which isn't great. But until we have a proper\n> framework for controlling server side execution, I am not sure how we\n> better can achieve test coverage for this stuff. And there've been bugs,\n> so it's worth testing.\n>\n\n\nWhat would the framework look like?\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 25 Aug 2020 13:03:37 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Continuing instability in insert-conflict-specconflict test"
},
{
"msg_contents": "Let me (rather shamelessly) extract a couple of patches from the\r\npatch set that was already shared in the fault injection framework\r\nproposal [1].\r\n\r\nThe first patch incorporates a new syntax in isolation spec grammar to\r\nexplicitly mark a step that is expected to block (due to reasons other\r\nthan locks). E.g.\r\n\r\n permutation step1 step2& step3\r\n\r\nThe “&” suffix indicates that step2 is expected to block and isolation\r\ntester should move on to step3 without waiting for step2 to finish.\r\n\r\nThe second patch implements the insert-conflict scenario that is being\r\ndiscussed here - one session waits (using a “suspend” fault) after\r\ninserting a tuple into the heap relation but before updating the\r\nindex. Another session concurrently inserts a conflicting tuple in\r\nthe heap and the index, and commits. Then the fault is reset so that\r\nthe blocked session resumes and detects conflict when updating the\r\nindex.\r\n\r\n> On 25-Aug-2020, at 9:34 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>\r\n> I wrote:\r\n>> I've spent the day fooling around with a re-implementation of\r\n>> isolationtester that waits for all its controlled sessions to quiesce\r\n>> (either wait for client input, or block on a lock held by another\r\n>> session) before moving on to the next step. That was not a feasible\r\n>> approach before we had the wait_event infrastructure, but it's\r\n>> seeming like it might be workable now. 
Still have a few issues to\r\n>> sort out though ...\r\n>\r\n> I wasted a good deal of time on this idea, and eventually concluded\r\n> that it's a dead end, because there is an unremovable race condition.\r\n> Namely, that even if the isolationtester's observer backend has\r\n> observed that test session X has quiesced according to its\r\n> wait_event_info, it is possible for the report of that fact to arrive\r\n> at the isolationtester client process before test session X's output\r\n> does.\r\n\r\nThe attached test evades this race condition by not depending on any\r\noutput from the blocked session X. It queries status of the injected\r\nfault to ascertain that a specific point in the code was reached\r\nduring execution.\r\n\r\n>\r\n> I think what we have to do to salvage this test is to get rid of the\r\n> use of NOTICE outputs, and instead have the test functions insert\r\n> log records into some table, which we can inspect after the fact\r\n> to verify that things happened as we expect.\r\n>\r\n\r\n+1 to getting rid of NOTICE outputs.\r\n\r\nPlease refer to https://github.com/asimrp/postgres/tree/faultinjector\r\nfor the full patch set proposed in [1] that is now rebased against the\r\nlatest master.\r\n\r\n\r\nAsim\r\n\r\n[1] https://www.postgresql.org/message-id/flat/CANXE4Tc%2BRYRC48%3DdKYn1PvAjE26Ew4hh%3DXUjBRGj%3DJ9eob-S6g%40mail.gmail.com#cd02fa3b461102e97bcdc97e62dcc6d3",
"msg_date": "Mon, 31 Aug 2020 15:10:46 +0000",
"msg_from": "Asim Praveen <pasim@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Continuing instability in insert-conflict-specconflict test"
},
{
"msg_contents": "The test material added in commit 43e0841 continues to yield buildfarm\nfailures. Failures new since the rest of this thread:\n\n damselfly │ 2021-02-02 10:19:15 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=damselfly&dt=2021-02-02%2010%3A19%3A15\n drongo │ 2021-02-05 01:13:10 │ REL_13_STABLE │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2021-02-05%2001%3A13%3A10\n lorikeet │ 2021-03-05 21:30:13 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2021-03-05%2021%3A30%3A13\n lorikeet │ 2021-03-16 08:28:36 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2021-03-16%2008%3A28%3A36\n macaque │ 2021-03-21 10:14:52 │ REL_13_STABLE │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=macaque&dt=2021-03-21%2010%3A14%3A52\n walleye │ 2021-03-25 05:00:44 │ REL_13_STABLE │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=walleye&dt=2021-03-25%2005%3A00%3A44\n sungazer │ 2021-04-23 21:52:31 │ REL_13_STABLE │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2021-04-23%2021%3A52%3A31\n gharial │ 2021-04-30 06:08:36 │ REL_13_STABLE │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-04-30%2006%3A08%3A36\n walleye │ 2021-05-05 17:00:41 │ REL_13_STABLE │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=walleye&dt=2021-05-05%2017%3A00%3A41\n gharial │ 2021-05-05 22:35:33 │ REL_13_STABLE │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-05-05%2022%3A35%3A33\n\nOn Tue, Aug 25, 2020 at 12:04:41PM -0400, Tom Lane wrote:\n> I think what we have to do to salvage this test is to get rid of the\n> use of NOTICE outputs, and instead have the test functions insert\n> log records into some table, which we can inspect after the fact\n> to verify that things happened as we expect.\n\nThat sounds promising. Are those messages important for observing server\nbugs, or are they for debugging/modifying the test itself? 
If the latter, one\ncould just change the messages to LOG. Any of the above won't solve things\ncompletely, because 3 of the 21 failures have diffs in the pg_locks output:\n\n dory │ 2020-03-14 19:35:31 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dory&dt=2020-03-14%2019%3A35%3A31\n walleye │ 2021-03-25 05:00:44 │ REL_13_STABLE │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=walleye&dt=2021-03-25%2005%3A00%3A44\n walleye │ 2021-05-05 17:00:41 │ REL_13_STABLE │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=walleye&dt=2021-05-05%2017%3A00%3A41\n\nPerhaps the pg_locks query should poll until pg_locks has the expected rows.\nOr else poll until all test sessions are waiting or idle.\n\n\n",
"msg_date": "Sun, 13 Jun 2021 00:34:07 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Continuing instability in insert-conflict-specconflict test"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> The test material added in commit 43e0841 continues to yield buildfarm\n> failures.\n\nYeah. It's only a relatively small fraction of the total volume of\nisolation-test failures, so I'm not sure it's worth expending a\nwhole lot of effort on this issue by itself.\n\n> On Tue, Aug 25, 2020 at 12:04:41PM -0400, Tom Lane wrote:\n>> I think what we have to do to salvage this test is to get rid of the\n>> use of NOTICE outputs, and instead have the test functions insert\n>> log records into some table, which we can inspect after the fact\n>> to verify that things happened as we expect.\n\n> That sounds promising. Are those messages important for observing server\n> bugs, or are they for debugging/modifying the test itself? If the latter, one\n> could just change the messages to LOG.\n\nI think they are important, because they show that the things we expect\nto happen occur when we expect them to happen.\n\nI experimented with replacing the RAISE NOTICEs with INSERTs, and ran\ninto two problems:\n\n1. You can't do an INSERT in an IMMUTABLE function. This is easily\nworked around by putting the INSERT in a separate, volatile function.\nThat's cheating like mad of course, but so is the rest of the stuff\nthis test does in \"immutable\" functions.\n\n2. The inserted events don't become visible from the outside until the\nrespective session commits. This seems like an absolute show-stopper.\nAfter the fact, we can see that the events happened in the expected\nrelative order; but we don't have proof that they happened in the right\norder relative to the actions visible in the test output file.\n\n> ... Any of the above won't solve things\n> completely, because 3 of the 21 failures have diffs in the pg_locks output:\n\nYeah, it looks like the issue there is that session 2 reports completion\nof its step before session 1 has a chance to make progress after obtaining\nthe lock. 
This seems to me to be closely related to the race conditions\nI described upthread.\n\n[ thinks for awhile ... ]\n\nI wonder whether we could do better with something along these lines:\n\n* Adjust the test script's functions to emit a NOTICE *after* acquiring\na lock, not before.\n\n* Annotate permutations with something along the lines of \"expect N\nNOTICE outputs before allowing this step to be considered complete\",\nwhich we'd attach to the unlock steps.\n\nThis idea is only half baked at present, but maybe there's something\nto work with there. If it works, maybe we could improve the other\ntest cases that are always pseudo-failing in a similar way. For\nexample, in the deadlock tests, annotate steps with \"expect step\nY to finish before step X\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 13 Jun 2021 16:48:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Continuing instability in insert-conflict-specconflict test"
},
{
"msg_contents": "On Sun, Jun 13, 2021 at 04:48:48PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Tue, Aug 25, 2020 at 12:04:41PM -0400, Tom Lane wrote:\n> >> I think what we have to do to salvage this test is to get rid of the\n> >> use of NOTICE outputs, and instead have the test functions insert\n> >> log records into some table, which we can inspect after the fact\n> >> to verify that things happened as we expect.\n> \n> > That sounds promising. Are those messages important for observing server\n> > bugs, or are they for debugging/modifying the test itself? If the latter, one\n> > could just change the messages to LOG.\n> \n> I think they are important, because they show that the things we expect\n> to happen occur when we expect them to happen.\n> \n> I experimented with replacing the RAISE NOTICEs with INSERTs, and ran\n> into two problems:\n> \n> 1. You can't do an INSERT in an IMMUTABLE function. This is easily\n> worked around by putting the INSERT in a separate, volatile function.\n> That's cheating like mad of course, but so is the rest of the stuff\n> this test does in \"immutable\" functions.\n> \n> 2. The inserted events don't become visible from the outside until the\n> respective session commits. This seems like an absolute show-stopper.\n> After the fact, we can see that the events happened in the expected\n> relative order; but we don't have proof that they happened in the right\n> order relative to the actions visible in the test output file.\n\nOne could send the inserts over dblink, I suppose.\n\n> > ... Any of the above won't solve things\n> > completely, because 3 of the 21 failures have diffs in the pg_locks output:\n\n> * Adjust the test script's functions to emit a NOTICE *after* acquiring\n> a lock, not before.\n\nI suspect that particular lock acquisition merely unblocks the processing that\nreaches the final lock state expected by the test. 
So, ...\n\n> * Annotate permutations with something along the lines of \"expect N\n> NOTICE outputs before allowing this step to be considered complete\",\n> which we'd attach to the unlock steps.\n\n... I don't expect this to solve $SUBJECT. It could be a separately-useful\nfeature, though.\n\n> This idea is only half baked at present, but maybe there's something\n> to work with there. If it works, maybe we could improve the other\n> test cases that are always pseudo-failing in a similar way. For\n> example, in the deadlock tests, annotate steps with \"expect step\n> Y to finish before step X\".\n\nYeah, a special permutation list entry like PQgetResult(s8) could solve\nfailures like\nhttp://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-06-11%2017%3A13%3A44\n\nIncidentally, I have a different idle patch relevant to deadlock test failures\nlike that. Let me see if it has anything useful.\n\n\n",
"msg_date": "Sun, 13 Jun 2021 14:29:43 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Continuing instability in insert-conflict-specconflict test"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sun, Jun 13, 2021 at 04:48:48PM -0400, Tom Lane wrote:\n>> * Adjust the test script's functions to emit a NOTICE *after* acquiring\n>> a lock, not before.\n\n> I suspect that particular lock acquisition merely unblocks the processing that\n> reaches the final lock state expected by the test. So, ...\n\nAh, you're probably right.\n\n>> * Annotate permutations with something along the lines of \"expect N\n>> NOTICE outputs before allowing this step to be considered complete\",\n>> which we'd attach to the unlock steps.\n\n> ... I don't expect this to solve $SUBJECT. It could be a separately-useful\n> feature, though.\n\nI think it would solve it. In the examples at hand, where we have\n\n@@ -377,8 +377,6 @@\n pg_advisory_unlock\n \n t \n-s1: NOTICE: blurt_and_lock_123() called for k1 in session 1\n-s1: NOTICE: acquiring advisory lock on 2\n step s2_upsert: <... completed>\n step controller_print_speculative_locks: \n SELECT pa.application_name, locktype, mode, granted\n\nand then those notices show up sometime later, I'm hypothesizing\nthat the actions did happen timely, but the actual delivery of\nthose packets to the isolationtester client did not. If we\nannotated step s2_upsert with a marker to the effect of \"wait\nfor 2 NOTICEs from session 1 before considering this step done\",\nwe could resolve that race condition. Admittedly, this is putting\na thumb on the scales a little bit, but it's hard to see how to\ndeal with inconsistent TCP delivery delays without that.\n\n(BTW, I find that removing the pq_flush() call at the bottom of\nsend_message_to_frontend produces this failure and a bunch of\nother similar ones.)\n\n> Yeah, a special permutation list entry like PQgetResult(s8) could solve\n> failures like\n> http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-06-11%2017%3A13%3A44\n\nRight. 
I'm visualizing it more like annotating s7a8 as requiring\ns8a1 to complete first -- or vice versa, either would stabilize\nthat test result I think.\n\nWe might be able to get rid of the stuff about concurrent step\ncompletion in isolationtester.c if we required the spec files\nto use annotations to force a deterministic step completion\norder in all such cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 13 Jun 2021 18:09:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Continuing instability in insert-conflict-specconflict test"
},
{
"msg_contents": "On Sun, Jun 13, 2021 at 06:09:20PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Sun, Jun 13, 2021 at 04:48:48PM -0400, Tom Lane wrote:\n> >> * Adjust the test script's functions to emit a NOTICE *after* acquiring\n> >> a lock, not before.\n> \n> > I suspect that particular lock acquisition merely unblocks the processing that\n> > reaches the final lock state expected by the test. So, ...\n> \n> Ah, you're probably right.\n> \n> >> * Annotate permutations with something along the lines of \"expect N\n> >> NOTICE outputs before allowing this step to be considered complete\",\n> >> which we'd attach to the unlock steps.\n> \n> > ... I don't expect this to solve $SUBJECT. It could be a separately-useful\n> > feature, though.\n> \n> I think it would solve it. In the examples at hand, where we have\n> \n> @@ -377,8 +377,6 @@\n> pg_advisory_unlock\n> \n> t \n> -s1: NOTICE: blurt_and_lock_123() called for k1 in session 1\n> -s1: NOTICE: acquiring advisory lock on 2\n> step s2_upsert: <... completed>\n> step controller_print_speculative_locks: \n> SELECT pa.application_name, locktype, mode, granted\n\nIt would solve that one particular diff. I meant that it wouldn't solve the\naforementioned pg_locks diffs:\n\n dory │ 2020-03-14 19:35:31 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dory&dt=2020-03-14%2019%3A35%3A31\n walleye │ 2021-03-25 05:00:44 │ REL_13_STABLE │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=walleye&dt=2021-03-25%2005%3A00%3A44\n walleye │ 2021-05-05 17:00:41 │ REL_13_STABLE │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=walleye&dt=2021-05-05%2017%3A00%3A41\n\n> We might be able to get rid of the stuff about concurrent step\n> completion in isolationtester.c if we required the spec files\n> to use annotations to force a deterministic step completion\n> order in all such cases.\n\nYeah. 
If we're willing to task spec authors with that, the test program can't\nthen guess wrong under unusual timing.\n\n\n",
"msg_date": "Sun, 13 Jun 2021 15:22:12 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Continuing instability in insert-conflict-specconflict test"
},
{
"msg_contents": "Hi,\n\nOn 2021-06-13 15:22:12 -0700, Noah Misch wrote:\n> On Sun, Jun 13, 2021 at 06:09:20PM -0400, Tom Lane wrote:\n> > We might be able to get rid of the stuff about concurrent step\n> > completion in isolationtester.c if we required the spec files\n> > to use annotations to force a deterministic step completion\n> > order in all such cases.\n> \n> Yeah. If we're willing to task spec authors with that, the test program can't\n> then guess wrong under unusual timing.\n\nI think it'd make it *easier* for spec authors. Right now one needs to\nfind some way to get a consistent ordering, which is often hard and\ncomplicates tests way more than specifying an explicit ordering\nwould. And it's often unreliable, as evidenced here and in plenty other\ntests.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 13 Jun 2021 16:49:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Continuing instability in insert-conflict-specconflict test"
},
{
"msg_contents": "On 14/06/21 11:49 am, Andres Freund wrote:\n> Hi,\n>\n> On 2021-06-13 15:22:12 -0700, Noah Misch wrote:\n>> On Sun, Jun 13, 2021 at 06:09:20PM -0400, Tom Lane wrote:\n>>> We might be able to get rid of the stuff about concurrent step\n>>> completion in isolationtester.c if we required the spec files\n>>> to use annotations to force a deterministic step completion\n>>> order in all such cases.\n>> Yeah. If we're willing to task spec authors with that, the test program can't\n>> then guess wrong under unusual timing.\n> I think it'd make it *easier* for spec authors. Right now one needs to\n> find some way to get a consistent ordering, which is often hard and\n> complicates tests way more than specifying an explicit ordering\n> would. And it's often unreliable, as evidenced here and in plenty other\n> tests.\n>\n> Greetings,\n>\n> Andres Freund\n\nHow about adding a keyword like 'DETERMINISTIC' to the top level SELECT, \nthe idea being the output would be deterministic (given the same table \nvalues after filtering etc), but not necessarily in any particular \norder?ᅵ So pg could decide the optimum way to achieve that which may not \nnecessarily need a sort.\n\n\nCheers,\nGavin\n\n\n\n",
"msg_date": "Mon, 14 Jun 2021 12:23:24 +1200",
"msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: Continuing instability in insert-conflict-specconflict test"
},
{
"msg_contents": "On Sun, Jun 13, 2021 at 04:49:04PM -0700, Andres Freund wrote:\n> On 2021-06-13 15:22:12 -0700, Noah Misch wrote:\n> > On Sun, Jun 13, 2021 at 06:09:20PM -0400, Tom Lane wrote:\n> > > We might be able to get rid of the stuff about concurrent step\n> > > completion in isolationtester.c if we required the spec files\n> > > to use annotations to force a deterministic step completion\n> > > order in all such cases.\n> > \n> > Yeah. If we're willing to task spec authors with that, the test program can't\n> > then guess wrong under unusual timing.\n> \n> I think it'd make it *easier* for spec authors. Right now one needs to\n> find some way to get a consistent ordering, which is often hard and\n> complicates tests way more than specifying an explicit ordering\n> would. And it's often unreliable, as evidenced here and in plenty other\n> tests.\n\nFine with me. Even if it weren't easier for spec authors, it shifts efforts\nto spec authors and away from buildfarm observers, which is a good thing.\n\n\n",
"msg_date": "Sun, 13 Jun 2021 18:46:15 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Continuing instability in insert-conflict-specconflict test"
}
] |
[
{
"msg_contents": "Hi!\n\nI created a POC patch that allows showing a list of extended statistics by\n\"\\dz\" command on psql. I believe this feature helps DBA and users who\nwould like to know all extended statistics easily. :-D\n\nI have not a strong opinion to assign \"\\dz\". I prefer \"\\dx\" or \"\\de*\"\nthan \"\\dz\" but they were already assigned. Therefore I used \"\\dz\"\ninstead of them.\n\nPlease find the attached patch.\nAny comments are welcome!\n\nFor Example:\n=======================\nCREATE TABLE t1 (a INT, b INT);\nCREATE STATISTICS stts1 (dependencies) ON a, b FROM t1;\nCREATE STATISTICS stts2 (dependencies, ndistinct) ON a, b FROM t1;\nCREATE STATISTICS stts3 (dependencies, ndistinct, mcv) ON a, b FROM t1;\nANALYZE t1;\n\nCREATE TABLE t2 (a INT, b INT, c INT);\nCREATE STATISTICS stts4 ON b, c FROM t2;\nANALYZE t2;\n\npostgres=# \\dz\n List of extended statistics\n Schema | Table | Name | Columns | Ndistinct | Dependencies | MCV\n--------+-------+-------+---------+-----------+--------------+-----\n public | t1 | stts1 | a, b | f | t | f\n public | t1 | stts2 | a, b | t | t | f\n public | t1 | stts3 | a, b | t | t | t\n public | t2 | stts4 | b, c | t | t | t\n(4 rows)\n\npostgres=# \\?\n...\n \\dy [PATTERN] list event triggers\n \\dz [PATTERN] list extended statistics\n \\l[+] [PATTERN] list databases\n...\n=======================\n\nFor now, I haven't written a document and regression test for that.\nI'll create it later.\n\nThanks,\nTatsuro Yamada",
"msg_date": "Mon, 24 Aug 2020 12:22:49 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "list of extended statistics on psql"
},
{
"msg_contents": "po 24. 8. 2020 v 5:23 odesílatel Tatsuro Yamada <\ntatsuro.yamada.tf@nttcom.co.jp> napsal:\n\n> Hi!\n>\n> I created a POC patch that allows showing a list of extended statistics by\n> \"\\dz\" command on psql. I believe this feature helps DBA and users who\n> would like to know all extended statistics easily. :-D\n>\n> I have not a strong opinion to assign \"\\dz\". I prefer \"\\dx\" or \"\\de*\"\n> than \"\\dz\" but they were already assigned. Therefore I used \"\\dz\"\n> instead of them.\n>\n> Please find the attached patch.\n> Any comments are welcome!\n>\n> For Example:\n> =======================\n> CREATE TABLE t1 (a INT, b INT);\n> CREATE STATISTICS stts1 (dependencies) ON a, b FROM t1;\n> CREATE STATISTICS stts2 (dependencies, ndistinct) ON a, b FROM t1;\n> CREATE STATISTICS stts3 (dependencies, ndistinct, mcv) ON a, b FROM t1;\n> ANALYZE t1;\n>\n> CREATE TABLE t2 (a INT, b INT, c INT);\n> CREATE STATISTICS stts4 ON b, c FROM t2;\n> ANALYZE t2;\n>\n> postgres=# \\dz\n> List of extended statistics\n> Schema | Table | Name | Columns | Ndistinct | Dependencies | MCV\n> --------+-------+-------+---------+-----------+--------------+-----\n> public | t1 | stts1 | a, b | f | t | f\n> public | t1 | stts2 | a, b | t | t | f\n> public | t1 | stts3 | a, b | t | t | t\n> public | t2 | stts4 | b, c | t | t | t\n> (4 rows)\n>\n> postgres=# \\?\n> ...\n> \\dy [PATTERN] list event triggers\n> \\dz [PATTERN] list extended statistics\n> \\l[+] [PATTERN] list databases\n> ...\n> =======================\n>\n> For now, I haven't written a document and regression test for that.\n> I'll create it later.\n>\n\n+1 good idea\n\nPavel\n\n\n> Thanks,\n> Tatsuro Yamada\n>\n>\n>\n\npo 24. 8. 2020 v 5:23 odesílatel Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> napsal:Hi!\n\nI created a POC patch that allows showing a list of extended statistics by\n\"\\dz\" command on psql. 
I believe this feature helps DBA and users who\nwould like to know all extended statistics easily. :-D\n\nI have not a strong opinion to assign \"\\dz\". I prefer \"\\dx\" or \"\\de*\"\nthan \"\\dz\" but they were already assigned. Therefore I used \"\\dz\"\ninstead of them.\n\nPlease find the attached patch.\nAny comments are welcome!\n\nFor Example:\n=======================\nCREATE TABLE t1 (a INT, b INT);\nCREATE STATISTICS stts1 (dependencies) ON a, b FROM t1;\nCREATE STATISTICS stts2 (dependencies, ndistinct) ON a, b FROM t1;\nCREATE STATISTICS stts3 (dependencies, ndistinct, mcv) ON a, b FROM t1;\nANALYZE t1;\n\nCREATE TABLE t2 (a INT, b INT, c INT);\nCREATE STATISTICS stts4 ON b, c FROM t2;\nANALYZE t2;\n\npostgres=# \\dz\n List of extended statistics\n Schema | Table | Name | Columns | Ndistinct | Dependencies | MCV\n--------+-------+-------+---------+-----------+--------------+-----\n public | t1 | stts1 | a, b | f | t | f\n public | t1 | stts2 | a, b | t | t | f\n public | t1 | stts3 | a, b | t | t | t\n public | t2 | stts4 | b, c | t | t | t\n(4 rows)\n\npostgres=# \\?\n...\n \\dy [PATTERN] list event triggers\n \\dz [PATTERN] list extended statistics\n \\l[+] [PATTERN] list databases\n...\n=======================\n\nFor now, I haven't written a document and regression test for that.\nI'll create it later.+1 good ideaPavel\n\nThanks,\nTatsuro Yamada",
"msg_date": "Mon, 24 Aug 2020 06:12:42 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Mon, Aug 24, 2020 at 6:13 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> po 24. 8. 2020 v 5:23 odesílatel Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> napsal:\n>>\n>> Hi!\n>>\n>> I created a POC patch that allows showing a list of extended statistics by\n>> \"\\dz\" command on psql. I believe this feature helps DBA and users who\n>> would like to know all extended statistics easily. :-D\n>>\n>> I have not a strong opinion to assign \"\\dz\". I prefer \"\\dx\" or \"\\de*\"\n>> than \"\\dz\" but they were already assigned. Therefore I used \"\\dz\"\n>> instead of them.\n>>\n>> Please find the attached patch.\n>> Any comments are welcome!\n>>\n>> For Example:\n>> =======================\n>> CREATE TABLE t1 (a INT, b INT);\n>> CREATE STATISTICS stts1 (dependencies) ON a, b FROM t1;\n>> CREATE STATISTICS stts2 (dependencies, ndistinct) ON a, b FROM t1;\n>> CREATE STATISTICS stts3 (dependencies, ndistinct, mcv) ON a, b FROM t1;\n>> ANALYZE t1;\n>>\n>> CREATE TABLE t2 (a INT, b INT, c INT);\n>> CREATE STATISTICS stts4 ON b, c FROM t2;\n>> ANALYZE t2;\n>>\n>> postgres=# \\dz\n>> List of extended statistics\n>> Schema | Table | Name | Columns | Ndistinct | Dependencies | MCV\n>> --------+-------+-------+---------+-----------+--------------+-----\n>> public | t1 | stts1 | a, b | f | t | f\n>> public | t1 | stts2 | a, b | t | t | f\n>> public | t1 | stts3 | a, b | t | t | t\n>> public | t2 | stts4 | b, c | t | t | t\n>> (4 rows)\n>>\n>> postgres=# \\?\n>> ...\n>> \\dy [PATTERN] list event triggers\n>> \\dz [PATTERN] list extended statistics\n>> \\l[+] [PATTERN] list databases\n>> ...\n>> =======================\n>>\n>> For now, I haven't written a document and regression test for that.\n>> I'll create it later.\n>\n>\n> +1 good idea\n\n+1 that's a good idea. 
Please add it to the next commitfest!\n\nYou have a typo:\n\n+ if (pset.sversion < 10000)\n+ {\n+ char sverbuf[32];\n+\n+ pg_log_error(\"The server (version %s) does not support\nextended statistics.\",\n+ formatPGVersionNumber(pset.sversion, false,\n+ sverbuf, sizeof(sverbuf)));\n+ return true;\n+ }\n\nthe version test is missing a 0, the feature looks otherwise ok.\n\nHow about using \\dX rather than \\dz?\n\n\n",
"msg_date": "Mon, 24 Aug 2020 07:54:36 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi!\n\n>> +1 good idea\n> \n> +1 that's a good idea. Please add it to the next commitfest!\n\nThanks!\n\n\n> You have a typo:\n> \n> + if (pset.sversion < 10000)\n> + {\n> + char sverbuf[32];\n> +\n> + pg_log_error(\"The server (version %s) does not support\n> extended statistics.\",\n> + formatPGVersionNumber(pset.sversion, false,\n> + sverbuf, sizeof(sverbuf)));\n> + return true;\n> + }\n> \n> the version test is missing a 0, the feature looks otherwise ok.\n\nOuch, I fixed on the attached patch.\n\nThe new patch includes:\n\n - Fix the version number check (10000 -> 100000)\n - Fix query to get extended stats info for sort order\n - Add handling [Pattern] e.g \\dz stts*\n - Add document and regression test for \\dz\n \n> How about using \\dX rather than \\dz?\n\nThanks for your suggestion!\nI'll replace it if I got consensus. :-D\n\nThanks,\nTatsuro Yamada",
"msg_date": "Mon, 24 Aug 2020 16:41:32 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Julien and Pavel!\n\n>> How about using \\dX rather than \\dz?\n> \n> Thanks for your suggestion!\n> I'll replace it if I got consensus. :-D\n\n>> How about using \\dX rather than \\dz?\n>\n>Thanks for your suggestion!\n>I'll replace it if I got consensus. :-D\n\n\nI re-read a help message of \\d* commands and realized it's better to\nuse \"\\dX\".\nThere are already cases where the commands differ due to differences\nin case, so I did the same way. Please find attached patch. :-D\n \nFor example:\n==========\n \\da[S] [PATTERN] list aggregates\n \\dA[+] [PATTERN] list access methods\n==========\n\nAttached patch uses \"\\dX\" instead of \"\\dz\":\n==========\n \\dx[+] [PATTERN] list extensions\n \\dX [PATTERN] list extended statistics\n==========\n\nResults of regress test of the feature are the following:\n==========\n-- check printing info about extended statistics\ncreate table t1 (a int, b int);\ncreate statistics stts_1 (dependencies) on a, b from t1;\ncreate statistics stts_2 (dependencies, ndistinct) on a, b from t1;\ncreate statistics stts_3 (dependencies, ndistinct, mcv) on a, b from t1;\ncreate table t2 (a int, b int, c int);\ncreate statistics stts_4 on b, c from t2;\ncreate table hoge (col1 int, col2 int, col3 int);\ncreate statistics stts_hoge on col1, col2, col3 from hoge;\n\n\\dX\n List of extended statistics\n Schema | Table | Name | Columns | Ndistinct | Dependencies | MCV\n--------+-------+-----------+------------------+-----------+--------------+-----\n public | hoge | stts_hoge | col1, col2, col3 | t | t | t\n public | t1 | stts_1 | a, b | f | t | f\n public | t1 | stts_2 | a, b | t | t | f\n public | t1 | stts_3 | a, b | t | t | t\n public | t2 | stts_4 | b, c | t | t | t\n(5 rows)\n\n\\dX stts_?\n List of extended statistics\n Schema | Table | Name | Columns | Ndistinct | Dependencies | MCV\n--------+-------+--------+---------+-----------+--------------+-----\n public | t1 | stts_1 | a, b | f | t | f\n public | t1 | stts_2 
| a, b | t | t | f\n public | t1 | stts_3 | a, b | t | t | t\n public | t2 | stts_4 | b, c | t | t | t\n(4 rows)\n\n\\dX *hoge\n List of extended statistics\n Schema | Table | Name | Columns | Ndistinct | Dependencies | MCV\n--------+-------+-----------+------------------+-----------+--------------+-----\n public | hoge | stts_hoge | col1, col2, col3 | t | t | t\n(1 row)\n==========\n\n\nThanks,\nTatsuro Yamada",
"msg_date": "Thu, 27 Aug 2020 15:13:09 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Yamada-san,\n\nOn Thu, Aug 27, 2020 at 03:13:09PM +0900, Tatsuro Yamada wrote:\n> \n> I re-read a help message of \\d* commands and realized it's better to\n> use \"\\dX\".\n> There are already cases where the commands differ due to differences\n> in case, so I did the same way. Please find attached patch. :-D\n> For example:\n> ==========\n> \\da[S] [PATTERN] list aggregates\n> \\dA[+] [PATTERN] list access methods\n> ==========\n> \n> Attached patch uses \"\\dX\" instead of \"\\dz\":\n> ==========\n> \\dx[+] [PATTERN] list extensions\n> \\dX [PATTERN] list extended statistics\n> ==========\n\n\nThanks for updating the patch! This alias will probably be easier to remember.\n\n\n> \n> Results of regress test of the feature are the following:\n> ==========\n> -- check printing info about extended statistics\n> create table t1 (a int, b int);\n> create statistics stts_1 (dependencies) on a, b from t1;\n> create statistics stts_2 (dependencies, ndistinct) on a, b from t1;\n> create statistics stts_3 (dependencies, ndistinct, mcv) on a, b from t1;\n> create table t2 (a int, b int, c int);\n> create statistics stts_4 on b, c from t2;\n> create table hoge (col1 int, col2 int, col3 int);\n> create statistics stts_hoge on col1, col2, col3 from hoge;\n> \n> \\dX\n> List of extended statistics\n> Schema | Table | Name | Columns | Ndistinct | Dependencies | MCV\n> --------+-------+-----------+------------------+-----------+--------------+-----\n> public | hoge | stts_hoge | col1, col2, col3 | t | t | t\n> public | t1 | stts_1 | a, b | f | t | f\n> public | t1 | stts_2 | a, b | t | t | f\n> public | t1 | stts_3 | a, b | t | t | t\n> public | t2 | stts_4 | b, c | t | t | t\n> (5 rows)\n> \n> \\dX stts_?\n> List of extended statistics\n> Schema | Table | Name | Columns | Ndistinct | Dependencies | MCV\n> --------+-------+--------+---------+-----------+--------------+-----\n> public | t1 | stts_1 | a, b | f | t | f\n> public | t1 | stts_2 | a, b | t | t | f\n> 
public | t1 | stts_3 | a, b | t | t | t\n> public | t2 | stts_4 | b, c | t | t | t\n> (4 rows)\n> \n> \\dX *hoge\n> List of extended statistics\n> Schema | Table | Name | Columns | Ndistinct | Dependencies | MCV\n> --------+-------+-----------+------------------+-----------+--------------+-----\n> public | hoge | stts_hoge | col1, col2, col3 | t | t | t\n> (1 row)\n> ==========\n\n\nThanks also for the documentation and regression tests. This overall looks\ngood, I just have two comments:\n\n- there's a whitespace issue in the documentation part:\n\nadd_list_extended_stats_for_psql_by_dX_command.patch:10: tab in indent.\n\t <varlistentry>\nwarning: 1 line adds whitespace errors.\n\n- You're sorting the output on schema, table, extended statistics and columns\n but I think the last one isn't required since extended statistics names are\n unique.\n\n\n",
"msg_date": "Thu, 27 Aug 2020 15:15:04 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Julien!\n \n\n> Thanks also for the documentation and regression tests. This overall looks\n> good, I just have a two comments:\n\n\nThank you for reviewing the patch! :-D\n\n\n> - there's a whitespace issue in the documentation part:\n> \n> add_list_extended_stats_for_psql_by_dX_command.patch:10: tab in indent.\n> \t <varlistentry>\n> warning: 1 line adds whitespace errors.\n\n\nOops, I forgot to use \"git diff --check\". I fixed it.\n\n \n> - You're sorting the output on schema, table, extended statistics and columns\n> but I think the last one isn't required since extended statistics names are\n> unique.\n\n\nYou are right.\nThe sort key \"columns\" was not necessary so I removed it.\n\nAttached new patch includes the above two fixes:\n\n - Fix whitespace issue in the documentation part\n - Remove unnecessary sort key from the query\n (ORDER BY 1, 2, 3, 4 -> ORDER BY 1, 2, 3)\n\n\nThanks,\nTatsuro Yamada",
"msg_date": "Fri, 28 Aug 2020 08:42:55 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "+1 for the general idea, and +1 for \\dX being the syntax to use\n\nIMO the per-type columns should show both the type being enabled as\nwell as it being built.\n\n(How many more stat types do we expect -- Tomas? I wonder if having one\ncolumn per type is going to scale in the long run.)\n\nAlso, the stat obj name column should be first, followed by a single\ncolumn listing both table and columns that it applies to. Keep in mind\nthat in the future we might want to add stats that cross multiple tables\n-- that's why the CREATE syntax is the way it is. So we should give\nroom for that in psql's display too.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 27 Aug 2020 19:53:23 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Alvaro!\n\nIt's been ages since we created a progress reporting feature together. :-D\n\n>>> +1 good idea\n>>\n>> +1 that's a good idea. Please add it to the next commitfest!\n>\n>+1 for the general idea, and +1 for \\dX being the syntax to use\n\nThank you for voting!\n\n\n> IMO the per-type columns should show both the type being enabled as\nwell as it being built.\n\nHmm. I'm not sure how to get the status (enabled or disabled) of\nextended stats. :(\nCould you explain it more?\n\n\n> Also, the stat obj name column should be first, followed by a single\n> column listing both table and columns that it applies to. Keep in mind\n> that in the future we might want to add stats that cross multiple tables\n> -- that's why the CREATE syntax is the way it is. So we should give\n> room for that in psql's display too.\n\nI understand your suggestions are the following, right?\n\n* The Current column order:\n===================\n Schema | Table | Name | Columns | Ndistinct | Dependencies | MCV\n--------+-------+--------+---------+-----------+--------------+-----\n public | t1 | stts_1 | a, b | f | t | f\n public | t1 | stts_2 | a, b | t | t | f\n public | t1 | stts_3 | a, b | t | t | t\n public | t2 | stts_4 | b, c | t | t | t\n===================\n\n* The suggested column order is like this:\n===================\n Name | Schema | Table | Columns | Ndistinct | Dependencies | MCV\n-----------+--------+-------+------------------+-----------+--------------+-----\n stts_1 | public | t1 | a, b | f | t | f\n stts_2 | public | t1 | a, b | t | t | f\n stts_3 | public | t1 | a, b | t | t | t\n stts_4 | public | t2 | b, c | t | t | t\n===================\n\n* In the future, Extended stats that cross multiple tables will be\n shown maybe... 
(t1, t2):\n===================\n Name | Schema | Table | Columns | Ndistinct | Dependencies | MCV\n-----------+--------+--------+------------------+-----------+--------------+-----\n stts_5 | public | t1, t2 | a, b | f | t | f\n===================\n\nIf so, I can revise the column order as you suggested easily.\nHowever, I have no idea how to show extended stats that cross\nmultiple tables and the status now.\n\nI suppose that the current column order is sufficient if there is\nno improvement of extended stats on PG14. Do you know any plan to\nimprove extended stats such as to allow it to cross multiple tables on PG14?\n\n\nIn addition,\nCurrently, I use this query to get Extended stats info from pg_statistic_ext.\n\n SELECT\n stxnamespace::pg_catalog.regnamespace AS \"Schema\",\n c.relname AS \"Table\",\n stxname AS \"Name\",\n (SELECT pg_catalog.string_agg(pg_catalog.quote_ident(attname),', ')\n FROM pg_catalog.unnest(stxkeys) s(attnum)\n JOIN pg_catalog.pg_attribute a ON (stxrelid = a.attrelid AND\n a.attnum = s.attnum AND NOT attisdropped)) AS \"Columns\",\n 'd' = any(stxkind) AS \"Ndistinct\",\n 'f' = any(stxkind) AS \"Dependencies\",\n 'm' = any(stxkind) AS \"MCV\"\n FROM pg_catalog.pg_statistic_ext\n INNER JOIN pg_catalog.pg_class c\n ON stxrelid = c.oid\n ORDER BY 1, 2, 3;\n\nThanks,\nTatsuro Yamada\n\n\n\n\n\n",
"msg_date": "Fri, 28 Aug 2020 11:07:43 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On 2020-Aug-28, Tatsuro Yamada wrote:\n\n> > IMO the per-type columns should show both the type being enabled as\n> > well as it being built.\n> \n> Hmm. I'm not sure how to get the status (enabled or disabled) of\n> extended stats. :(\n> Could you explain it more?\n\npg_statistic_ext_data.stxdndistinct is not null if the stats have been\nbuilt. (I'm not sure whether there's an easier way to determine this.)\n\n\n> * The suggested column order is like this:\n> ===================\n> Name | Schema | Table | Columns | Ndistinct | Dependencies | MCV\n> -----------+--------+-------+------------------+-----------+--------------+-----\n> stts_1 | public | t1 | a, b | f | t | f\n> stts_2 | public | t1 | a, b | t | t | f\n> stts_3 | public | t1 | a, b | t | t | t\n> stts_4 | public | t2 | b, c | t | t | t\n> ===================\n\nI suggest to do this\n\n Name | Schema | Definition | Ndistinct | Dependencies | MCV\n -----------+--------+--------------------------+-----------+--------------+-----\n stts_1 | public | (a, b) FROM t1 | f | t | f\n\n> I suppose that the current column order is sufficient if there is\n> no improvement of extended stats on PG14. 
Do you know any plan to\n> improve extended stats such as to allow it to cross multiple tables on PG14?\n\nI suggest that changing it in the future is going to be an uphill\nbattle, so better get it right from the get go, without requiring a\nfuture restructure.\n\n> In addition,\n> Currently, I use this query to get Extended stats info from pg_statistic_ext.\n\nMaybe something like this would do\n\nSELECT\n stxnamespace::pg_catalog.regnamespace AS \"Schema\",\n stxname AS \"Name\",\n format('%s FROM %s',\n (SELECT pg_catalog.string_agg(pg_catalog.quote_ident(attname),', ')\n FROM pg_catalog.unnest(stxkeys) s(attnum)\n JOIN pg_catalog.pg_attribute a ON (stxrelid = a.attrelid AND\n a.attnum = s.attnum AND NOT attisdropped)),\n stxrelid::regclass) AS \"Definition\",\n CASE WHEN stxdndistinct IS NOT NULL THEN 'built' WHEN 'd' = any(stxkind) THEN 'enabled, not built' END AS \"n-distinct\",\n CASE WHEN stxddependencies IS NOT NULL THEN 'built' WHEN 'f' = any(stxkind) THEN 'enabled, not built' END AS \"functional dependencies\",\n CASE WHEN stxdmcv IS NOT NULL THEN 'built' WHEN 'm' = any(stxkind) THEN 'enabled, not built' END AS mcv\n FROM pg_catalog.pg_statistic_ext es\n INNER JOIN pg_catalog.pg_class c\n ON stxrelid = c.oid\n LEFT JOIN pg_catalog.pg_statistic_ext_data esd ON es.oid = esd.stxoid\n ORDER BY 1, 2, 3;\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 27 Aug 2020 23:26:17 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 07:53:23PM -0400, Alvaro Herrera wrote:\n>+1 for the general idea, and +1 for \\dX being the syntax to use\n>\n>IMO the per-type columns should show both the type being enabled as\n>well as it being built.\n>\n>(How many more stat types do we expect -- Tomas? I wonder if having one\n>column per type is going to scale in the long run.)\n>\n\nI wouldn't expect a huge number of types. I can imagine maybe twice the\ncurrent number of types, but not much more. But I'm not sure the output\nis easy to read even now ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 29 Aug 2020 23:47:34 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 11:26:17PM -0400, Alvaro Herrera wrote:\n>On 2020-Aug-28, Tatsuro Yamada wrote:\n>\n>> > IMO the per-type columns should show both the type being enabled as\n>> > well as it being built.\n>>\n>> Hmm. I'm not sure how to get the status (enabled or disabled) of\n>> extended stats. :(\n>> Could you explain it more?\n>\n>pg_statistic_ext_data.stxdndistinct is not null if the stats have been\n>built. (I'm not sure whether there's an easier way to determine this.)\n>\n\nIt's the only way, I think. Which types were requested is stored in\n\n pg_statistic_ext.stxkind\n\nand what was built is in pg_statistic_ext_data. But if we want the\noutput to show both what was requested and which types were actually\nbuilt, that'll effectively double the number of columns needed :-(\n\nAlso, it might be useful to show the size of the statistics built, just\nlike we show for \\d+ etc.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 29 Aug 2020 23:54:58 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On 2020-Aug-29, Tomas Vondra wrote:\n\n> But if we want the\n> output to show both what was requested and which types were actually\n> built, that'll effectively double the number of columns needed :-(\n\nI was thinking it would be one column per type showing either disabled or enabled\nor built. But another idea is to show one type per line that's at least\nenabled.\n\n> Also, it might be useful to show the size of the statistics built, just\n> like we show for \\d+ etc.\n\n\\dX+ I suppose?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 29 Aug 2020 18:43:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Sat, Aug 29, 2020 at 06:43:47PM -0400, Alvaro Herrera wrote:\n>On 2020-Aug-29, Tomas Vondra wrote:\n>\n>> But if we want the\n>> output to show both what was requested and which types were actually\n>> built, that'll effectively double the number of columns needed :-(\n>\n>I was thinking it would be one column per type showing either disabled or enabled\n>or built. But another idea is to show one type per line that's at least\n>enabled.\n>\n>> Also, it might be useful to show the size of the statistics built, just\n>> like we show for \\d+ etc.\n>\n>\\dX+ I suppose?\n>\n\nRight. I've only used \\d+ as an example of an existing command showing\nsizes of the objects.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 30 Aug 2020 00:54:36 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On 2020-Aug-30, Tomas Vondra wrote:\n\n> On Sat, Aug 29, 2020 at 06:43:47PM -0400, Alvaro Herrera wrote:\n> > On 2020-Aug-29, Tomas Vondra wrote:\n\n> > > Also, it might be useful to show the size of the statistics built, just\n> > > like we show for \\d+ etc.\n> > \n> > \\dX+ I suppose?\n> \n> Right. I've only used \\d+ as an example of an existing command showing\n> sizes of the objects.\n\nYeah, I understood it that way too.\n\nHow can you measure the size of the stat objects in a query? Are you\nthinking in pg_column_size()?\n\nI wonder how to report that. Knowing that psql \\-commands are not meant\nfor anything other than human consumption, maybe we can use a format()\nstring that says \"built: %d bytes\" when \\dX+ is used (for each stat type),\nand just \"built\" when \\dX is used. What do people think about this?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 30 Aug 2020 12:33:29 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Sun, Aug 30, 2020 at 12:33:29PM -0400, Alvaro Herrera wrote:\n>On 2020-Aug-30, Tomas Vondra wrote:\n>\n>> On Sat, Aug 29, 2020 at 06:43:47PM -0400, Alvaro Herrera wrote:\n>> > On 2020-Aug-29, Tomas Vondra wrote:\n>\n>> > > Also, it might be useful to show the size of the statistics built, just\n>> > > like we show for \\d+ etc.\n>> >\n>> > \\dX+ I suppose?\n>>\n>> Right. I've only used \\d+ as an example of an existing command showing\n>> sizes of the objects.\n>\n>Yeah, I understood it that way too.\n>\n>How can you measure the size of the stat objects in a query? Are you\n>thinking in pg_column_size()?\n>\n\nEither that or simply length() on the bytea value.\n\n>I wonder how to report that. Knowing that psql \\-commands are not meant\n>for anything other than human consumption, maybe we can use a format()\n>string that says \"built: %d bytes\" when \\dX+ is used (for each stat type),\n>and just \"built\" when \\dX is used. What do people think about this?\n>\n\nI'd use the same approach as \\d+, i.e. a separate column with the size.\nMaybe that'd mean too many columns, though.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 30 Aug 2020 18:48:18 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Sun, Aug 30, 2020 at 12:33:29PM -0400, Alvaro Herrera wrote:\n>> I wonder how to report that. Knowing that psql \\-commands are not meant\n>> for anything other than human consumption, maybe we can use a format()\n>> string that says \"built: %d bytes\" when \\dX+ is used (for each stat type),\n>> and just \"built\" when \\dX is used. What do people think about this?\n\nSeems a little too cute to me.\n\n> I'd use the same approach as \\d+, i.e. a separate column with the size.\n> Maybe that'd mean too many columns, though.\n\npsql already has \\d commands with so many columns that you pretty much\nhave to use \\x mode to make them legible; \\df+ for instance. I don't\nmind if \\dX+ is also in that territory. It'd be good though if plain\n\\dX can fit in a normal terminal window.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 30 Aug 2020 12:59:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Alvaro,\n\n>>> IMO the per-type columns should show both the type being enabled as\n>>> well as it being built.\n>>\n>> Hmm. I'm not sure how to get the status (enabled or disabled) of\n>> extended stats. :(\n>> Could you explain it more?\n>\n> pg_statistic_ext_data.stxdndistinct is not null if the stats have been\n> built. (I'm not sure whether there's an easier way to determine this.)\n\n\nAh.. I see! Thank you.\n\n\n> I suggest to do this\n>\n> Name | Schema | Definition | Ndistinct | Dependencies | MCV\n> -----------+--------+--------------------------+-----------+--------------+-----\n> stts_1 | public | (a, b) FROM t1 | f | t | f\n>\n>> I suppose that the current column order is sufficient if there is\n>> no improvement of extended stats on PG14. Do you know any plan to\n>> improve extended stats such as to allow it to cross multiple tables on PG14?\n>\n> I suggest that changing it in the future is going to be an uphill\n> battle, so better get it right from the get go, without requiring a\n> future restructure.\n\n\nI understand your suggestions. 
I'll replace \"Columns\" and \"Table\" columns with \"Definition\" column.\n\n\n>> Currently, I use this query to get Extended stats info from pg_statistic_ext.\n>\n> Maybe something like this would do\n>\n> SELECT\n> stxnamespace::pg_catalog.regnamespace AS \"Schema\",\n> stxname AS \"Name\",\n> format('%s FROM %s',\n> (SELECT pg_catalog.string_agg(pg_catalog.quote_ident(attname),', ')\n> FROM pg_catalog.unnest(stxkeys) s(attnum)\n> JOIN pg_catalog.pg_attribute a ON (stxrelid = a.attrelid AND\n> a.attnum = s.attnum AND NOT attisdropped)),\n> stxrelid::regclass) AS \"Definition\",\n> CASE WHEN stxdndistinct IS NOT NULL THEN 'built' WHEN 'd' = any(stxkind) THEN 'enabled, not built' END AS \"n-distinct\",\n> CASE WHEN stxddependencies IS NOT NULL THEN 'built' WHEN 'f' = any(stxkind) THEN 'enabled, not built' END AS \"functional dependencies\",\n> CASE WHEN stxdmcv IS NOT NULL THEN 'built' WHEN 'm' = any(stxkind) THEN 'enabled, not built' END AS mcv\n> FROM pg_catalog.pg_statistic_ext es\n> INNER JOIN pg_catalog.pg_class c\n> ON stxrelid = c.oid\n> LEFT JOIN pg_catalog.pg_statistic_ext_data esd ON es.oid = esd.stxoid\n> ORDER BY 1, 2, 3;\n\nGreat! It helped me a lot to understand your suggestions correctly. Thanks. 
:-D\nI got the below results by your query.\n\n========\ncreate table t1 (a int, b int);\ncreate statistics stts_1 (dependencies) on a, b from t1;\ncreate statistics stts_2 (dependencies, ndistinct) on a, b from t1;\ncreate statistics stts_3 (dependencies, ndistinct, mcv) on a, b from t1;\ncreate table t2 (a int, b int, c int);\ncreate statistics stts_4 on b, c from t2;\ncreate table hoge (col1 int, col2 int, col3 int);\ncreate statistics stts_hoge on col1, col2, col3 from hoge;\n\ninsert into t1 select i,i from generate_series(1,100) i;\nanalyze t1;\n\n\nYour query gave this result:\n\n Schema | Name | Definition | n-distinct | functional dependencies | mcv\n--------+-----------+----------------------------+--------------------+-------------------------+--------------------\n public | stts_1 | a, b FROM t1 | | built |\n public | stts_2 | a, b FROM t1 | built | built |\n public | stts_3 | a, b FROM t1 | built | built | built\n public | stts_4 | b, c FROM t2 | enabled, not built | enabled, not built | enabled, not built\n public | stts_hoge | col1, col2, col3 FROM hoge | enabled, not built | enabled, not built | enabled, not built\n(5 rows)\n========\n\nI guess \"enabled, not built\" is a little redundant. The status would be better with\nthree patterns: \"built\", \"not built\" or nothing (NULL) like these:\n\n - \"built\": extended stats is defined and built (collected by analyze cmd)\n - \"not built\": extended stats is defined but has not been built yet\n - nothing (NULL): extended stats is not defined\n\nWhat do you think about it?\n\n\nI will send a new patch including:\n\n - Replace \"Columns\" and \"Table\" column with \"Definition\"\n - Show the status (built/not built/null) of extended stats by using\n pg_statistic_ext_data\n\nThanks,\nTatsuro Yamada\n\n\n\n\n\n",
"msg_date": "Mon, 31 Aug 2020 08:56:52 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On 2020/08/31 1:59, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Sun, Aug 30, 2020 at 12:33:29PM -0400, Alvaro Herrera wrote:\n>>> I wonder how to report that. Knowing that psql \\-commands are not meant\n>>> for anything other than human consumption, maybe we can use a format()\n>>> string that says \"built: %d bytes\" when \\dX+ is used (for each stat type),\n>>> and just \"built\" when \\dX is used. What do people think about this?\n> \n> Seems a little too cute to me.\n> \n>> I'd use the same approach as \\d+, i.e. a separate column with the size.\n>> Maybe that'd mean too many columns, though.\n> \n> psql already has \\d commands with so many columns that you pretty much\n> have to use \\x mode to make them legible; \\df+ for instance. I don't\n> mind if \\dX+ is also in that territory. It'd be good though if plain\n> \\dX can fit in a normal terminal window.\n\n\nHmm. How about these instead of \"built: %d bytes\"?\nI added three columns (N_size, D_size, M_size) to show size. 
See below:\n\n===================\n postgres=# \\dX\n List of extended statistics\n Schema | Name | Definition | N_distinct | Dependencies | Mcv\n--------+-----------+----------------------------+------------+--------------+-----------\n public | stts_1 | a, b FROM t1 | | built |\n public | stts_2 | a, b FROM t1 | built | built |\n public | stts_3 | a, b FROM t1 | built | built | built\n public | stts_4 | b, c FROM t2 | not built | not built | not built\n public | stts_hoge | col1, col2, col3 FROM hoge | not built | not built | not built\n(5 rows)\n\npostgres=# \\dX+\n List of extended statistics\n Schema | Name | Definition | N_distinct | Dependencies | Mcv | N_size | D_size | M_size\n--------+-----------+----------------------------+------------+--------------+-----------+--------+--------+--------\n public | stts_1 | a, b FROM t1 | | built | | | 40 |\n public | stts_2 | a, b FROM t1 | built | built | | 13 | 40 |\n public | stts_3 | a, b FROM t1 | built | built | built | 13 | 40 | 6126\n public | stts_4 | b, c FROM t2 | not built | not built | not built | | |\n public | stts_hoge | col1, col2, col3 FROM hoge | not built | not built | not built | | |\n===================\n\nI used this query to get results of \"\\dX+\".\n===================\n SELECT\n stxnamespace::pg_catalog.regnamespace AS \"Schema\",\n stxname AS \"Name\",\n format('%s FROM %s',\n (SELECT pg_catalog.string_agg(pg_catalog.quote_ident(attname),', ')\n FROM pg_catalog.unnest(stxkeys) s(attnum)\n JOIN pg_catalog.pg_attribute a\n ON (stxrelid = a.attrelid\n AND a.attnum = s.attnum\n AND NOT attisdropped)),\n stxrelid::regclass) AS \"Definition\",\n CASE WHEN esd.stxdndistinct IS NOT NULL THEN 'built'\n WHEN 'd' = any(stxkind) THEN 'not built'\n END AS \"N_distinct\",\n CASE WHEN esd.stxddependencies IS NOT NULL THEN 'built'\n WHEN 'f' = any(stxkind) THEN 'not built'\n END AS \"Dependencies\",\n CASE WHEN esd.stxdmcv IS NOT NULL THEN 'built'\n WHEN 'm' = any(stxkind) THEN 'not built'\n END AS 
\"Mcv\",\n pg_catalog.length(stxdndistinct) AS \"N_size\",\n pg_catalog.length(stxddependencies) AS \"D_size\",\n pg_catalog.length(stxdmcv) AS \"M_size\"\n FROM pg_catalog.pg_statistic_ext es\n INNER JOIN pg_catalog.pg_class c\n ON stxrelid = c.oid\n LEFT JOIN pg_catalog.pg_statistic_ext_data esd\n ON es.oid = esd.stxoid\n ORDER BY 1, 2;\n===================\n \n\nAttached patch includes:\n\n - Replace \"Columns\" and \"Table\" column with \"Definition\"\n - Show the status (built/not built/null) of extended stats by\n using pg_statistic_ext_data\n - Add \"\\dX+\" command to show size of extended stats\n\nPlease find the attached file! :-D\n\n\nThanks,\nTatsuro Yamada",
"msg_date": "Mon, 31 Aug 2020 10:24:23 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 07:53:23PM -0400, Alvaro Herrera wrote:\n> +1 for the general idea, and +1 for \\dX being the syntax to use\n> \n> IMO the per-type columns should show both the type being enabled as\n> well as it being built.\n> \n> (How many more stat types do we expect -- Tomas? I wonder if having one\n> column per type is going to scale in the long run.)\n> \n> Also, the stat obj name column should be first, followed by a single\n> column listing both table and columns that it applies to. Keep in mind\n> that in the future we might want to add stats that cross multiple tables\n> -- that's why the CREATE syntax is the way it is. So we should give\n> room for that in psql's display too.\n\nThere's also a plan for CREATE STATISTICS to support expression statistics, with\nthe statistics functionality of an expression index, but without the cost of\nindex-update on UPDATE/DELETE. That's Tomas' patch here:\nhttps://commitfest.postgresql.org/29/2421/\n\nI think that would compute ndistinct and MCV, same as indexes, but not\ndependencies. To me, I think it's better if there's a single column showing\nthe \"kinds\" of statistics to be generated (stxkind), rather than a column for\neach.\n\nI'm not sure why the length of the stats lists cast as text is useful to show?\nWe don't have a slash-dee command to show the number of MCV or histogram in\ntraditional, 1-D stats in pg_statistic, right ? I think anybody wanting that\nwould learn to SELECT FROM pg_statistic*. Also, the length of the text output\nisn't very meaningful ? 
If this is json, maybe you'd do something like this:\n|SELECT a.stxdndistinct , COUNT(b) FROM pg_statistic_ext_data a , json_each(stxdndistinct::Json) AS b GROUP BY 1\n\nI guess stxdmcv isn't json, but it seems especially meaningless to show\nlength() of its ::text, since we don't even \"deserialize\" the object to begin\nwith.\n\nBTW, I've just started a new thread about displaying in psql \\d the stats\ntarget of extended stats.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 31 Aug 2020 00:18:48 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On 2020-Aug-30, Tomas Vondra wrote:\n\n> On Sun, Aug 30, 2020 at 12:33:29PM -0400, Alvaro Herrera wrote:\n\n> > I wonder how to report that. Knowing that psql \\-commands are not meant\n> > for anything other than human consumption, maybe we can use a format()\n> > string that says \"built: %d bytes\" when \\dX+ is used (for each stat type),\n> > and just \"built\" when \\dX is used. What do people think about this?\n> \n> I'd use the same approach as \\d+, i.e. a separate column with the size.\n> Maybe that'd mean too many columns, though.\n\nAre you thinking in one size for all stats, or a combined size? If the\nformer, then yes it'd be too many columns.\n\nI'm trying to figure out what can the user *do* with that data. Can\nthey make the sample size smaller/bigger if the stats data is too large?\nCan they do that for each individual stats type? If so, it'd make sense\nto list each type's size separately.\n\nIf we do put each type in its own row -- at least \"logical\" row, say\nstring_agg(unnest(array_of_types), '\\n') -- then we can put the size of each type\nin a separate column with string_agg(unnest(array_of_sizes), '\\n') \n\n statname | definition | type | size\n----------+-----------------+--------------------------+-----------\n someobj | (a, b) FROM tab | n-distinct: built | 2000 bytes\n | func-dependencies: built | 4000 bytes\n another | (a, c) FROM tab | n-distinct: enabled | <null>\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 31 Aug 2020 10:28:38 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> If we do put each type in its own row -- at least \"logical\" row, say\n> string_agg(unnest(array_of_types), '\\n') -- then we can put the size of each type\n> in a separate column with string_agg(unnest(array_of_sizes), '\\n') \n\n> statname | definition | type | size\n> ----------+-----------------+--------------------------+-----------\n> someobj | (a, b) FROM tab | n-distinct: built | 2000 bytes\n> | func-dependencies: built | 4000 bytes\n> another | (a, c) FROM tab | n-distint: enabled | <null>\n\nI guess I'm wondering why the size is of such interest that we\nneed it at all here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 31 Aug 2020 10:58:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Mon, Aug 31, 2020 at 10:58:11AM -0400, Tom Lane wrote:\n>Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> If we do put each type in its own row -- at least \"logical\" row, say\n>> string_agg(unnest(array_of_types), '\\n') -- then we can put the size of each type\n>> in a separate column with string_agg(unnest(array_of_sizes), '\\n')\n>\n>>  statname |   definition    |    type                  | size  \n>> ----------+-----------------+--------------------------+-----------\n>>  someobj  | (a, b) FROM tab | n-distinct: built        | 2000 bytes\n>>           |                 | func-dependencies: built | 4000 bytes\n>>  another  | (a, c) FROM tab | n-distinct: enabled      | <null>\n>\n>I guess I'm wondering why the size is of such interest that we\n>need it at all here.\n>\n\nI agree it may not be important enough. I did use it during development\netc. but maybe it's not something we need to include in this list (even\nif it's just in the \\dX+ variant).\n\nregards\n\n-- \nTomas Vondra                  http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 31 Aug 2020 17:20:57 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Mon, Aug 31, 2020 at 10:28:38AM -0400, Alvaro Herrera wrote:\n>On 2020-Aug-30, Tomas Vondra wrote:\n>\n>> On Sun, Aug 30, 2020 at 12:33:29PM -0400, Alvaro Herrera wrote:\n>\n>> > I wonder how to report that. Knowing that psql \\-commands are not meant\n>> > for anything other than human consumption, maybe we can use a format()\n>> > string that says \"built: %d bytes\" when \\dX+ is used (for each stat type),\n>> > and just \"built\" when \\dX is used. What do people think about this?\n>>\n>> I'd use the same approach as \\d+, i.e. a separate column with the size.\n>> Maybe that'd mean too many columns, though.\n>\n>Are you thinking in one size for all stats, or a combined size? If the\n>former, then yes it'd be too many columns.\n>\n\nI wonder if trying to list info about all stats from the statistics\nobject in a single line is necessary. Maybe we should split the info\ninto one line per statistics, so for example\n\n CREATE STATISTICS s (mcv, ndistinct, dependencies) ON ...\n\nwould result in three lines in the \\dX output. The statistics name would\nidentify which lines belong together, but other than that the pieces are\nmostly independent.\n\nThis would make it somewhat future-proof in case we add more statistics\ntypes, because the number of columns would not increase. OTOH maybe it's\npointless and/or against the purpose of listing statistics objects.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 31 Aug 2020 17:30:53 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On 2020-Aug-31, Tomas Vondra wrote:\n\n> I wonder if trying to list info about all stats from the statistics\n> object in a single line is necessary. Maybe we should split the info\n> into one line per statistics, so for example\n> \n>     CREATE STATISTICS s (mcv, ndistinct, dependencies) ON ...\n> \n> would result in three lines in the \\dX output. The statistics name would\n> identify which lines belong together, but other than that the pieces are\n> mostly independent.\n\nYeah, that's what I'm suggesting.  I don't think we need to repeat the\nname/definition for each line though.\n\nIt might be useful to know how does pspg show a single entry that's\nsplit in three lines, though.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 31 Aug 2020 12:18:09 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Mon, Aug 31, 2020 at 12:18:09PM -0400, Alvaro Herrera wrote:\n>On 2020-Aug-31, Tomas Vondra wrote:\n>\n>> I wonder if trying to list info about all stats from the statistics\n>> object in a single line is necessary. Maybe we should split the info\n>> into one line per statistics, so for example\n>>\n>> CREATE STATISTICS s (mcv, ndistinct, dependencies) ON ...\n>>\n>> would result in three lines in the \\dX output. The statistics name would\n>> identify which lines belong together, but other than that the pieces are\n>> mostly independent.\n>\n>Yeah, that's what I'm suggesting. I don't think we need to repeat the\n>name/definition for each line though.\n>\n>It might be useful to know how does pspg show a single entry that's\n>split in three lines, though.\n>\n\nAh, I didn't realize you're proposing that - I assumed it's broken\nsimply to make it readable, or something like that. I think the lines\nare mostly independent, so I'd suggest to include the name of the object\non each line. The question is whether this independence will remain true\nin the future - for example histograms would be built only on data not\nrepresented by the MCV list, so there's a close dependency there.\n\nNot sure about pspg, and I'm not sure it matters too much.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 31 Aug 2020 18:32:00 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "po 31. 8. 2020 v 18:32 odesílatel Tomas Vondra <tomas.vondra@2ndquadrant.com>\nnapsal:\n\n> On Mon, Aug 31, 2020 at 12:18:09PM -0400, Alvaro Herrera wrote:\n> >On 2020-Aug-31, Tomas Vondra wrote:\n> >\n> >> I wonder if trying to list info about all stats from the statistics\n> >> object in a single line is necessary. Maybe we should split the info\n> >> into one line per statistics, so for example\n> >>\n> >>     CREATE STATISTICS s (mcv, ndistinct, dependencies) ON ...\n> >>\n> >> would result in three lines in the \\dX output. The statistics name would\n> >> identify which lines belong together, but other than that the pieces are\n> >> mostly independent.\n> >\n> >Yeah, that's what I'm suggesting.  I don't think we need to repeat the\n> >name/definition for each line though.\n> >\n> >It might be useful to know how does pspg show a single entry that's\n> >split in three lines, though.\n> >\n>\n> Ah, I didn't realize you're proposing that - I assumed it's broken\n> simply to make it readable, or something like that. I think the lines\n> are mostly independent, so I'd suggest to include the name of the object\n> on each line. The question is whether this independence will remain true\n> in the future - for example histograms would be built only on data not\n> represented by the MCV list, so there's a close dependency there.\n>\n> Not sure about pspg, and I'm not sure it matters too much.\n>\n\npspg almost ignores multiline rows - the horizontal cursor is one row every\ntime. There is only one use case where pspg detects multiline rows - sorts,\nand pspg ensures correct content for multiline rows displayed in different\n(than input) order.\n\nRegards\n\nPavel\n\n\n>\n> regards\n>\n> --\n> Tomas Vondra                  http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n",
"msg_date": "Mon, 31 Aug 2020 20:38:11 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi,\n\n> >> I wonder if trying to list info about all stats from the statistics\n> >> object in a single line is necessary. Maybe we should split the info\n> >> into one line per statistics, so for example\n> >>\n> >> CREATE STATISTICS s (mcv, ndistinct, dependencies) ON ...\n> >>\n> >> would result in three lines in the \\dX output. The statistics name would\n> >> identify which lines belong together, but other than that the pieces are\n> >> mostly independent.\n> >\n> >Yeah, that's what I'm suggesting. I don't think we need to repeat the\n> >name/definition for each line though.\n> >\n> >It might be useful to know how does pspg show a single entry that's\n> >split in three lines, though.\n> >\n> \n> Ah, I didn't realize you're proposing that - I assumed it's broken\n> simply to make it readable, or something like that. I think the lines\n> are mostly independent, so I'd suggest to include the name of the object\n> on each line. The question is whether this independence will remain true\n> in the future - for example histograms would be built only on data not\n> represented by the MCV list, so there's a close dependency there.\n> \n> Not sure about pspg, and I'm not sure it matters too much.\n> \n> \n> pspg almost ignores multiline rows - the horizontal cursor is one row every time. There is only one use case where pspg detects multiline rows - sorts, and pspg ensures correct content for multiline rows displayed in different (than input) order.\n\n\n\nI try to summarize the discussion so far.\nIs my understanding right? Could you revise it if it has something wrong?\n\n\n* Summary\n\n 1. \"\\dX[+]\" doesn't display the Size of extended stats since the size is\n useful only for the development process of the stats.\n\n 2. 
each row should have stats name, definition, type, and status.\n      For example:\n\n       statname | definition       | type                        |\n      ----------+------------------+-----------------------------+\n       someobj  | (a, b) FROM tab  | n-distinct: built           |\n       someobj  | (a, b) FROM tab  | func-dependencies: built    |\n       someobj  | (a, b) FROM tab  | mcv: built                  |\n       sttshoge | (a, b) FROM hoge | n-distinct: required        |\n       sttshoge | (a, b) FROM hoge | func-dependencies: required |\n       sttscross| (a, b) FROM t1,t2| n-distinct: required        |\n\n\nMy opinion is below:\n\n  For 1., Agreed. I will remove it in the next patch.\n  For 2., I feel the design is not beautiful so I'd like to change it.\n          The reasons are:\n\n   - I think that even if we expected the number of types to double,\n     it would be better to put each type in its own column, not its own line.\n     Repeating items (the stats name and definition) should be removed.\n     It's okay to have many columns in the future, like \"\\df+\", because we can\n     use \"\\x\" mode to display them if we need it.\n\n   - The type column has two kinds of data: one is the stats type and the other\n     is the status. We know the principle \"One fact in One place\" for data modeling in\n     the RDBMS world, so it would be better to divide it.\n     I'd like to suggest the below design of the view.\n\n       statname | definition       | n-distinct | func-dependencies | mcv   |\n      ----------+------------------+------------+-------------------+-------|\n       someobj  | (a, b) FROM tab  | built      | built             | built |\n       sttshoge | (a, b) FROM hoge | required   | required          |       |\n       sttscross| (a, b) FROM t1,t2| required   |                   |       |\n\n\nAny thoughts?\n\n\nThanks,\nTatsuro Yamada\n\n\n\n\n\n",
"msg_date": "Thu, 03 Sep 2020 08:45:17 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Thu, Sep 03, 2020 at 08:45:17AM +0900, Tatsuro Yamada wrote:\n> I try to summarize the discussion so far.\n\nCould you provide at least a rebased version of the patch?  The CF bot\nis complaining here.\n--\nMichael",
"msg_date": "Thu, 17 Sep 2020 14:55:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Thu, Sep 17, 2020 at 02:55:31PM +0900, Michael Paquier wrote:\n> Could you provide at least a rebased version of the patch?  The CF bot\n> is complaining here.\n\nNot seeing this answered after two weeks, I have marked the patch as\nRwF for now.\n--\nMichael",
"msg_date": "Wed, 30 Sep 2020 15:19:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Michael-san and Hackers,\n\nOn 2020/09/30 15:19, Michael Paquier wrote:\n> On Thu, Sep 17, 2020 at 02:55:31PM +0900, Michael Paquier wrote:\n>> Could you provide at least a rebased version of the patch?  The CF bot\n>> is complaining here.\n> \n> Not seeing this answered after two weeks, I have marked the patch as\n> RwF for now.\n> --\n> Michael\n\n\nSorry for the delayed reply.\n\nI re-based the patch on the current head and did some\nrefactoring.\nI think the size of extended stats is not useful for DBA.\nShould I remove it?\n\nChanges:\n========\n  - Use a keyword \"defined\" instead of \"not built\"\n  - Use COALESCE function for size for extended stats\n\nResults of \\dX and \\dX+:\n========================\npostgres=# \\dX\n                            List of extended statistics\n   Schema    |   Name    |   Definition    | N_distinct | Dependencies |   Mcv\n-------------+-----------+-----------------+------------+--------------+---------\n public      | hoge1_ext | a, b FROM hoge1 | defined    | defined      | defined\n hoge1schema | hoge1_ext | a, b FROM hoge1 | built      | built        | built\n(2 rows)\n\npostgres=# \\dX+\n                                         List of extended statistics\n   Schema    |   Name    |   Definition    | N_distinct | Dependencies |   Mcv   | N_size | D_size | M_size\n-------------+-----------+-----------------+------------+--------------+---------+--------+--------+--------\n public      | hoge1_ext | a, b FROM hoge1 | defined    | defined      | defined |      0 |      0 |      0\n hoge1schema | hoge1_ext | a, b FROM hoge1 | built      | built        | built   |     13 |     40 |   6126\n(2 rows)\n\nQuery of \\dX+:\n==============\n  SELECT\n    stxnamespace::pg_catalog.regnamespace AS \"Schema\",\n    stxname AS \"Name\",\n    pg_catalog.format('%s FROM %s',\n      (SELECT pg_catalog.string_agg(pg_catalog.quote_ident(a.attname),', ')\n       FROM pg_catalog.unnest(es.stxkeys) s(attnum)\n       JOIN pg_catalog.pg_attribute a\n       ON (es.stxrelid = a.attrelid\n       AND a.attnum = s.attnum\n       AND NOT a.attisdropped)),\n    es.stxrelid::regclass) AS \"Definition\",\n    CASE WHEN esd.stxdndistinct IS NOT NULL THEN 'built'\n 
WHEN 'd' = any(stxkind) THEN 'defined'\n END AS \"N_distinct\",\n CASE WHEN esd.stxddependencies IS NOT NULL THEN 'built'\n WHEN 'f' = any(stxkind) THEN 'defined'\n END AS \"Dependencies\",\n CASE WHEN esd.stxdmcv IS NOT NULL THEN 'built'\n WHEN 'm' = any(stxkind) THEN 'defined'\n END AS \"Mcv\",\n COALESCE(pg_catalog.length(stxdndistinct), 0) AS \"N_size\",\n COALESCE(pg_catalog.length(stxddependencies), 0) AS \"D_size\",\n COALESCE(pg_catalog.length(stxdmcv), 0) AS \"M_size\"\n FROM pg_catalog.pg_statistic_ext es\n LEFT JOIN pg_catalog.pg_statistic_ext_data esd\n ON es.oid = esd.stxoid\n INNER JOIN pg_catalog.pg_class c\n ON es.stxrelid = c.oid\n ORDER BY 1, 2;\n\n\nRegards,\nTatsuro Yamada",
"msg_date": "Wed, 28 Oct 2020 15:07:56 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi,\n\n> Results of \\dX and \\dX+:\n> ========================\n> postgres=# \\dX\n>                             List of extended statistics\n>    Schema    |   Name    |   Definition    | N_distinct | Dependencies |   Mcv\n> -------------+-----------+-----------------+------------+--------------+---------\n>  public      | hoge1_ext | a, b FROM hoge1 | defined    | defined      | defined\n>  hoge1schema | hoge1_ext | a, b FROM hoge1 | built      | built        | built\n> (2 rows)\n\n\nI used \"Order by 1, 2\" on the query but I realized the ordering of\nresult was wrong so I fixed on the attached patch.\nPlease find the patch file. :-D\n\nRegards,\nTatsuro Yamada",
"msg_date": "Wed, 28 Oct 2020 16:20:25 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 03:07:56PM +0900, Tatsuro Yamada wrote:\n>Hi Michael-san and Hackers,\n>\n>On 2020/09/30 15:19, Michael Paquier wrote:\n>>On Thu, Sep 17, 2020 at 02:55:31PM +0900, Michael Paquier wrote:\n>>>Could you provide at least a rebased version of the patch?  The CF bot\n>>>is complaining here.\n>>\n>>Not seeing this answered after two weeks, I have marked the patch as\n>>RwF for now.\n>>--\n>>Michael\n>\n>\n>Sorry for the delayed reply.\n>\n>I re-based the patch on the current head and did some\n>refactoring.\n>I think the size of extended stats is not useful for DBA.\n>Should I remove it?\n>\n\nI think it's interesting / useful information, I'd keep it (in the\n\\dX+ output only, of course). But I think it needs to print the size\nsimilarly to \\d+, i.e. using pg_size_pretty - that'll include the unit\nand make it more readable for large stats.\n\n\nregards\n\n-- \nTomas Vondra                  http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 28 Oct 2020 20:06:01 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 04:20:25PM +0900, Tatsuro Yamada wrote:\n>Hi,\n>\n>>Results of \\dX and \\dX+:\n>>========================\n>>postgres=# \\dX\n>>                            List of extended statistics\n>>   Schema    |   Name    |   Definition    | N_distinct | Dependencies |   Mcv\n>>-------------+-----------+-----------------+------------+--------------+---------\n>> public      | hoge1_ext | a, b FROM hoge1 | defined    | defined      | defined\n>> hoge1schema | hoge1_ext | a, b FROM hoge1 | built      | built        | built\n>>(2 rows)\n>\n>\n>I used \"Order by 1, 2\" on the query but I realized the ordering of\n>result was wrong so I fixed on the attached patch.\n>Please find the patch file. :-D\n>\n\nThanks. I'll take a look at the beginning of the 2020-11 commitfest, and\nI hope to get this committed.\n\n\nregards\n\n-- \nTomas Vondra                  http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 28 Oct 2020 20:07:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas,\n\nOn 2020/10/29 4:07, Tomas Vondra wrote:\n> On Wed, Oct 28, 2020 at 04:20:25PM +0900, Tatsuro Yamada wrote:\n>> Hi,\n>>\n>>> Results of \\dX and \\dX+:\n>>> ========================\n>>> postgres=# \\dX\n>>> List of extended statistics\n>>> Schema | Name | Definition | N_distinct | Dependencies | Mcv\n>>> -------------+-----------+-----------------+------------+--------------+---------\n>>> public | hoge1_ext | a, b FROM hoge1 | defined | defined | defined\n>>> hoge1schema | hoge1_ext | a, b FROM hoge1 | built | built | built\n>>> (2 rows)\n>>\n>>\n>> I used \"Order by 1, 2\" on the query but I realized the ordering of\n>> result was wrong so I fixed on the attached patch.\n>> Please find the patch file. :-D\n>>\n> \n> Thanks. I'll take a look at the beginning of the 2020-11 commitfest, and\n> I hope to get this committed.\n\n\nThanks for your reply and I'm glad to hear that.\n\nI'm going to revise the patch as soon as possible to get this committed in\nthe next commitfest.\n\nRegards,\nTatsuro Yamada\n\n\n\n\n\n",
"msg_date": "Thu, 29 Oct 2020 10:22:47 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas,\n\nOn 2020/10/29 4:06, Tomas Vondra wrote:\n> On Wed, Oct 28, 2020 at 03:07:56PM +0900, Tatsuro Yamada wrote:\n>> Hi Michael-san and Hackers,\n>>\n>> On 2020/09/30 15:19, Michael Paquier wrote:\n>>> On Thu, Sep 17, 2020 at 02:55:31PM +0900, Michael Paquier wrote:\n>>>> Could you provide at least a rebased version of the patch?  The CF bot\n>>>> is complaining here.\n>>>\n>>> Not seeing this answered after two weeks, I have marked the patch as\n>>> RwF for now.\n>>> -- \n>>> Michael\n>>\n>>\n>> Sorry for the delayed reply.\n>>\n>> I re-based the patch on the current head and did some\n>> refactoring.\n>> I think the size of extended stats is not useful for DBA.\n>> Should I remove it?\n>>\n> \n> I think it's interesting / useful information, I'd keep it (in the\n> \\dX+ output only, of course). But I think it needs to print the size\n> similarly to \\d+, i.e. using pg_size_pretty - that'll include the unit\n> and make it more readable for large stats.\n\n\nThanks for your comment.\nI addressed it, so I kept the size of extended stats with the unit.\n\nChanges:\n========\n - Use pg_size_pretty to show the size of extended stats by \\dX+\n\nResult of \\dX+:\n===============\n   Schema    |    Name    |   Definition    | N_distinct | Dependencies |   Mcv   |  N_Size  |  D_Size  |   M_Size\n-------------+------------+-----------------+------------+--------------+---------+----------+----------+------------\n hoge1schema | hoge1_ext  | a, b FROM hoge1 | built      | built        | built   | 13 bytes | 40 bytes | 6126 bytes\n public      | hoge1_ext1 | a, b FROM hoge1 | defined    | defined      | defined | 0 bytes  | 0 bytes  | 0 bytes\n public      | hoge1_ext2 | a, b FROM hoge1 | defined    |              |         | 0 bytes  |          |\n(3 rows)\n\nPlease find the attached patch.\n\nRegards,\nTatsuro Yamada",
"msg_date": "Thu, 29 Oct 2020 10:34:44 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi,\n\n> I addressed it, so I keep the size of extended stats with the unit.\n> \n> Changes:\n> ========\n> - Use pg_size_pretty to show the size of extended stats by \\dX+\n\n\nI rebased the patch on the head and also added tab-completion.\nAny feedback is welcome.\n\n\nPreparing for tests:\n===========\ncreate table t1 (a int, b int);\ncreate statistics stts_1 (dependencies) on a, b from t1;\ncreate statistics stts_2 (dependencies, ndistinct) on a, b from t1;\ncreate statistics stts_3 (dependencies, ndistinct, mcv) on a, b from t1;\n\ncreate table t2 (a int, b int, c int);\ncreate statistics stts_4 on b, c from t2;\n\ncreate table hoge (col1 int, col2 int, col3 int);\ncreate statistics stts_hoge on col1, col2, col3 from hoge;\n\ncreate schema foo;\ncreate schema yama;\ncreate statistics foo.stts_foo on col1, col2 from hoge;\ncreate statistics yama.stts_yama (ndistinct, mcv) on col1, col3 from hoge;\n\ninsert into t1 select i,i from generate_series(1,100) i;\nanalyze t1;\n\nResult of \\dX:\n==============\npostgres=# \\dX\n List of extended statistics\n Schema | Name | Definition | N_distinct | Dependencies | Mcv\n--------+-----------+----------------------------+------------+--------------+---------\n foo | stts_foo | col1, col2 FROM hoge | defined | defined | defined\n public | stts_1 | a, b FROM t1 | | built |\n public | stts_2 | a, b FROM t1 | built | built |\n public | stts_3 | a, b FROM t1 | built | built | built\n public | stts_4 | b, c FROM t2 | defined | defined | defined\n public | stts_hoge | col1, col2, col3 FROM hoge | defined | defined | defined\n yama | stts_yama | col1, col3 FROM hoge | defined | | defined\n(7 rows)\n\nResult of \\dX+:\n===============\npostgres=# \\dX+\n List of extended statistics\n Schema | Name | Definition | N_distinct | Dependencies | Mcv | N_size | D_size | M_size\n--------+-----------+----------------------------+------------+--------------+---------+----------+----------+------------\n foo | stts_foo | 
col1, col2 FROM hoge | defined | defined | defined | 0 bytes | 0 bytes | 0 bytes\n public | stts_1 | a, b FROM t1 | | built | | | 40 bytes |\n public | stts_2 | a, b FROM t1 | built | built | | 13 bytes | 40 bytes |\n public | stts_3 | a, b FROM t1 | built | built | built | 13 bytes | 40 bytes | 6126 bytes\n public | stts_4 | b, c FROM t2 | defined | defined | defined | 0 bytes | 0 bytes | 0 bytes\n public | stts_hoge | col1, col2, col3 FROM hoge | defined | defined | defined | 0 bytes | 0 bytes | 0 bytes\n yama | stts_yama | col1, col3 FROM hoge | defined | | defined | 0 bytes | | 0 bytes\n(7 rows)\n\nResults of Tab-completion:\n===============\npostgres=# \\dX <Tab>\nfoo. pg_toast. stts_2 stts_hoge\ninformation_schema. public. stts_3 yama.\npg_catalog. stts_1 stts_4\n\npostgres=# \\dX+ <Tab>\nfoo. pg_toast. stts_2 stts_hoge\ninformation_schema. public. stts_3 yama.\npg_catalog. stts_1 stts_4\n\n\nRegards,\nTatsuro Yamada",
"msg_date": "Wed, 04 Nov 2020 12:04:48 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi,\n\nI took a look at this today, and I think the code is ready, but the\nregression test needs a bit more work:\n\n1) It's probably better to use somewhat more specific names for the\nobjects, especially when created in public schema. It decreases the\nchance of a collision with other tests (which may be hard to notice\nbecause of timing). I suggest we use \"stts_\" prefix or something like\nthat, per the attached 0002 patch. (0001 is just the v7 patch)\n\n2) The test is failing intermittently because it's executed in parallel\nwith stats_ext test, which is also creating extended statistics. So\ndepending on the timing the \\dX may list some of the stats_ext stuff.\nI'm not sure what to do about this. Either this part needs to be moved\nto a separate test executed in a different group, or maybe we should\nsimply move it to stats_ext.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 8 Nov 2020 22:53:34 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas,\n\n> I took a look at this today, and I think the code is ready, but the\n> regression test needs a bit more work:\n\nThanks for taking the time. :-D\n\n\n> 1) It's probably better to use somewhat more specific names for the\n> objects, especially when created in public schema. It decreases the\n> chance of a collision with other tests (which may be hard to notice\n> because of timing). I suggest we use \"stts_\" prefix or something like\n> that, per the attached 0002 patch. (0001 is just the v7 patch)\n\nI agree with your comment. Thanks.\n\n\n\n> 2) The test is failing intermittently because it's executed in parallel\n> with stats_ext test, which is also creating extended statistics. So\n> depending on the timing the \\dX may list some of the stats_ext stuff.\n> I'm not sure what to do about this. Either this part needs to be moved\n> to a separate test executed in a different group, or maybe we should\n> simply move it to stats_ext.\n\nI thought all tests related to meta-commands existed in psql.sql, but I\nrealized it's not true. For example, the test of \\dRp does not exist in\npsql.sql. Therefore, I moved the regression test of \\dX to stats_ext.sql\nto avoid the test failing in parallel.\n\nThe attached patches are the following:\n  - 0001-v8-Add-dX-command-on-psql.patch\n  - 0002-Add-regression-test-of-dX-to-stats_ext.sql.patch\n\nHowever, I feel the test of \\dX is not elegant, so I'm going to try\ncreating another one since it would be better to be aware of the context\nof existing extended stats tests.\n\nRegards,\nTatsuro Yamada",
"msg_date": "Tue, 10 Nov 2020 12:38:53 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi,\n\n \n>> 2) The test is failing intermittently because it's executed in parallel\n>> with stats_ext test, which is also creating extended statistics. So\n>> depending on the timing the \\dX may list some of the stats_ext stuff.\n>> I'm not sure what to do about this. Either this part needs to be moved\n>> to a separate test executed in a different group, or maybe we should\n>> simply move it to stats_ext.\n> \n> I thought all tests related to meta-commands exist in psql.sql, but I\n> realize it's not true. For example, the test of \\dRp does not exist in\n> psql.sql. Therefore, I moved the regression test of \\dX to stats_ext.sql\n> to avoid the test failed in parallel.\n> \n> Attached patches is following:\n> - 0001-v8-Add-dX-command-on-psql.patch\n> - 0002-Add-regression-test-of-dX-to-stats_ext.sql.patch\n> \n> However, I feel the test of \\dX is not elegant, so I'm going to try\n> creating another one since it would be better to be aware of the context\n> of existing extended stats tests.\n\nI tried to create another version of the regression test (0003).\n\"\\dX\" was added after ANALYZE command or SELECT... from pg_statistic_ext.\n\nPlease find the attached file:\n - 0003-Add-regression-test-of-dX-to-stats_ext.sql-another-ver\n\nBoth regression tests 0002 and 0003 are okay for me, I think.\nCould you choose one?\n\nRegards,\nTatsuro Yamada",
"msg_date": "Tue, 10 Nov 2020 17:12:19 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Thanks,\n\nIt's better to always post the whole patch series, so that cfbot can\ntest it properly. Sending just 0003 separately kind breaks that.\n\nAlso, 0003 seems to only tweak the .sql file, not the expected output,\nand there actually seems to be two places that mistakenly use \\dx (so\nlisting extensions) instead of \\dX. I've fixed both issues in the\nattached patches.\n\nHowever, I think the 0002 tests are better/sufficient - I prefer to keep\nit compact, not interleaving with the tests testing various other stuff.\nSo I don't intend to commit 0003, unless there's something that I don't\nsee for some reason.\n\nThe one remaining thing I'm not sure about is naming of the columns with\nsize of statistics - N_size, D_size and M_size does not seem very clear.\nAny clearer naming will however make the tables wider, though :-/\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 15 Nov 2020 19:22:59 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas,\n\nThanks for your comments and also revising patches.\n\nOn 2020/11/16 3:22, Tomas Vondra wrote:\n> It's better to always post the whole patch series, so that cfbot can\n> test it properly. Sending just 0003 separately kind breaks that.\n\nI now understand how \"cfbot\" works so that I'll take care of that\nwhen I send patches. Thanks.\n\n\n> Also, 0003 seems to only tweak the .sql file, not the expected output,\n> and there actually seems to be two places that mistakenly use \\dx (so\n> listing extensions) instead of \\dX. I've fixed both issues in the\n> attached patches.\n\nOops, sorry about that.\n\n \n> However, I think the 0002 tests are better/sufficient - I prefer to keep\n> it compact, not interleaving with the tests testing various other stuff.\n> So I don't intend to commit 0003, unless there's something that I don't\n> see for some reason.\n\nI Agreed. 0002 is easy to modify test cases and check results than 0003.\nTherefore, I'll go with 0002.\n\n \n> The one remaining thing I'm not sure about is naming of the columns with\n> size of statistics - N_size, D_size and M_size does not seem very clear.\n> Any clearer naming will however make the tables wider, though :-/\n\nYeah, I think so too, but I couldn't get an idea of a suitable name for\nthe columns when I created the patch.\nI don't prefer a long name but I'll replace the name with it to be clearer.\nFor example, s/N_size/Ndistinct_size/.\n\nPlease find attached patcheds:\n - 0001: Replace column names\n - 0002: Recreate regression test based on 0001\n\n\nRegards,\nTatsuro Yamada",
"msg_date": "Tue, 17 Nov 2020 13:35:07 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas and hackers,\n\n> I don't prefer a long name but I'll replace the name with it to be clearer.\n> For example, s/N_size/Ndistinct_size/.\n> \n> Please find attached patcheds:\n> - 0001: Replace column names\n> - 0002: Recreate regression test based on 0001\n\n\nI rebased the patch set on the master (7e5e1bba03), and the regression\ntest is good. Therefore, I changed the status of the patch: \"needs review\".\n\nI know that you proposed the new extended statistics[1], and it probably\nconflicts with the patch. I hope my patch will get commit before your\npatch committed to avoid the time of recreating. :-)\n\n\n[1] https://www.postgresql.org/message-id/flat/ad7891d2-e90c-b446-9fe2-7419143847d7%40enterprisedb.com\n\nThanks,\nTatsuro Yamada",
"msg_date": "Mon, 30 Nov 2020 11:19:10 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi,\n\n>I rebased the patch set on the master (7e5e1bba03), and the regression\n>test is good. Therefore, I changed the status of the patch: \"needs review\". \n\nHappy New Year!\n\nI rebased my patches on HEAD.\nPlease find attached files. :-D\n\nThanks,\nTatsuro Yamada",
"msg_date": "Tue, 05 Jan 2021 13:26:43 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On 1/5/21 5:26 AM, Tatsuro Yamada wrote:\n> Hi,\n> \n>> I rebased the patch set on the master (7e5e1bba03), and the regression\n>> test is good. Therefore, I changed the status of the patch: \"needs \n>> review\". \n> \n> Happy New Year!\n> \n> I rebased my patches on HEAD.\n> Please find attached files. :-D\n> \n\nThanks, and Happy new year to you too.\n\nI almost pushed this, but I have one more question. listExtendedStats \nfirst checks the server version, and errors out for pre-10 servers. \nShouldn't the logic building query check the server version too, to \ndecide whether to check the MCV stats? Otherwise it won't work on 10 and \n11, which does not support the \"mcv\" stats.\n\nI don't recall what exactly is our policy regarding new psql with older \nserver versions, but it seems strange to check for 10.0 and then fail \nanyway for \"supported\" versions.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 7 Jan 2021 00:09:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas,\n\n> Thanks, and Happy new year to you too.\n> \n> I almost pushed this, but I have one more question. listExtendedStats first checks the server version, and errors out for pre-10 servers. Shouldn't the logic building query check the server version too, to decide whether to check the MCV stats? Otherwise it won't work on 10 and 11, which does not support the \"mcv\" stats.\n>> I don't recall what exactly is our policy regarding new psql with older server versions, but it seems strange to check for 10.0 and then fail anyway for \"supported\" versions.\n\nThanks for your comments.\n\nI overlooked the check for MCV in the logic building query\nbecause I created the patch as a new feature on PG14.\nI'm not sure whether we should do back patch or not. However, I'll\nadd the check on the next patch because it is useful if you decide to\ndo the back patch on PG10, 11, 12, and 13.\n\nI wonder the column names added by \\dX+ is fine? For example,\n\"Ndistinct_size\" and \"Dependencies_size\". It looks like long names,\nbut acceptable?\n\nRegards,\nTatsuro Yamada\n\n\n\n\n",
"msg_date": "Thu, 07 Jan 2021 09:46:37 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "\n\nOn 1/7/21 1:46 AM, Tatsuro Yamada wrote:\n> Hi Tomas,\n> \n>> Thanks, and Happy new year to you too.\n>>\n>> I almost pushed this, but I have one more question. listExtendedStats \n>> first checks the server version, and errors out for pre-10 servers. \n>> Shouldn't the logic building query check the server version too, to \n>> decide whether to check the MCV stats? Otherwise it won't work on 10 \n>> and 11, which does not support the \"mcv\" stats.\n>>> I don't recall what exactly is our policy regarding new psql with \n>>> older server versions, but it seems strange to check for 10.0 and \n>>> then fail anyway for \"supported\" versions.\n> \n> Thanks for your comments.\n> \n> I overlooked the check for MCV in the logic building query\n> because I created the patch as a new feature on PG14.\n> I'm not sure whether we should do back patch or not. However, I'll\n> add the check on the next patch because it is useful if you decide to\n> do the back patch on PG10, 11, 12, and 13.\n> \n\n+1\n\nBTW perhaps a quick look at the other \\d commands would show if there \nare precedents. I didn't have time for that.\n\n> I wonder the column names added by \\dX+ is fine? For example,\n> \"Ndistinct_size\" and \"Dependencies_size\". It looks like long names,\n> but acceptable?\n> \n\nSeems acceptable - I don't have a better idea.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 7 Jan 2021 01:56:18 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On 2021-Jan-07, Tomas Vondra wrote:\n\n> On 1/7/21 1:46 AM, Tatsuro Yamada wrote:\n\n> > I overlooked the check for MCV in the logic building query\n> > because I created the patch as a new feature on PG14.\n> > I'm not sure whether we should do back patch or not. However, I'll\n> > add the check on the next patch because it is useful if you decide to\n> > do the back patch on PG10, 11, 12, and 13.\n> \n> BTW perhaps a quick look at the other \\d commands would show if there are\n> precedents. I didn't have time for that.\n\nYes, we do promise that new psql works with older servers.\n\nI think we would not backpatch any of this, though.\n\n-- \n�lvaro Herrera\n\n\n",
"msg_date": "Thu, 7 Jan 2021 11:47:13 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "\n\nOn 1/7/21 3:47 PM, Alvaro Herrera wrote:\n> On 2021-Jan-07, Tomas Vondra wrote:\n> \n>> On 1/7/21 1:46 AM, Tatsuro Yamada wrote:\n> \n>>> I overlooked the check for MCV in the logic building query\n>>> because I created the patch as a new feature on PG14.\n>>> I'm not sure whether we should do back patch or not. However, I'll\n>>> add the check on the next patch because it is useful if you decide to\n>>> do the back patch on PG10, 11, 12, and 13.\n>>\n>> BTW perhaps a quick look at the other \\d commands would show if there are\n>> precedents. I didn't have time for that.\n> \n> Yes, we do promise that new psql works with older servers.\n> \n\nYeah, makes sense. That means we need add the check for 12 / MCV.\n\n> I think we would not backpatch any of this, though.\n\nI wasn't really planning to backpatch any of this, of course.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 7 Jan 2021 16:56:26 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi,\n\nOn 2021/01/08 0:56, Tomas Vondra wrote:\n> On 1/7/21 3:47 PM, Alvaro Herrera wrote:\n>> On 2021-Jan-07, Tomas Vondra wrote:\n>>\n>>> On 1/7/21 1:46 AM, Tatsuro Yamada wrote:\n>>\n>>>> I overlooked the check for MCV in the logic building query\n>>>> because I created the patch as a new feature on PG14.\n>>>> I'm not sure whether we should do back patch or not. However, I'll\n>>>> add the check on the next patch because it is useful if you decide to\n>>>> do the back patch on PG10, 11, 12, and 13.\n>>>\n>>> BTW perhaps a quick look at the other \\d commands would show if there are\n>>> precedents. I didn't have time for that.\n>>\n>> Yes, we do promise that new psql works with older servers.\n>>\n> \n> Yeah, makes sense. That means we need add the check for 12 / MCV.\n\n\nAh, I got it.\nI fixed the patch to work with older servers to add the checking versions. And I tested \\dX command on older servers (PG10 - 13).\nThese results look fine.\n\n0001:\n Added the check code to handle pre-PG12. It has not MCV and\n pg_statistic_ext_data.\n0002:\n This patch is the same as the previous patch (not changed).\n\nPlease find the attached files.\n\n\n>> I wonder the column names added by \\dX+ is fine? For example,\n>> \"Ndistinct_size\" and \"Dependencies_size\". It looks like long names,\n>> but acceptable?\n>>\n> \n> Seems acceptable - I don't have a better idea. \n\nI see, thanks!\n\n\nThanks,\nTatsuro Yamada",
"msg_date": "Fri, 08 Jan 2021 08:52:02 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "\n\nOn 1/8/21 12:52 AM, Tatsuro Yamada wrote:\n> Hi,\n> \n> On 2021/01/08 0:56, Tomas Vondra wrote:\n>> On 1/7/21 3:47 PM, Alvaro Herrera wrote:\n>>> On 2021-Jan-07, Tomas Vondra wrote:\n>>>\n>>>> On 1/7/21 1:46 AM, Tatsuro Yamada wrote:\n>>>\n>>>>> I overlooked the check for MCV in the logic building query\n>>>>> because I created the patch as a new feature on PG14.\n>>>>> I'm not sure whether we should do back patch or not. However, I'll\n>>>>> add the check on the next patch because it is useful if you decide to\n>>>>> do the back patch on PG10, 11, 12, and 13.\n>>>>\n>>>> BTW perhaps a quick look at the other \\d commands would show if \n>>>> there are\n>>>> precedents. I didn't have time for that.\n>>>\n>>> Yes, we do promise that new psql works with older servers.\n>>>\n>>\n>> Yeah, makes sense. That means we need add the check for 12 / MCV.\n> \n> \n> Ah, I got it.\n> I fixed the patch to work with older servers to add the checking \n> versions. And I tested \\dX command on older servers (PG10 - 13).\n> These results look fine.\n> \n> 0001:\n> Added the check code to handle pre-PG12. It has not MCV and\n> pg_statistic_ext_data.\n> 0002:\n> This patch is the same as the previous patch (not changed).\n> \n> Please find the attached files.\n> \n\nOK, thanks. I'll take a look and probably push tomorrow. FWIW I plan to \nsquash the patches into a single commit.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 8 Jan 2021 01:14:39 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On 1/8/21 1:14 AM, Tomas Vondra wrote:\n> \n> \n> On 1/8/21 12:52 AM, Tatsuro Yamada wrote:\n>> Hi,\n>>\n>> On 2021/01/08 0:56, Tomas Vondra wrote:\n>>> On 1/7/21 3:47 PM, Alvaro Herrera wrote:\n>>>> On 2021-Jan-07, Tomas Vondra wrote:\n>>>>\n>>>>> On 1/7/21 1:46 AM, Tatsuro Yamada wrote:\n>>>>\n>>>>>> I overlooked the check for MCV in the logic building query\n>>>>>> because I created the patch as a new feature on PG14.\n>>>>>> I'm not sure whether we should do back patch or not. However, I'll\n>>>>>> add the check on the next patch because it is useful if you decide to\n>>>>>> do the back patch on PG10, 11, 12, and 13.\n>>>>>\n>>>>> BTW perhaps a quick look at the other \\d commands would show if\n>>>>> there are\n>>>>> precedents. I didn't have time for that.\n>>>>\n>>>> Yes, we do promise that new psql works with older servers.\n>>>>\n>>>\n>>> Yeah, makes sense. That means we need add the check for 12 / MCV.\n>>\n>>\n>> Ah, I got it.\n>> I fixed the patch to work with older servers to add the checking\n>> versions. And I tested \\dX command on older servers (PG10 - 13).\n>> These results look fine.\n>>\n>> 0001:\n>> Added the check code to handle pre-PG12. It has not MCV and\n>> pg_statistic_ext_data.\n>> 0002:\n>> This patch is the same as the previous patch (not changed).\n>>\n>> Please find the attached files.\n>>\n> \n> OK, thanks. I'll take a look and probably push tomorrow. FWIW I plan to\n> squash the patches into a single commit.\n> \n\nAttached is a patch I plan to commit - 0001 is the last submitted\nversion with a couple minor tweaks, mostly in docs/comments, and small\nrework of branching to be more like the other functions in describe.c.\n\nWhile working on that, I realized that 'defined' might be a bit\nambiguous, I initially thought it means 'NOT NULL' (which it does not).\nI propose to change it to 'requested' instead. 
Tatsuro, do you agree, or\ndo you think 'defined' is better?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 9 Jan 2021 01:01:27 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas,\n\nOn 2021/01/09 9:01, Tomas Vondra wrote:\n> On 1/8/21 1:14 AM, Tomas Vondra wrote:\n>> On 1/8/21 12:52 AM, Tatsuro Yamada wrote:\n>>> On 2021/01/08 0:56, Tomas Vondra wrote:\n>>>> On 1/7/21 3:47 PM, Alvaro Herrera wrote:\n>>>>> On 2021-Jan-07, Tomas Vondra wrote:\n>>>>>> On 1/7/21 1:46 AM, Tatsuro Yamada wrote:\n>>>>>\n>>>>>>> I overlooked the check for MCV in the logic building query\n>>>>>>> because I created the patch as a new feature on PG14.\n>>>>>>> I'm not sure whether we should do back patch or not. However, I'll\n>>>>>>> add the check on the next patch because it is useful if you decide to\n>>>>>>> do the back patch on PG10, 11, 12, and 13.\n>>>>>>\n>>>>>> BTW perhaps a quick look at the other \\d commands would show if\n>>>>>> there are\n>>>>>> precedents. I didn't have time for that.\n>>>>>\n>>>>> Yes, we do promise that new psql works with older servers.\n>>>>>\n>>>>\n>>>> Yeah, makes sense. That means we need add the check for 12 / MCV.\n>>>\n>>>\n>>> Ah, I got it.\n>>> I fixed the patch to work with older servers to add the checking\n>>> versions. And I tested \\dX command on older servers (PG10 - 13).\n>>> These results look fine.\n>>>\n>>> 0001:\n>>> Added the check code to handle pre-PG12. It has not MCV and\n>>> pg_statistic_ext_data.\n>>> 0002:\n>>> This patch is the same as the previous patch (not changed).\n>>>\n>>> Please find the attached files.\n>>>\n>>\n>> OK, thanks. I'll take a look and probably push tomorrow. 
FWIW I plan to\n>> squash the patches into a single commit.\n>>\n> \n> Attached is a patch I plan to commit - 0001 is the last submitted\n> version with a couple minor tweaks, mostly in docs/comments, and small\n> rework of branching to be more like the other functions in describe.c.\n\nThanks for revising the patch.\nI reviewed the 0001, and the branching and comments look good to me.\nHowever, I added an alias name in processSQLNamePattern() on the patch:\ns/\"stxname\"/\"es.stxname\"/\n\n\n> While working on that, I realized that 'defined' might be a bit\n> ambiguous, I initially thought it means 'NOT NULL' (which it does not).\n> I propose to change it to 'requested' instead. Tatsuro, do you agree, or\n> do you think 'defined' is better?\n\nRegarding the status of extended stats, I think the followings:\n\n - \"defined\": it shows the extended stats defined only. We can't know\n whether it needs to analyze or not. I agree this name was\n ambiguous. Therefore we should replace it with a more suitable\n name.\n - \"requested\": it shows the extended stats needs something. Of course,\n we know it needs to ANALYZE because we can create the patch.\n However, I feel there is a little ambiguity for DBA.\n To solve this, it would be better to write an explanation of\n the status in the document. For example,\n\n======\nThe column of the kind of extended stats (e. g. Ndistinct) shows some statuses.\n\"requested\" means that it needs to gather data by ANALYZE. \"built\" means ANALYZE\n was finished, and the planner can use it. NULL means that it doesn't exists.\n======\n\nWhat do you think? :-D\n\n\nThanks,\nTatsuro Yamada",
"msg_date": "Tue, 12 Jan 2021 10:57:33 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "\nOn 1/12/21 2:57 AM, Tatsuro Yamada wrote:\n> Hi Tomas,\n> \n> On 2021/01/09 9:01, Tomas Vondra wrote:\n...>\n>> While working on that, I realized that 'defined' might be a bit\n>> ambiguous, I initially thought it means 'NOT NULL' (which it does not).\n>> I propose to change it to 'requested' instead. Tatsuro, do you agree, or\n>> do you think 'defined' is better?\n> \n> Regarding the status of extended stats, I think the followings:\n> \n> - \"defined\": it shows the extended stats defined only. We can't know\n> whether it needs to analyze or not. I agree this name was\n> ambiguous. Therefore we should replace it with a more \n> suitable\n> name.\n> - \"requested\": it shows the extended stats needs something. Of course,\n> we know it needs to ANALYZE because we can create the patch.\n> However, I feel there is a little ambiguity for DBA.\n> To solve this, it would be better to write an explanation of\n> the status in the document. For example,\n> \n> ======\n> The column of the kind of extended stats (e. g. Ndistinct) shows some \n> statuses.\n> \"requested\" means that it needs to gather data by ANALYZE. \"built\" means \n> ANALYZE\n> was finished, and the planner can use it. NULL means that it doesn't \n> exists.\n> ======\n> \n> What do you think? :-D\n> \n\nYes, that seems reasonable to me. Will you provide an updated patch?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 12 Jan 2021 12:08:30 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas,\n\nOn 2021/01/12 20:08, Tomas Vondra wrote:\n> \n> On 1/12/21 2:57 AM, Tatsuro Yamada wrote:\n>> Hi Tomas,\n>>\n>> On 2021/01/09 9:01, Tomas Vondra wrote:\n> ...>\n>>> While working on that, I realized that 'defined' might be a bit\n>>> ambiguous, I initially thought it means 'NOT NULL' (which it does not).\n>>> I propose to change it to 'requested' instead. Tatsuro, do you agree, or\n>>> do you think 'defined' is better?\n>>\n>> Regarding the status of extended stats, I think the followings:\n>>\n>> - \"defined\": it shows the extended stats defined only. We can't know\n>> whether it needs to analyze or not. I agree this name was\n>> ambiguous. Therefore we should replace it with a more suitable\n>> name.\n>> - \"requested\": it shows the extended stats needs something. Of course,\n>> we know it needs to ANALYZE because we can create the patch.\n>> However, I feel there is a little ambiguity for DBA.\n>> To solve this, it would be better to write an explanation of\n>> the status in the document. For example,\n>>\n>> ======\n>> The column of the kind of extended stats (e. g. Ndistinct) shows some statuses.\n>> \"requested\" means that it needs to gather data by ANALYZE. \"built\" means ANALYZE\n>> was finished, and the planner can use it. NULL means that it doesn't exists.\n>> ======\n>>\n>> What do you think? :-D\n>>\n> \n> Yes, that seems reasonable to me. Will you provide an updated patch?\n\n\nSounds good. I'll send the updated patch today.\n\n\nThanks,\nTatsuro Yamada\n\n\n\n\n\n",
"msg_date": "Wed, 13 Jan 2021 07:48:20 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas,\n\nOn 2021/01/13 7:48, Tatsuro Yamada wrote:\n> On 2021/01/12 20:08, Tomas Vondra wrote:\n>> On 1/12/21 2:57 AM, Tatsuro Yamada wrote:\n>>> On 2021/01/09 9:01, Tomas Vondra wrote:\n>> ...>\n>>>> While working on that, I realized that 'defined' might be a bit\n>>>> ambiguous, I initially thought it means 'NOT NULL' (which it does not).\n>>>> I propose to change it to 'requested' instead. Tatsuro, do you agree, or\n>>>> do you think 'defined' is better?\n>>>\n>>> Regarding the status of extended stats, I think the followings:\n>>>\n>>> - \"defined\": it shows the extended stats defined only. We can't know\n>>> whether it needs to analyze or not. I agree this name was\n>>> ambiguous. Therefore we should replace it with a more suitable\n>>> name.\n>>> - \"requested\": it shows the extended stats needs something. Of course,\n>>> we know it needs to ANALYZE because we can create the patch.\n>>> However, I feel there is a little ambiguity for DBA.\n>>> To solve this, it would be better to write an explanation of\n>>> the status in the document. For example,\n>>>\n>>> ======\n>>> The column of the kind of extended stats (e. g. Ndistinct) shows some statuses.\n>>> \"requested\" means that it needs to gather data by ANALYZE. \"built\" means ANALYZE\n>>> was finished, and the planner can use it. NULL means that it doesn't exists.\n>>> ======\n>>>\n>>> What do you think? :-D\n>>>\n>>\n>> Yes, that seems reasonable to me. Will you provide an updated patch?\n> \n> \n> Sounds good. I'll send the updated patch today.\n\n\n\nI updated the patch to add the explanation of the extended stats' statuses.\nPlease feel free to modify the patch to improve it more clearly.\n\nThe attached files are:\n 0001: Add psql \\dx and the fixed document\n 0002: Regression test for psql \\dX\n app-psql.html: Created by \"make html\" command (You can check the\n explanation of the statuses easily, probably)\n\nThanks,\nTatsuro Yamada",
"msg_date": "Wed, 13 Jan 2021 10:22:05 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Wed, Jan 13, 2021 at 10:22:05AM +0900, Tatsuro Yamada wrote:\n> Hi Tomas,\n> \n> On 2021/01/13 7:48, Tatsuro Yamada wrote:\n> > On 2021/01/12 20:08, Tomas Vondra wrote:\n> > > On 1/12/21 2:57 AM, Tatsuro Yamada wrote:\n> > > > On 2021/01/09 9:01, Tomas Vondra wrote:\n> > > ...>\n> > > > > While working on that, I realized that 'defined' might be a bit\n> > > > > ambiguous, I initially thought it means 'NOT NULL' (which it does not).\n> > > > > I propose to change it to 'requested' instead. Tatsuro, do you agree, or\n> > > > > do you think 'defined' is better?\n> > > > \n> > > > Regarding the status of extended stats, I think the followings:\n> > > > \n> > > > ��- \"defined\": it shows the extended stats defined only. We can't know\n> > > > �������������� whether it needs to analyze or not. I agree this name was\n> > > > ��������������� ambiguous. Therefore we should replace it with a more suitable\n> > > > �������������� name.\n> > > > ��- \"requested\": it shows the extended stats needs something. Of course,\n> > > > �������������� we know it needs to ANALYZE because we can create the patch.\n> > > > �������������� However, I feel there is a little ambiguity for DBA.\n> > > > �������������� To solve this, it would be better to write an explanation of\n> > > > �������������� the status in the document. For example,\n> > > > \n> > > > ======\n> > > > The column of the kind of extended stats (e. g. Ndistinct) shows some statuses.\n> > > > \"requested\" means that it needs to gather data by ANALYZE. \"built\" means ANALYZE\n> > > > ��was finished, and the planner can use it. NULL means that it doesn't exists.\n> > > > ======\n> > > > \n> > > > What do you think? :-D\n> > > > \n> > > \n> > > Yes, that seems reasonable to me. Will you provide an updated patch?\n> > \n> > \n> > Sounds good. 
I'll send the updated patch today.\n> \n> \n> \n> I updated the patch to add the explanation of the extended stats' statuses.\n> Please feel free to modify the patch to improve it more clearly.\n> \n> The attached files are:\n> 0001: Add psql \\dx and the fixed document\n> 0002: Regression test for psql \\dX\n> app-psql.html: Created by \"make html\" command (You can check the\n> explanation of the statuses easily, probably)\n\nHello Yamada-san,\n\nI reviewed the patch and don't have specific complaints, it all looks good!\n\nI'm however thinking about the \"requested\" status. I'm wondering if it could\nlead to people think that an ANALYZE is scheduled and will happen soon.\nMaybe \"defined\" or \"declared\" might be less misleading, or even \"waiting for\nanalyze\"?\n\n\n",
"msg_date": "Fri, 15 Jan 2021 16:47:41 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "\n\nOn 1/15/21 9:47 AM, Julien Rouhaud wrote:\n> On Wed, Jan 13, 2021 at 10:22:05AM +0900, Tatsuro Yamada wrote:\n>> Hi Tomas,\n>>\n>> On 2021/01/13 7:48, Tatsuro Yamada wrote:\n>>> On 2021/01/12 20:08, Tomas Vondra wrote:\n>>>> On 1/12/21 2:57 AM, Tatsuro Yamada wrote:\n>>>>> On 2021/01/09 9:01, Tomas Vondra wrote:\n>>>> ...>\n>>>>>> While working on that, I realized that 'defined' might be a bit\n>>>>>> ambiguous, I initially thought it means 'NOT NULL' (which it does not).\n>>>>>> I propose to change it to 'requested' instead. Tatsuro, do you agree, or\n>>>>>> do you think 'defined' is better?\n>>>>>\n>>>>> Regarding the status of extended stats, I think the followings:\n>>>>>\n>>>>> - \"defined\": it shows the extended stats defined only. We can't know\n>>>>> whether it needs to analyze or not. I agree this name was\n>>>>> ambiguous. Therefore we should replace it with a more suitable\n>>>>> name.\n>>>>> - \"requested\": it shows the extended stats needs something. Of course,\n>>>>> we know it needs to ANALYZE because we can create the patch.\n>>>>> However, I feel there is a little ambiguity for DBA.\n>>>>> To solve this, it would be better to write an explanation of\n>>>>> the status in the document. For example,\n>>>>>\n>>>>> ======\n>>>>> The column of the kind of extended stats (e. g. Ndistinct) shows some statuses.\n>>>>> \"requested\" means that it needs to gather data by ANALYZE. \"built\" means ANALYZE\n>>>>> was finished, and the planner can use it. NULL means that it doesn't exists.\n>>>>> ======\n>>>>>\n>>>>> What do you think? :-D\n>>>>>\n>>>>\n>>>> Yes, that seems reasonable to me. Will you provide an updated patch?\n>>>\n>>>\n>>> Sounds good. 
I'll send the updated patch today.\n>>\n>>\n>>\n>> I updated the patch to add the explanation of the extended stats' statuses.\n>> Please feel free to modify the patch to improve it more clearly.\n>>\n>> The attached files are:\n>> 0001: Add psql \\dx and the fixed document\n>> 0002: Regression test for psql \\dX\n>> app-psql.html: Created by \"make html\" command (You can check the\n>> explanation of the statuses easily, probably)\n> \n> Hello Yamada-san,\n> \n> I reviewed the patch and don't have specific complaints, it all looks good!\n> \n> I'm however thinking about the \"requested\" status. I'm wondering if it could\n> lead to people think that an ANALYZE is scheduled and will happen soon.\n> Maybe \"defined\" or \"declared\" might be less misleading, or even \"waiting for\n> analyze\"?\n> \n\nWell, the \"defined\" option is not great either, because it can be\ninterpreted as \"NOT NULL\" - that's why I proposed \"requested\". Not sure\nabout \"declared\" - I wouldn't use it in this context, but I'm not a\nnative speaker so maybe it's OK.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 15 Jan 2021 17:19:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "\n\nOn 1/15/21 5:19 PM, Tomas Vondra wrote:\n> \n> \n> On 1/15/21 9:47 AM, Julien Rouhaud wrote:\n>> On Wed, Jan 13, 2021 at 10:22:05AM +0900, Tatsuro Yamada wrote:\n>>> Hi Tomas,\n>>>\n>>> On 2021/01/13 7:48, Tatsuro Yamada wrote:\n>>>> On 2021/01/12 20:08, Tomas Vondra wrote:\n>>>>> On 1/12/21 2:57 AM, Tatsuro Yamada wrote:\n>>>>>> On 2021/01/09 9:01, Tomas Vondra wrote:\n>>>>> ...>\n>>>>>>> While working on that, I realized that 'defined' might be a bit\n>>>>>>> ambiguous, I initially thought it means 'NOT NULL' (which it does not).\n>>>>>>> I propose to change it to 'requested' instead. Tatsuro, do you agree, or\n>>>>>>> do you think 'defined' is better?\n>>>>>>\n>>>>>> Regarding the status of extended stats, I think the followings:\n>>>>>>\n>>>>>> - \"defined\": it shows the extended stats defined only. We can't know\n>>>>>> whether it needs to analyze or not. I agree this name was\n>>>>>> ambiguous. Therefore we should replace it with a more suitable\n>>>>>> name.\n>>>>>> - \"requested\": it shows the extended stats needs something. Of course,\n>>>>>> we know it needs to ANALYZE because we can create the patch.\n>>>>>> However, I feel there is a little ambiguity for DBA.\n>>>>>> To solve this, it would be better to write an explanation of\n>>>>>> the status in the document. For example,\n>>>>>>\n>>>>>> ======\n>>>>>> The column of the kind of extended stats (e. g. Ndistinct) shows some statuses.\n>>>>>> \"requested\" means that it needs to gather data by ANALYZE. \"built\" means ANALYZE\n>>>>>> was finished, and the planner can use it. NULL means that it doesn't exists.\n>>>>>> ======\n>>>>>>\n>>>>>> What do you think? :-D\n>>>>>>\n>>>>>\n>>>>> Yes, that seems reasonable to me. Will you provide an updated patch?\n>>>>\n>>>>\n>>>> Sounds good. 
I'll send the updated patch today.\n>>>\n>>>\n>>>\n>>> I updated the patch to add the explanation of the extended stats' statuses.\n>>> Please feel free to modify the patch to improve it more clearly.\n>>>\n>>> The attached files are:\n>>> 0001: Add psql \\dx and the fixed document\n>>> 0002: Regression test for psql \\dX\n>>> app-psql.html: Created by \"make html\" command (You can check the\n>>> explanation of the statuses easily, probably)\n>>\n>> Hello Yamada-san,\n>>\n>> I reviewed the patch and don't have specific complaints, it all looks good!\n>>\n>> I'm however thinking about the \"requested\" status. I'm wondering if it could\n>> lead to people think that an ANALYZE is scheduled and will happen soon.\n>> Maybe \"defined\" or \"declared\" might be less misleading, or even \"waiting for\n>> analyze\"?\n>>\n> \n> Well, the \"defined\" option is not great either, because it can be\n> interpreted as \"NOT NULL\" - that's why I proposed \"requested\". Not sure\n> about \"declared\" - I wouldn't use it in this context, but I'm not a\n> native speaker so maybe it's OK.\n> \n\nI've pushed this, keeping the \"requested\". If we decide that some other \nterm is a better choice, we can tweak that later of course.\n\nThanks Tatsuro-san for the patience!\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 17 Jan 2021 00:32:10 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi, hackers.\r\n\r\nI tested this committed feature. \r\nIt doesn't seem to be available to non-superusers due to the inability to access pg_statistics_ext_data. \r\nIs this the expected behavior?\r\n\r\n--- operation ---\r\npostgres=> CREATE STATISTICS stat1_data1 ON c1, c2 FROM data1;\r\nCREATE STATISTICS\r\npostgres=> ANALYZE data1;\r\nANALYZE\r\npostgres=> SELECT * FROM pg_statistic_ext;\r\n oid | stxrelid | stxname | stxnamespace | stxowner | stxstattarget | stxkeys | stxkind\r\n-------+----------+-------------+--------------+----------+---------------+---------+---------\r\n 16393 | 16385 | stat1_data1 | 2200 | 16384 | -1 | 1 2 | {d,f,m}\r\n(1 row)\r\n\r\npostgres=> \\dX\r\nERROR: permission denied for table pg_statistic_ext_data\r\npostgres=>\r\npostgres=> \\connect postgres postgres\r\nYou are now connected to database \"postgres\" as user \"postgres\".\r\npostgres=#\r\npostgres=# \\dX\r\n List of extended statistics\r\n Schema | Name | Definition | Ndistinct | Dependencies | MCV\r\n--------+-------------+-------------------+-----------+--------------+-----------\r\n public | stat1_data1 | c1, c2 FROM data1 | built | built | requested\r\n(1 row)\r\n\r\n--- operation ---\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Tomas Vondra [mailto:tomas.vondra@enterprisedb.com] \r\nSent: Sunday, January 17, 2021 8:32 AM\r\nTo: Julien Rouhaud <rjuju123@gmail.com>; Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>\r\nCc: Alvaro Herrera <alvherre@2ndquadrant.com>; Tomas Vondra <tomas.vondra@2ndquadrant.com>; Michael Paquier <michael@paquier.xyz>; Pavel Stehule <pavel.stehule@gmail.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\r\nSubject: Re: list of extended statistics on psql\r\n\r\n\r\n\r\nOn 1/15/21 5:19 PM, Tomas Vondra wrote:\r\n> \r\n> \r\n> On 1/15/21 9:47 AM, Julien Rouhaud wrote:\r\n>> On Wed, Jan 13, 2021 at 10:22:05AM +0900, Tatsuro Yamada wrote:\r\n>>> Hi Tomas,\r\n>>>\r\n>>> On 2021/01/13 7:48, 
Tatsuro Yamada wrote:\r\n>>>> On 2021/01/12 20:08, Tomas Vondra wrote:\r\n>>>>> On 1/12/21 2:57 AM, Tatsuro Yamada wrote:\r\n>>>>>> On 2021/01/09 9:01, Tomas Vondra wrote:\r\n>>>>> ...>\r\n>>>>>>> While working on that, I realized that 'defined' might be a bit \r\n>>>>>>> ambiguous, I initially thought it means 'NOT NULL' (which it does not).\r\n>>>>>>> I propose to change it to 'requested' instead. Tatsuro, do you \r\n>>>>>>> agree, or do you think 'defined' is better?\r\n>>>>>>\r\n>>>>>> Regarding the status of extended stats, I think the followings:\r\n>>>>>>\r\n>>>>>> - \"defined\": it shows the extended stats defined only. We \r\n>>>>>> can't know\r\n>>>>>> whether it needs to analyze or not. I agree this \r\n>>>>>> name was\r\n>>>>>> ambiguous. Therefore we should replace it with a \r\n>>>>>> more suitable\r\n>>>>>> name.\r\n>>>>>> - \"requested\": it shows the extended stats needs something. Of \r\n>>>>>> course,\r\n>>>>>> we know it needs to ANALYZE because we can create the patch.\r\n>>>>>> However, I feel there is a little ambiguity for DBA.\r\n>>>>>> To solve this, it would be better to write an \r\n>>>>>> explanation of\r\n>>>>>> the status in the document. For example,\r\n>>>>>>\r\n>>>>>> ======\r\n>>>>>> The column of the kind of extended stats (e. g. Ndistinct) shows some statuses.\r\n>>>>>> \"requested\" means that it needs to gather data by ANALYZE. \r\n>>>>>> \"built\" means ANALYZE\r\n>>>>>> was finished, and the planner can use it. NULL means that it doesn't exists.\r\n>>>>>> ======\r\n>>>>>>\r\n>>>>>> What do you think? :-D\r\n>>>>>>\r\n>>>>>\r\n>>>>> Yes, that seems reasonable to me. Will you provide an updated patch?\r\n>>>>\r\n>>>>\r\n>>>> Sounds good. 
I'll send the updated patch today.\r\n>>>\r\n>>>\r\n>>>\r\n>>> I updated the patch to add the explanation of the extended stats' statuses.\r\n>>> Please feel free to modify the patch to improve it more clearly.\r\n>>>\r\n>>> The attached files are:\r\n>>> 0001: Add psql \\dx and the fixed document\r\n>>> 0002: Regression test for psql \\dX\r\n>>> app-psql.html: Created by \"make html\" command (You can check the\r\n>>> explanation of the statuses easily, probably)\r\n>>\r\n>> Hello Yamada-san,\r\n>>\r\n>> I reviewed the patch and don't have specific complaints, it all looks good!\r\n>>\r\n>> I'm however thinking about the \"requested\" status. I'm wondering if \r\n>> it could lead to people think that an ANALYZE is scheduled and will happen soon.\r\n>> Maybe \"defined\" or \"declared\" might be less misleading, or even \r\n>> \"waiting for analyze\"?\r\n>>\r\n> \r\n> Well, the \"defined\" option is not great either, because it can be \r\n> interpreted as \"NOT NULL\" - that's why I proposed \"requested\". Not \r\n> sure about \"declared\" - I wouldn't use it in this context, but I'm not \r\n> a native speaker so maybe it's OK.\r\n> \r\n\r\nI've pushed this, keeping the \"requested\". If we decide that some other term is a better choice, we can tweak that later of course.\r\n\r\nThanks Tatsuro-san for the patience!\r\n\r\n\r\nregards\r\n\r\n--\r\nTomas Vondra\r\nEnterpriseDB: http://www.enterprisedb.com\r\nThe Enterprise PostgreSQL Company\r\n\r\n\r\n",
"msg_date": "Sun, 17 Jan 2021 01:41:04 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": false,
"msg_subject": "RE: list of extended statistics on psql"
},
{
"msg_contents": "On 1/17/21 2:41 AM, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> Hi, hackers.\n> \n> I tested this committed feature.\n> It doesn't seem to be available to non-superusers due to the inability to access pg_statistics_ext_data.\n> Is this the expected behavior?\n> \n\nHmmm, that's a good point. Bummer we haven't noticed that earlier.\n\nI wonder what the right fix should be - presumably we could do something \nlike pg_stats_ext (we can't use that view directly, because it formats \nthe data, so the sizes are different).\n\nBut should it list just the stats the user has access to, or should it \nlist everything and leave the inaccessible fields NULL?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 17 Jan 2021 03:01:34 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "\n\nOn 1/17/21 3:01 AM, Tomas Vondra wrote:\n> On 1/17/21 2:41 AM, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n>> Hi, hackers.\n>>\n>> I tested this committed feature.\n>> It doesn't seem to be available to non-superusers due to the inability \n>> to access pg_statistics_ext_data.\n>> Is this the expected behavior?\n>>\n> \n> Hmmm, that's a good point. Bummer we haven't noticed that earlier.\n> \n> I wonder what the right fix should be - presumably we could do something \n> like pg_stats_ext (we can't use that view directly, because it formats \n> the data, so the sizes are different).\n> \n> But should it list just the stats the user has access to, or should it \n> list everything and leave the inaccessible fields NULL?\n> \n\nI've reverted the commit - once we find the right way to handle this, \nI'll get it committed again.\n\nAs for how to deal with this, I can think of about three ways:\n\n1) simplify the command to only print information from pg_statistic_ext \n(so on information about which stats are built or sizes)\n\n2) extend pg_stats_ext with necessary information (e.g. sizes)\n\n3) create a new system view, with necessary information (so that \npg_stats_ext does not need to be modified)\n\n4) add functions returning the necessary information, possibly only for \nstatistics the user can access (similarly to what pg_stats_ext does)\n\nOptions 2-4 have the obvious disadvantage that this won't work on older \nreleases (we can't add views or functions there). So I'm leaning towards \n#1 even if that means we have to remove some of the details. We can \nconsider adding that for new releases, though.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 17 Jan 2021 15:31:57 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Sun, Jan 17, 2021 at 03:31:57PM +0100, Tomas Vondra wrote:\n> I've reverted the commit - once we find the right way to handle this, I'll\n> get it committed again.\n\nPlease consider these doc changes for the next iteration.\n\ncommit 1a69f648ce6c63ebb37b6d8ec7c6539b3cb70787\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sat Jan 16 17:47:35 2021 -0600\n\n doc review: psql \\dX 891a1d0bca262ca78564e0fea1eaa5ae544ea5ee\n\ndiff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml\nindex aaf55df921..a678a69dfb 100644\n--- a/doc/src/sgml/ref/psql-ref.sgml\n+++ b/doc/src/sgml/ref/psql-ref.sgml\n@@ -1928,15 +1928,15 @@ testdb=>\n is specified, only those extended statistics whose names match the\n pattern are listed.\n If <literal>+</literal> is appended to the command name, each extended\n- statistics is listed with its size.\n+ statistic is listed with its size.\n </para>\n \n <para>\n- The column of the kind of extended stats (e.g. Ndistinct) shows some statuses.\n+ The column of the kind of extended stats (e.g. Ndistinct) shows its status.\n \"requested\" means that it needs to collect statistics by <link\n linkend=\"sql-analyze\"><command>ANALYZE</command></link>. \n \"built\" means <link linkend=\"sql-analyze\"><command>ANALYZE</command></link> was \n- finished, and the planner can use it. NULL means that it doesn't exists. \n+ run, and statistics are available to the planner. NULL means that it doesn't exist. \n </para>\n </listitem>\n </varlistentry>\n\n\n",
"msg_date": "Sun, 17 Jan 2021 10:52:10 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Julien,\n\nOn 2021/01/15 17:47, Julien Rouhaud wrote:\n> Hello Yamada-san,\n> \n> I reviewed the patch and don't have specific complaints, it all looks good!\n> \n> I'm however thinking about the \"requested\" status. I'm wondering if it could\n> lead to people think that an ANALYZE is scheduled and will happen soon.\n> Maybe \"defined\" or \"declared\" might be less misleading, or even \"waiting for\n> analyze\"?\n\n\nThanks for reviewing the patch.\nYeah, \"waiting for analyze\" was preferable but it was a little long to use on the columns. Unfortunately, I couldn't imagine a suitable term. Therefore,\nI'm keeping it as is.\n\nRegards,\nTatsuro Yamada\n\n\n\n",
"msg_date": "Mon, 18 Jan 2021 16:18:32 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas,\n\nOn 2021/01/17 8:32, Tomas Vondra wrote:\n> I've pushed this, keeping the \"requested\". If we decide that some other term is a better choice, we can tweak that later of course.\n> \n> Thanks Tatsuro-san for the patience!\n\nThanks for taking the time to review the patches.\nI believe this feature is useful for DBA when they use Extended stats.\nFor example, the execution plan tuning phase and so on.\n\nThen, I know the patch was reverted. So, I keep going to fix the patch\non the Second iteration. :-D\n\nRegards,\nTatsuro Yamada\n\n\n\n",
"msg_date": "Mon, 18 Jan 2021 16:24:58 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas and Shinoda-san,\n\nOn 2021/01/17 23:31, Tomas Vondra wrote:\n> \n> \n> On 1/17/21 3:01 AM, Tomas Vondra wrote:\n>> On 1/17/21 2:41 AM, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n>>> Hi, hackers.\n>>>\n>>> I tested this committed feature.\n>>> It doesn't seem to be available to non-superusers due to the inability to access pg_statistics_ext_data.\n>>> Is this the expected behavior?\n\n\nUgh. I overlooked the test to check the case of the user hasn't Superuser privilege. The user without the privilege was able to access pg_statistics_ext. Therefore I supposed that it's also able to access pg_statics_ext_data. Oops.\n\n\n>> Hmmm, that's a good point. Bummer we haven't noticed that earlier.\n>>\n>> I wonder what the right fix should be - presumably we could do something like pg_stats_ext (we can't use that view directly, because it formats the data, so the sizes are different).\n>>\n>> But should it list just the stats the user has access to, or should it list everything and leave the inaccessible fields NULL?\n>>\n> \n> I've reverted the commit - once we find the right way to handle this, I'll get it committed again.\n> \n> As for how to deal with this, I can think of about three ways:\n> \n> 1) simplify the command to only print information from pg_statistic_ext (so on information about which stats are built or sizes)\n> \n> 2) extend pg_stats_ext with necessary information (e.g. sizes)\n> \n> 3) create a new system view, with necessary information (so that pg_stats_ext does not need to be modified)\n> \n> 4) add functions returning the necessary information, possibly only for statistics the user can access (similarly to what pg_stats_ext does)\n> \n> Options 2-4 have the obvious disadvantage that this won't work on older releases (we can't add views or functions there). So I'm leaning towards #1 even if that means we have to remove some of the details. We can consider adding that for new releases, though.\n\n\nThanks for the useful advice. 
I go with option 1).\nThe following query is created by using pg_stats_ext instead of pg_statistic_ext and pg_statistic_ext_data. However, I was confused\nabout writing a part of the query for calculating MCV size because\nthere are four columns related to MCV. For example, most_common_vals, most_common_val_nulls, most_common_freqs, and most_common_base_freqs.\nCurrently, I don't know how to calculate the size of MCV by using the\nfour columns. Thoughts? :-)\n\n===================================================\n\\connect postgres hoge\ncreate table hoge_t(a int, b int);\ninsert into hoge_t select i,i from generate_series(1,100) i;\ncreate statistics hoge_t_ext on a, b from hoge_t;\n\n\nSELECT\n es.statistics_schemaname AS \"Schema\",\n es.statistics_name AS \"Name\",\n pg_catalog.format('%s FROM %s',\n (SELECT pg_catalog.string_agg(pg_catalog.quote_ident(s.attname),', ')\n FROM pg_catalog.unnest(es.attnames) s(attname)),\n es.tablename) AS \"Definition\",\n CASE WHEN es.n_distinct IS NOT NULL THEN 'built'\n WHEN 'd' = any(es.kinds) THEN 'requested'\n END AS \"Ndistinct\",\n CASE WHEN es.dependencies IS NOT NULL THEN 'built'\n WHEN 'f' = any(es.kinds) THEN 'requested'\n END AS \"Dependencies\",\n CASE WHEN es.most_common_vals IS NOT NULL THEN 'built'\n WHEN 'm' = any(es.kinds) THEN 'requested'\n END AS \"MCV\",\n CASE WHEN es.n_distinct IS NOT NULL THEN\n pg_catalog.pg_size_pretty(pg_catalog.length(es.n_distinct)::bigint)\n WHEN 'd' = any(es.kinds) THEN '0 bytes'\n END AS \"Ndistinct_size\",\n CASE WHEN es.dependencies IS NOT NULL THEN\n pg_catalog.pg_size_pretty(pg_catalog.length(es.dependencies)::bigint)\n WHEN 'f' = any(es.kinds) THEN '0 bytes'\n END AS \"Dependencies_size\"\n FROM pg_catalog.pg_stats_ext es\n ORDER BY 1, 2;\n\n-[ RECORD 1 ]-----+-----------------\nSchema | public\nName | hoge_t_ext\nDefinition | a, b FROM hoge_t\nNdistinct | requested\nDependencies | requested\nMCV | requested\nNdistinct_size | 0 bytes\nDependencies_size | 0 bytes\n\nanalyze 
hoge_t;\n\n-[ RECORD 1 ]-----+-----------------\nSchema | public\nName | hoge_t_ext\nDefinition | a, b FROM hoge_t\nNdistinct | built\nDependencies | built\nMCV | built\nNdistinct_size | 13 bytes\nDependencies_size | 40 bytes\n===================================================\n\nThanks,\nTatsuro Yamada\n\n\n\n\n\n",
"msg_date": "Mon, 18 Jan 2021 16:31:56 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Justin,\n\nOn 2021/01/18 1:52, Justin Pryzby wrote:\n> On Sun, Jan 17, 2021 at 03:31:57PM +0100, Tomas Vondra wrote:\n>> I've reverted the commit - once we find the right way to handle this, I'll\n>> get it committed again.\n> \n> Please consider these doc changes for the next iteration.\n> \n> commit 1a69f648ce6c63ebb37b6d8ec7c6539b3cb70787\n> Author: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sat Jan 16 17:47:35 2021 -0600\n> \n> doc review: psql \\dX 891a1d0bca262ca78564e0fea1eaa5ae544ea5ee\n\nThanks for your comments!\nIt helps a lot since I'm not a native speaker.\nI'll fix the document based on your suggestion on the next patch.\n\n \n> diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml\n> index aaf55df921..a678a69dfb 100644\n> --- a/doc/src/sgml/ref/psql-ref.sgml\n> +++ b/doc/src/sgml/ref/psql-ref.sgml\n> @@ -1928,15 +1928,15 @@ testdb=>\n> is specified, only those extended statistics whose names match the\n> pattern are listed.\n> If <literal>+</literal> is appended to the command name, each extended\n> - statistics is listed with its size.\n> + statistic is listed with its size.\n\nAgreed.\n\n \n> <para>\n> - The column of the kind of extended stats (e.g. Ndistinct) shows some statuses.\n> + The column of the kind of extended stats (e.g. Ndistinct) shows its status.\n> \"requested\" means that it needs to collect statistics by <link\n> linkend=\"sql-analyze\"><command>ANALYZE</command></link>.\n> \"built\" means <link linkend=\"sql-analyze\"><command>ANALYZE</command></link> was\n\nAgreed.\n\n\n> - finished, and the planner can use it. NULL means that it doesn't exists.\n> + run, and statistics are available to the planner. NULL means that it doesn't exist.\n\n\nAgreed.\n\n\nThanks,\nTatsuro Yamada\n\n\n\n",
"msg_date": "Mon, 18 Jan 2021 16:43:12 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "\n\nOn 1/18/21 8:31 AM, Tatsuro Yamada wrote:\n> Hi Tomas and Shinoda-san,\n> \n> On 2021/01/17 23:31, Tomas Vondra wrote:\n>>\n>>\n>> On 1/17/21 3:01 AM, Tomas Vondra wrote:\n>>> On 1/17/21 2:41 AM, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n>>>> Hi, hackers.\n>>>>\n>>>> I tested this committed feature.\n>>>> It doesn't seem to be available to non-superusers due to the \n>>>> inability to access pg_statistics_ext_data.\n>>>> Is this the expected behavior?\n> \n> \n> Ugh. I overlooked the test to check the case of the user hasn't \n> Superuser privilege. The user without the privilege was able to access \n> pg_statistics_ext. Therefore I supposed that it's also able to access \n> pg_statics_ext_data. Oops.\n> \n> \n>>> Hmmm, that's a good point. Bummer we haven't noticed that earlier.\n>>>\n>>> I wonder what the right fix should be - presumably we could do \n>>> something like pg_stats_ext (we can't use that view directly, because \n>>> it formats the data, so the sizes are different).\n>>>\n>>> But should it list just the stats the user has access to, or should \n>>> it list everything and leave the inaccessible fields NULL?\n>>>\n>>\n>> I've reverted the commit - once we find the right way to handle this, \n>> I'll get it committed again.\n>>\n>> As for how to deal with this, I can think of about three ways:\n>>\n>> 1) simplify the command to only print information from \n>> pg_statistic_ext (so on information about which stats are built or sizes)\n>>\n>> 2) extend pg_stats_ext with necessary information (e.g. sizes)\n>>\n>> 3) create a new system view, with necessary information (so that \n>> pg_stats_ext does not need to be modified)\n>>\n>> 4) add functions returning the necessary information, possibly only \n>> for statistics the user can access (similarly to what pg_stats_ext does)\n>>\n>> Options 2-4 have the obvious disadvantage that this won't work on \n>> older releases (we can't add views or functions there). 
So I'm leaning \n>> towards #1 even if that means we have to remove some of the details. \n>> We can consider adding that for new releases, though.\n> \n> \n> Thanks for the useful advice. I go with option 1).\n> The following query is created by using pg_stats_ext instead of \n> pg_statistic_ext and pg_statistic_ext_data. However, I was confused\n> about writing a part of the query for calculating MCV size because\n> there are four columns related to MCV. For example, most_common_vals, \n> most_common_val_nulls, most_common_freqs, and most_common_base_freqs.\n> Currently, I don't know how to calculate the size of MCV by using the\n> four columns. Thoughts? :-)\n\nWell, my suggestion was to use pg_statistic_ext, because that lists all \nstatistics, while pg_stats_ext is filtering statistics depending on \naccess privileges. I think that's more appropriate for \\dX, the contents \nshould not change depending on the user.\n\nAlso, let me clarify - with option (1) we'd not show the sizes at all. \nThe size of the formatted statistics may be very different from the \non-disk representation, so I see no point in showing it in \\dX.\n\nWe might show other stats (e.g. number of MCV items, or the fraction of \ndata represented by the MCV list), but the user can inspect pg_stats_ext \nif needed.\n\nWhat we might do is to show those stats when a superuser is running this \ncommand, but I'm not sure that's a good idea (or how difficult would it \nbe to implement).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 18 Jan 2021 14:23:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas,\n\n>>> As for how to deal with this, I can think of about three ways:\n>>>\n>>> 1) simplify the command to only print information from pg_statistic_ext (so on information about which stats are built or sizes)\n>>>\n>>> 2) extend pg_stats_ext with necessary information (e.g. sizes)\n>>>\n>>> 3) create a new system view, with necessary information (so that pg_stats_ext does not need to be modified)\n>>>\n>>> 4) add functions returning the necessary information, possibly only for statistics the user can access (similarly to what pg_stats_ext does)\n>>>\n>>> Options 2-4 have the obvious disadvantage that this won't work on older releases (we can't add views or functions there). So I'm leaning towards #1 even if that means we have to remove some of the details. We can consider adding that for new releases, though.\n>>\n>>\n>> Thanks for the useful advice. I go with option 1).\n>> The following query is created by using pg_stats_ext instead of pg_statistic_ext and pg_statistic_ext_data. However, I was confused\n>> about writing a part of the query for calculating MCV size because\n>> there are four columns related to MCV. For example, most_common_vals, most_common_val_nulls, most_common_freqs, and most_common_base_freqs.\n>> Currently, I don't know how to calculate the size of MCV by using the\n>> four columns. Thoughts? :-)\n> \n> Well, my suggestion was to use pg_statistic_ext, because that lists all statistics, while pg_stats_ext is filtering statistics depending on access privileges. I think that's more appropriate for \\dX, the contents should not change depending on the user.\n> \n> Also, let me clarify - with option (1) we'd not show the sizes at all. The size of the formatted statistics may be very different from the on-disk representation, so I see no point in showing it in \\dX.\n> \n> We might show other stats (e.g. 
number of MCV items, or the fraction of data represented by the MCV list), but the user can inspect pg_stats_ext if needed.\n> \n> What we might do is to show those stats when a superuser is running this command, but I'm not sure that's a good idea (or how difficult would it be to implement).\n\n\nThanks for clarifying.\nI see that your suggestion was to use pg_statistic_ext, not pg_stats_ext.\nAnd we don't need the size of stats.\n\nIf that's the case, we also can't get the status of stats since PG12 or later\nbecause we can't use pg_statistic_ext_data, as you know. Therefore, it would be\nbetter to replace the query with the old query that I sent five months ago like this:\n\n# the old query\nSELECT\n stxnamespace::pg_catalog.regnamespace AS \"Schema\",\n stxrelid::pg_catalog.regclass AS \"Table\",\n stxname AS \"Name\",\n (SELECT pg_catalog.string_agg(pg_catalog.quote_ident(attname),', ')\n FROM pg_catalog.unnest(stxkeys) s(attnum)\n JOIN pg_catalog.pg_attribute a ON (stxrelid = a.attrelid AND\n a.attnum = s.attnum AND NOT attisdropped)) AS \"Columns\",\n 'd' = any(stxkind) AS \"Ndistinct\",\n 'f' = any(stxkind) AS \"Dependencies\",\n 'm' = any(stxkind) AS \"MCV\"\nFROM pg_catalog.pg_statistic_ext stat\nORDER BY 1,2;\n\n Schema | Table | Name | Columns | Ndistinct | Dependencies | MCV\n--------+--------+------------+---------+-----------+--------------+-----\n public | hoge1 | hoge1_ext | a, b | t | t | t\n public | hoge_t | hoge_t_ext | a, b | t | t | t\n(2 rows)\n\n\nThe above query is so simple so that we would better to use the following query:\n\n# This query works on PG10 or later\nSELECT\n es.stxnamespace::pg_catalog.regnamespace::text AS \"Schema\",\n es.stxname AS \"Name\",\n pg_catalog.format('%s FROM %s',\n (SELECT pg_catalog.string_agg(pg_catalog.quote_ident(a.attname),', ')\n FROM pg_catalog.unnest(es.stxkeys) s(attnum)\n JOIN pg_catalog.pg_attribute a\n ON (es.stxrelid = a.attrelid\n AND a.attnum = s.attnum\n AND NOT a.attisdropped)),\n 
es.stxrelid::regclass) AS \"Definition\",\n CASE WHEN 'd' = any(es.stxkind) THEN 'defined'\n END AS \"Ndistinct\",\n CASE WHEN 'f' = any(es.stxkind) THEN 'defined'\n END AS \"Dependencies\",\n CASE WHEN 'm' = any(es.stxkind) THEN 'defined'\n END AS \"MCV\"\nFROM pg_catalog.pg_statistic_ext es\nORDER BY 1, 2;\n\n Schema | Name | Definition | Ndistinct | Dependencies | Dependencies\n--------+------------+------------------+-----------+--------------+--------------\n public | hoge1_ext | a, b FROM hoge1 | defined | defined | defined\n public | hoge_t_ext | a, b FROM hoge_t | defined | defined | defined\n(2 rows)\n\n\nI'm going to create the WIP patch to use the above queriy.\nAny comments welcome. :-D\n\nThanks,\nTatsuro Yamada\n\n\n\n",
"msg_date": "Tue, 19 Jan 2021 09:44:31 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "\n\nOn 1/19/21 1:44 AM, Tatsuro Yamada wrote:\n> Hi Tomas,\n> \n>>>> As for how to deal with this, I can think of about three ways:\n>>>>\n>>>> 1) simplify the command to only print information from \n>>>> pg_statistic_ext (so on information about which stats are built or \n>>>> sizes)\n>>>>\n>>>> 2) extend pg_stats_ext with necessary information (e.g. sizes)\n>>>>\n>>>> 3) create a new system view, with necessary information (so that \n>>>> pg_stats_ext does not need to be modified)\n>>>>\n>>>> 4) add functions returning the necessary information, possibly only \n>>>> for statistics the user can access (similarly to what pg_stats_ext \n>>>> does)\n>>>>\n>>>> Options 2-4 have the obvious disadvantage that this won't work on \n>>>> older releases (we can't add views or functions there). So I'm \n>>>> leaning towards #1 even if that means we have to remove some of the \n>>>> details. We can consider adding that for new releases, though.\n>>>\n>>>\n>>> Thanks for the useful advice. I go with option 1).\n>>> The following query is created by using pg_stats_ext instead of \n>>> pg_statistic_ext and pg_statistic_ext_data. However, I was confused\n>>> about writing a part of the query for calculating MCV size because\n>>> there are four columns related to MCV. For example, most_common_vals, \n>>> most_common_val_nulls, most_common_freqs, and most_common_base_freqs.\n>>> Currently, I don't know how to calculate the size of MCV by using the\n>>> four columns. Thoughts? :-)\n>>\n>> Well, my suggestion was to use pg_statistic_ext, because that lists \n>> all statistics, while pg_stats_ext is filtering statistics depending \n>> on access privileges. I think that's more appropriate for \\dX, the \n>> contents should not change depending on the user.\n>>\n>> Also, let me clarify - with option (1) we'd not show the sizes at all. 
\n>> The size of the formatted statistics may be very different from the \n>> on-disk representation, so I see no point in showing it in \\dX.\n>>\n>> We might show other stats (e.g. number of MCV items, or the fraction \n>> of data represented by the MCV list), but the user can inspect \n>> pg_stats_ext if needed.\n>>\n>> What we might do is to show those stats when a superuser is running \n>> this command, but I'm not sure that's a good idea (or how difficult \n>> would it be to implement).\n> \n> \n> Thanks for clarifying.\n> I see that your suggestion was to use pg_statistic_ext, not pg_stats_ext.\n> And we don't need the size of stats.\n> \n> If that's the case, we also can't get the status of stats since PG12 or \n> later\n> because we can't use pg_statistic_ext_data, as you know. Therefore, it \n> would be\n> better to replace the query with the old query that I sent five months \n> ago like this:\n> \n> # the old query\n> SELECT\n> stxnamespace::pg_catalog.regnamespace AS \"Schema\",\n> stxrelid::pg_catalog.regclass AS \"Table\",\n> stxname AS \"Name\",\n> (SELECT pg_catalog.string_agg(pg_catalog.quote_ident(attname),', ')\n> FROM pg_catalog.unnest(stxkeys) s(attnum)\n> JOIN pg_catalog.pg_attribute a ON (stxrelid = a.attrelid AND\n> a.attnum = s.attnum AND NOT attisdropped)) AS \"Columns\",\n> 'd' = any(stxkind) AS \"Ndistinct\",\n> 'f' = any(stxkind) AS \"Dependencies\",\n> 'm' = any(stxkind) AS \"MCV\"\n> FROM pg_catalog.pg_statistic_ext stat\n> ORDER BY 1,2;\n> \n> Schema | Table | Name | Columns | Ndistinct | Dependencies | MCV\n> --------+--------+------------+---------+-----------+--------------+-----\n> public | hoge1 | hoge1_ext | a, b | t | t | t\n> public | hoge_t | hoge_t_ext | a, b | t | t | t\n> (2 rows)\n> \n> \n> The above query is so simple so that we would better to use the \n> following query:\n> \n> # This query works on PG10 or later\n> SELECT\n> es.stxnamespace::pg_catalog.regnamespace::text AS \"Schema\",\n> es.stxname AS \"Name\",\n> 
pg_catalog.format('%s FROM %s',\n> (SELECT \n> pg_catalog.string_agg(pg_catalog.quote_ident(a.attname),', ')\n> FROM pg_catalog.unnest(es.stxkeys) s(attnum)\n> JOIN pg_catalog.pg_attribute a\n> ON (es.stxrelid = a.attrelid\n> AND a.attnum = s.attnum\n> AND NOT a.attisdropped)),\n> es.stxrelid::regclass) AS \"Definition\",\n> CASE WHEN 'd' = any(es.stxkind) THEN 'defined'\n> END AS \"Ndistinct\",\n> CASE WHEN 'f' = any(es.stxkind) THEN 'defined'\n> END AS \"Dependencies\",\n> CASE WHEN 'm' = any(es.stxkind) THEN 'defined'\n> END AS \"MCV\"\n> FROM pg_catalog.pg_statistic_ext es\n> ORDER BY 1, 2;\n> \n> Schema | Name | Definition | Ndistinct | Dependencies | \n> Dependencies\n> --------+------------+------------------+-----------+--------------+-------------- \n> \n> public | hoge1_ext | a, b FROM hoge1 | defined | defined | \n> defined\n> public | hoge_t_ext | a, b FROM hoge_t | defined | defined | \n> defined\n> (2 rows)\n> \n> \n> I'm going to create the WIP patch to use the above queriy.\n> Any comments welcome. :-D\n> \n\nYes, I think using this simpler query makes sense. If we decide we need \nsomething more elaborate, we can improve that by in future PostgreSQL \nversions (after adding view/function to core), but I'd leave that as a \nwork for the future.\n\nApologies for all the extra work - I haven't realized this flaw when \npushing for showing more stuff :-(\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 19 Jan 2021 03:52:22 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi,\n\n> The above query is so simple so that we would better to use the following query:\n> \n> # This query works on PG10 or later\n> SELECT\n> es.stxnamespace::pg_catalog.regnamespace::text AS \"Schema\",\n> es.stxname AS \"Name\",\n> pg_catalog.format('%s FROM %s',\n> (SELECT pg_catalog.string_agg(pg_catalog.quote_ident(a.attname),', ')\n> FROM pg_catalog.unnest(es.stxkeys) s(attnum)\n> JOIN pg_catalog.pg_attribute a\n> ON (es.stxrelid = a.attrelid\n> AND a.attnum = s.attnum\n> AND NOT a.attisdropped)),\n> es.stxrelid::regclass) AS \"Definition\",\n> CASE WHEN 'd' = any(es.stxkind) THEN 'defined'\n> END AS \"Ndistinct\",\n> CASE WHEN 'f' = any(es.stxkind) THEN 'defined'\n> END AS \"Dependencies\",\n> CASE WHEN 'm' = any(es.stxkind) THEN 'defined'\n> END AS \"MCV\"\n> FROM pg_catalog.pg_statistic_ext es\n> ORDER BY 1, 2;\n> \n> Schema | Name | Definition | Ndistinct | Dependencies | Dependencies\n> --------+------------+------------------+-----------+--------------+--------------\n> public | hoge1_ext | a, b FROM hoge1 | defined | defined | defined\n> public | hoge_t_ext | a, b FROM hoge_t | defined | defined | defined\n> (2 rows)\n> \n> \n> I'm going to create the WIP patch to use the above query.\n> Any comments welcome. :-D\n\n\nAttached patch is WIP patch.\n\nThe changes are:\n - Use pg_statistic_ext only\n - Remove these statuses: \"required\" and \"built\"\n - Add new status: \"defined\"\n - Remove the size columns\n - Fix document\n\nI'll create and send the regression test on the next patch if there is\nno objection. Is it Okay?\n\nRegards,\nTatsuro Yamada",
"msg_date": "Tue, 19 Jan 2021 12:02:02 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas,\n\nOn 2021/01/19 11:52, Tomas Vondra wrote:\n> \n>> I'm going to create the WIP patch to use the above queriy.\n>> Any comments welcome. :-D\n> \n> Yes, I think using this simpler query makes sense. If we decide we need something more elaborate, we can improve that by in future PostgreSQL versions (after adding view/function to core), but I'd leave that as a work for the future.\n\n\nI see, thanks!\n\n\n> Apologies for all the extra work - I haven't realized this flaw when pushing for showing more stuff :-(\n\n\nDon't worry about it. We didn't notice the problem even when viewed by multiple\npeople on -hackers. Let's keep moving forward. :-D\n\nI'll send a patch including a regression test on the next patch.\n\nRegards,\nTatsuro Yamada\n\n\n\n",
"msg_date": "Wed, 20 Jan 2021 11:35:03 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas,\n\nOn 2021/01/20 11:35, Tatsuro Yamada wrote:\n>> Apologies for all the extra work - I haven't realized this flaw when pushing for showing more stuff :-(\n> \n> Don't worry about it. We didn't notice the problem even when viewed by multiple\n> people on -hackers. Let's keep moving forward. :-D\n> \n> I'll send a patch including a regression test on the next patch.\n\n\nI created patches and my test results on PG10, 11, 12, and 14 are fine.\n\n 0001:\n - Fix query to use pg_statistic_ext only\n - Replace statuses \"required\" and \"built\" with \"defined\"\n - Remove the size columns\n - Fix document\n - Add schema name as a filter condition on the query\n\n 0002:\n - Fix all results of \\dX\n - Add new testcase by non-superuser\n\nPlease find attached files. :-D\n\n\nRegards,\nTatsuro Yamada",
"msg_date": "Wed, 20 Jan 2021 15:41:57 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "\n\nOn 1/20/21 7:41 AM, Tatsuro Yamada wrote:\n> Hi Tomas,\n> \n> On 2021/01/20 11:35, Tatsuro Yamada wrote:\n>>> Apologies for all the extra work - I haven't realized this flaw when \n>>> pushing for showing more stuff :-(\n>>\n>> Don't worry about it. We didn't notice the problem even when viewed by \n>> multiple\n>> people on -hackers. Let's keep moving forward. :-D\n>>\n>> I'll send a patch including a regression test on the next patch.\n> \n> \n> I created patches and my test results on PG10, 11, 12, and 14 are fine.\n> \n> 0001:\n> - Fix query to use pg_statistic_ext only\n> - Replace statuses \"required\" and \"built\" with \"defined\"\n> - Remove the size columns\n> - Fix document\n> - Add schema name as a filter condition on the query\n> \n> 0002:\n> - Fix all results of \\dX\n> - Add new testcase by non-superuser\n> \n> Please find attached files. :-D\n\nThanks, I've pushed this. I had to tweak the regression tests a bit, for \ntwo reasons:\n\n1) to change user in regression tests, don't use \\connect, but SET ROLE \nand RESET ROLE\n\n2) roles in regression tests should use names with regress_ prefix\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 20 Jan 2021 23:00:50 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "Hi Tomas and hackers,\n\nOn 2021/01/21 7:00, Tomas Vondra wrote:\n>> I created patches and my test results on PG10, 11, 12, and 14 are fine.\n>>\n>> 0001:\n>> - Fix query to use pg_statistic_ext only\n>> - Replace statuses \"required\" and \"built\" with \"defined\"\n>> - Remove the size columns\n>> - Fix document\n>> - Add schema name as a filter condition on the query\n>>\n>> 0002:\n>> - Fix all results of \\dX\n>> - Add new testcase by non-superuser\n>>\n>> Please find attached files. :-D\n> \n> Thanks, I've pushed this. I had to tweak the regression tests a bit, for two reasons:\n> \n> 1) to change user in regression tests, don't use \\connect, but SET ROLE and RESET ROLE\n> \n> 2) roles in regression tests should use names with regress_ prefix\n\n\nThanks for reviewing many times and committing the feature!\n\nI understood 1) and 2). I'll keep that in mind when developing the next patch.\nThen, if possible, could you add Justin to the commit message as a reviewer?\nI revised the document partly based on his comments.\n\nFinally, as extended stats become more widely used, this feature will become more useful.\nI hope it helps DBAs. :-D\n\n\nThanks,\nTatsuro Yamada\n\n\n\n\n",
"msg_date": "Thu, 21 Jan 2021 08:53:14 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql"
},
{
"msg_contents": "On Wed, Jan 20, 2021 at 11:00:50PM +0100, Tomas Vondra wrote:\n> Thanks, I've pushed this. I had to tweak the regression tests a bit, for two\n> reasons:\n\n\\dX isn't checking schema visibility rules, so accidentally shows stats objects\noutside of the search path. I noticed after installing the PG14b1 client,\nsince we create stats objects in a separate schema to allow excluding them with\npg_dump -N.\n\ndiff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c\nindex 195f8d8cd2..e29f13c65e 100644\n--- a/src/bin/psql/describe.c\n+++ b/src/bin/psql/describe.c\n@@ -4774,7 +4774,7 @@ listExtendedStats(const char *pattern)\n \tprocessSQLNamePattern(pset.db, &buf, pattern,\n \t\t\t\t\t\t false, false,\n \t\t\t\t\t\t \"es.stxnamespace::pg_catalog.regnamespace::text\", \"es.stxname\",\n-\t\t\t\t\t\t NULL, NULL);\n+\t\t\t\t\t\t NULL, \"pg_catalog.pg_statistics_obj_is_visible(es.oid)\");\n \n \tappendPQExpBufferStr(&buf, \"ORDER BY 1, 2;\");\n \n\n\n",
"msg_date": "Sun, 30 May 2021 12:24:18 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql (\\dX)"
},
{
"msg_contents": "\n\nOn 5/30/21 7:24 PM, Justin Pryzby wrote:\n> On Wed, Jan 20, 2021 at 11:00:50PM +0100, Tomas Vondra wrote:\n>> Thanks, I've pushed this. I had to tweak the regression tests a bit, for two\n>> reasons:\n> \n> \\dX isn't checking schema visibility rules, so accidentally shows stats objects\n> outside of the search path. I noticed after installing the PG14b1 client,\n> since we create stats objects in a separate schema to allow excluding them with\n> pg_dump -N.\n> \n> diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c\n> index 195f8d8cd2..e29f13c65e 100644\n> --- a/src/bin/psql/describe.c\n> +++ b/src/bin/psql/describe.c\n> @@ -4774,7 +4774,7 @@ listExtendedStats(const char *pattern)\n> \tprocessSQLNamePattern(pset.db, &buf, pattern,\n> \t\t\t\t\t\t false, false,\n> \t\t\t\t\t\t \"es.stxnamespace::pg_catalog.regnamespace::text\", \"es.stxname\",\n> -\t\t\t\t\t\t NULL, NULL);\n> +\t\t\t\t\t\t NULL, \"pg_catalog.pg_statistics_obj_is_visible(es.oid)\");\n> \n> \tappendPQExpBufferStr(&buf, \"ORDER BY 1, 2;\");\n> \n\nThanks for noticing this! Will push.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 30 May 2021 22:05:21 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql (\\dX)"
},
{
"msg_contents": "Hi,\n\nHere's a slightly more complete patch, tweaking the regression tests a\nbit to detect this.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 6 Jun 2021 21:47:16 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql (\\dX)"
},
{
"msg_contents": "Hi Tomas and Justin,\n\nOn 2021/06/07 4:47, Tomas Vondra wrote:\n> Here's a slightly more complete patch, tweaking the regression tests a\n> bit to detect this.\n\n\nI tested your patch on PG14beta2 and PG15devel,\nand both work fine.\n=======================\n All 209 tests passed.\n=======================\n\nNext time I create a feature in psql, I will be careful to add\na check for schema visibility rules. :-D\n\nThanks,\nTatsuro Yamada\n\n\n\n\n\n",
"msg_date": "Thu, 08 Jul 2021 13:46:41 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql (\\dX)"
},
{
"msg_contents": "Hi,\n\nI've pushed the last version of the fix, including the regression tests \netc. Backpatch to 14, where \\dX was introduced.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 26 Jul 2021 21:26:26 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: list of extended statistics on psql (\\dX)"
},
{
"msg_contents": "Hi Tomas and Justin,\n\nOn 2021/07/27 4:26, Tomas Vondra wrote:\n> Hi,\n> \n> I've pushed the last version of the fix, including the regression tests etc. Backpatch to 14, where \\dX was introduced.\n\n\nThank you!\n\n\nRegards,\nTatsuro Yamada\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 09:25:57 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: list of extended statistics on psql (\\dX)"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nI may be wrong, but an issue seems to have been introduced by the\nfollowing commit (March 11, 2019):\n\n Allow fractional input values for integer GUCs, and improve rounding logic\n https://github.com/postgres/postgres/commit/1a83a80a2fe5b559f85ed4830acb92d5124b7a9a\n\nThe changes made allow fractional input for some cases where I believe\nit shouldn't be allowed (i.e. when the setting does not accept a\nunit).\nFor example,\n\nlog_file_mode = 384.234\nmax_connections = 1.0067e2\nport = 5432.123\n\n(Is it intentional - or indeed useful - to allow such settings, for\ninteger options?)\n\nAlso, the modified parse_int() function is used for parsing other\noptions, such as the integer storage parameters for CREATE TABLE and\nCREATE INDEX. For example, the following integer parameter settings\nare currently allowed but I don't believe that they should be:\n\nCREATE TABLE ... WITH (fillfactor = 23.45);\nCREATE TABLE ... WITH (parallel_workers = 5.4);\n\n\nI have attached a patch with a proposed correction, keeping it a\nsimple change to the existing parse_int() function, rather than making\nfurther changes for more optimal integer parsing code. The patch also\nupdates a couple of test cases (reverting one to its original state\nbefore the commit mentioned above).\n\nLet me know what you think.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia",
"msg_date": "Mon, 24 Aug 2020 19:31:12 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": true,
"msg_subject": "Issue with past commit: Allow fractional input values for integer\n GUCs ..."
},
{
"msg_contents": "Greg Nancarrow <gregn4422@gmail.com> writes:\n> The changes made allow fractional input for some cases where I believe\n> it shouldn't be allowed (i.e. when the setting does not accept a\n> unit).\n> ...\n> (Is it intentional - or indeed useful - to allow such settings, for\n> integer options?)\n\nGiven that the commit included a test case exercising exactly that,\nI'm not sure why you might think it was unintentional. IIRC, the\nreasoning was that we ought to hide whether any given GUC is int or\nfloat underneath, in anticipation of future changes like caf626b2c.\nAnother argument is that in regular SQL, you can assign a fractional\nvalue to an integer column and the system will let you do it; so\nwhy not in SET?\n\nIn any case, we already shipped that behavior in v12, so I don't think\nwe can take it away now. People don't appreciate formerly valid\nsettings suddenly not working any more.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Aug 2020 10:17:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Issue with past commit: Allow fractional input values for integer\n GUCs ..."
},
{
"msg_contents": "> Given that the commit included a test case exercising exactly that,\n> I'm not sure why you might think it was unintentional.\n\nWell, maybe not exercising exactly that. No positive test case was\nadded. The commit replaced a CREATE TABLE fillfactor test case testing\nthat \"30.5\" is invalid, with a test case testing that \"-30.1\" is\nout-of-range. I guess that does indirectly test that \"-30.1\" is not an\nimproper value, though the out-of-range error means that test case\nshould really be put in the \"-- Fail min/max values check\" section and\nnot in the \"-- Fail while setting improper values\" section.\n\nMy point was that allowing the fractional input really only makes\nsense if the \"integer\" option/GUC has an associated unit. That's why I\nquestioned whether allowing it in this case (when the integer\noption/GUC has no associated unit, like \"port\" or \"max_connections\")\nwas intentional or useful.\n\n\n> IIRC, the\n> reasoning was that we ought to hide whether any given GUC is int or\n> float underneath, in anticipation of future changes like caf626b2c.\n> Another argument is that in regular SQL, you can assign a fractional\n> value to an integer column and the system will let you do it; so\n> why not in SET?\n>\n> In any case, we already shipped that behavior in v12, so I don't think\n> we can take it away now. People don't appreciate formerly valid\n> settings suddenly not working any more.\n>\n\nI guess we'll have to live with the current behaviour then.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 25 Aug 2020 12:41:25 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Issue with past commit: Allow fractional input values for integer\n GUCs ..."
},
{
"msg_contents": "On Mon, Aug 24, 2020 at 5:32 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> For example,\n>\n> log_file_mode = 384.234\n> max_connections = 1.0067e2\n> port = 5432.123\n> CREATE TABLE ... WITH (fillfactor = 23.45);\n> CREATE TABLE ... WITH (parallel_workers = 5.4);\n\nI don't think any of these cases should be allowed. Surely if we\nallowed 384.234 to be inserted into an integer column, everyone would\nsay that we'd lost our minds. These cases seem no different. The\ndiscussion to which the commit links is mainly about allowing 0.2s to\nwork like 200ms, or something of that sort, when the value is\nspecified as a fraction but works out to an integer when converted to\nthe base unit. That is a completely different thing from letting\npeople configure 5.4 parallel workers.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 26 Aug 2020 16:12:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with past commit: Allow fractional input values for integer\n GUCs ..."
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I don't think any of these cases should be allowed. Surely if we\n> allowed 384.234 to be inserted into an integer column, everyone would\n> say that we'd lost our minds.\n\nregression=# create table itable (f1 int);\nCREATE TABLE\nregression=# insert into itable values (384.234);\nINSERT 0 1\nregression=# table itable;\n f1 \n-----\n 384\n(1 row)\n\nIt's always worked like that, and nobody's complained about it.\nI suspect, in fact, that one could find chapter and verse in the\nSQL spec that requires it, just like \"numeric\" values should get\nrounded if you write too many digits.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Aug 2020 16:47:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Issue with past commit: Allow fractional input values for integer\n GUCs ..."
},
{
"msg_contents": "On Wed, Aug 26, 2020 at 4:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> regression=# create table itable (f1 int);\n> CREATE TABLE\n> regression=# insert into itable values (384.234);\n> INSERT 0 1\n> regression=# table itable;\n> f1\n> -----\n> 384\n> (1 row)\n>\n> It's always worked like that, and nobody's complained about it.\n> I suspect, in fact, that one could find chapter and verse in the\n> SQL spec that requires it, just like \"numeric\" values should get\n> rounded if you write too many digits.\n\nThat is a bit different from what I had in mind, because it does not\ninvolve a call to int4in(). Instead, it involves a cast. I was\nimagining that you would put quotation marks around 384.234, which\ndoes indeed fail:\n\nERROR: invalid input syntax for type integer: \"384.234\"\n\nSo the question is whether users who supply values for integer-valued\nreloptions, or integer-valued GUCs, expect that they will be parsed as\nintegers, or whether they expect that they will be parsed as float\nvalues and then cast to integers.\n\nWhile the new behavior seems fine -- and indeed convenient -- for GUCs\nthat are numeric with a unit, it does not seem very nice at all for\nGUCs that are unitless integers. Why do you think that anyone would be\npleased to discover that they can set port = 543.2? We must decide\nwhether it is more likely that such a setting is the result of a user\nerror about which we should issue some complaint, or on the other hand\nwhether the user is hoping that we will be good enough to round the\nvalue off so as to spare them the trouble. My own view is that the\nformer is vastly more probable than the latter.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 26 Aug 2020 18:34:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with past commit: Allow fractional input values for integer\n GUCs ..."
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> While the new behavior seems fine -- and indeed convenient -- for GUCs\n> that are numeric with a unit, it does not seem very nice at all for\n> GUCs that are unitless integers.\n\nI find that distinction to be entirely without merit; not least because\nwe also have unitless float GUCs. I think the fact that we have some\nfloat and some integer GUCs is an implementation detail more than a\nfundamental property --- especially since SQL considers integers\nto be just the scale-zero subset of numerics. I recognize that your\nopinion is different, but to me it seems fine as-is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Aug 2020 19:27:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Issue with past commit: Allow fractional input values for integer\n GUCs ..."
}
] |
[
{
"msg_contents": "Hi,\n\nAs specified in $subject, if the bitmap constructed by bitmap index\nscan is non-lossy i.e. row-level bitmap, then showing \"Recheck Cond\"\nin EXPLAIN ANALYZE output is pointless. However in EXPLAIN without\nANALYZE we can't say the bitmap is actually a non-lossy one, as we\ndon't actually construct the \"original\" bitmap, so showing \"Recheck\nCond\" in this case makes sense.\n\nAttaching a small patch that corrects EXPLAIN ANALYZE output for bitmap scans.\n\nNote: $subject is identified in [1].\n\nThoughts?\n\n[1] - https://www.youtube.com/watch?v=UXKYAZOWDgk ---> at 13:50 (mm:ss)\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 24 Aug 2020 16:06:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoid displaying unnecessary \"Recheck Cond\" in EXPLAIN ANALYZE output\n if the bitmap is non-lossy"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> As specified in $subject, if the bitmap constructed by bitmap index\n> scan is non-lossy i.e. row-level bitmap, then showing \"Recheck Cond\"\n> in EXPLAIN ANALYZE output is pointless. However in EXPLAIN without\n> ANALYZE we can't say the bitmap is actually a non-lossy one, as we\n> don't actually construct the \"original\" bitmap, so showing \"Recheck\n> Cond\" in this case makes sense.\n\nI do not think this change makes even a little bit of sense.\nThe recheck condition is part of the plan structure, it is not\nexecution statistics.\n\nI compare this proposal to having EXPLAIN suppress plan tree nodes\nentirely if they weren't executed. We don't do that and it\nwouldn't be an improvement. Especially not for non-text output\nformats, where the schema of fields that are presented ought to\nbe fixed for any given plan tree.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Aug 2020 10:04:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid displaying unnecessary \"Recheck Cond\" in EXPLAIN ANALYZE\n output if the bitmap is non-lossy"
}
] |
[
{
"msg_contents": "Hi all,\n\nAdmittedly quite ahead of time, I would like to volunteer as Commitfest manager for 2020-11.\n\nIf the role is not filled and there are no objections, I can reach out again in October for confirmation.\n\n//Georgios\n\n\n",
"msg_date": "Mon, 24 Aug 2020 13:08:53 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": true,
"msg_subject": "Commitfest manager 2020-11"
},
{
"msg_contents": "On 24.08.2020 16:08, gkokolatos@pm.me wrote:\n> Hi all,\n>\n> Admittedly quite ahead of time, I would like to volunteer as Commitfest manager for 2020-11.\n>\n> If the role is not filled and there are no objections, I can reach out again in October for confirmation.\n>\n> //Georgios\n\nWow, that was well in advance) I am willing to assist if you need any help.\n\nI was looking for this message to find out who the current CFM is. \nApparently, the November commitfest is not in progress yet.\n\nStill, I have a question. Should we also maintain statuses of the \npatches in the \"Open\" commitfest? 21 patches were already committed \nduring this CF, which shows that even the \"Open\" CF is quite active. I've \nupdated a few patches that were sent by my colleagues. If there are no \nobjections, I can do that for other entries too.\n\nOn the other hand, I noticed a lot of stalled threads that weren't \nupdated in months. Some of them seem to pass several CFs without any \nactivity at all. I believe that it is wrong for many reasons, the biggest \nof which IMHO is the frustration of the authors. Can we come up with \nsomething to improve this situation?\n\nP.S. I have a few more ideas about the CF management. I suppose that \nthey are usually discussed at pgcon meetings, but those won't \nhappen anytime soon. Is there a special place for such discussions, or may I \ncontinue this thread?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Fri, 16 Oct 2020 21:04:45 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager 2020-11"
},
{
"msg_contents": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru> writes:\n> I was looking for this message to find out who the current CFM is. \n> Apparently, the November commitfest is not in progress yet.\n\nNope, nor have we officially appointed a CFM for it yet. We're seldom\norganized enough to do that much in advance of the CF's start.\n\n> Still, I have a question. Should we also maintain statuses of the \n> patches in the \"Open\" commitfest?\n\nYes, absolutely, if you notice something out-of-date there, go ahead\nand fix it. If nothing else, you'll save the eventual CFM some time.\n\n> On the other hand, I noticed a lot of stalled threads that weren't \n> updated in months. Some of them seem to pass several CFs without any \n> activity at all. I believe that it is wrong for many reasons, the biggest \n> of which IMHO is the frustration of the authors. Can we come up with \n> something to improve this situation?\n\nYeah, that's a perennial problem. Part of the issue is just a shortage\nof people --- there are always more patches than we can review and\ncommit in one month. IMO, another cause is that we have a hard time\nsaying \"no\". If a particular patch isn't too well liked, we tend to\njust let it slide to the next CF rather than making the uncomfortable\ndecision to reject it. If you've got thoughts about that, or any other\nways to improve the process, for sure speak up.\n\n> P.S. I have a few more ideas about the CF management. I suppose that \n> they are usually discussed at pgcon meetings, but those won't \n> happen anytime soon. Is there a special place for such discussions, or may I \n> continue this thread?\n\nThis thread seems like an OK place for the discussion. As you say,\nthere are not likely to be any in-person meetings for awhile :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Oct 2020 14:57:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager 2020-11"
},
{
"msg_contents": "On 16.10.2020 21:57, Tom Lane wrote:\n> Anastasia Lubennikova <a.lubennikova@postgrespro.ru> writes:\n>> On the other hand, I noticed a lot of stalled threads that weren't\n>> updated in months. Some of them seem to pass several CFs without any\n>> activity at all. I believe that it is wrong for many reasons, the biggest\n>> of which IMHO is the frustration of the authors. Can we come up with\n>> something to improve this situation?\n> Yeah, that's a perennial problem. Part of the issue is just a shortage\n> of people --- there are always more patches than we can review and\n> commit in one month. IMO, another cause is that we have a hard time\n> saying \"no\". If a particular patch isn't too well liked, we tend to\n> just let it slide to the next CF rather than making the uncomfortable\n> decision to reject it. If you've got thoughts about that, or any other\n> ways to improve the process, for sure speak up.\n>\n\n From a CFM perspective, we can try the following things:\n\n- Write recaps for long-running threads, listing open questions and TODOs.\nThis one is my personal pain. Some threads do look scary and it is less \nlikely that someone will even start a review if they have to catch up \nwith a year-long discussion of 10 people.\n\n- Mark patches from first-time contributors with some tag.\nProbably, these entries are simple enough to be handled faster. \nIt will also be a good reminder to be a bit less demanding with \nbeginners. See Dmitry's statistics about how many people have sent a patch \nonly once [1].\n\n- Proactively ask committers if they are going to work on the upcoming \nCF and whether they will need any specific help.\nMaybe we can also ask about their preferred code areas and check what is \nleft uncovered. It's really bad if there is no one who is working on, \nlet's say, WAL internals during the CF. TBH, I have no idea what we are \ngoing to do with this knowledge, but I think it's better to know.\n\n- From time to time, call a council of several committers and make tough \ndecisions about patches that have been in discussion for too long (let's say 4 \ncommitfests).\nHopefully, it will be easier to reach a consensus in a \"real-time\" \ndiscussion, or we can toss a coin. This problem was raised in previous \ndiscussions too [2].\n\n[1] \nhttps://www.postgresql.org/message-id/CA+q6zcXtg7cFwX-c+BoOwk65+jtR-sQGZ=1mqG-VGMVZuH86sQ@mail.gmail.com\n[2] \nhttps://www.postgresql.org/message-id/flat/CAA8%3DA7-owFLugBVZ0JjehTZJue7brEs2qTjVyZFRDq-B%3D%2BNwNg%40mail.gmail.com\n\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Fri, 16 Oct 2020 23:21:38 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager 2020-11"
},
{
"msg_contents": "\n\n\n\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> [snip]\n>\n> Wow, that was well in advance) I am willing to assist if you need any help.\n>\n\nIndeed it was a bit early. I left for vacation after that. For the record, I am newly active to the community. In our PUG, in Stockholm, we held a meetup during which a contributor presented ways to contribute to the community, one of which is becoming CFM. So, I thought of picking up the recommendation.\n\nI have taken little part in CFs as reviewer/author and I have no idea how a CF is actually run. A contributor from Stockholm has been willing to mentor me to the part.\n\nSince you have both the knowledge and specific ideas on improving the CF, how about me assisting you? I could shadow you and you can groom me to the part, so that I can take the lead to a future CF more effectively.\n\nThis is just a suggestion of course. I am happy with anything that can help the community as a whole.\n\n\n\n",
"msg_date": "Tue, 20 Oct 2020 07:30:39 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": true,
"msg_subject": "Re: Commitfest manager 2020-11"
},
{
"msg_contents": "On 20.10.2020 10:30, gkokolatos@pm.me wrote:\n>\n>\n>\n>\n> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n>> [snip]\n>>\n>> Wow, that was well in advance) I am willing to assist if you need any help.\n>>\n> Indeed it was a bit early. I left for vacation after that. For the record, I am newly active to the community. In our PUG, in Stockholm, we held a meetup during which a contributor presented ways to contribute to the community, one of which is becoming CFM. So, I thought of picking up the recommendation.\n>\n> I have taken little part in CFs as reviewer/author and I have no idea how a CF is actually run. A contributor from Stockholm has been willing to mentor me to the part.\n>\n> Since you have both the knowledge and specific ideas on improving the CF, how about me assisting you? I could shadow you and you can groom me to the part, so that I can take the lead to a future CF more effectively.\n>\n> This is just a suggestion of course. I am happy with anything that can help the community as a whole.\n>\nEven though, I've worked a lot with community, I have never been CFM \nbefore as well. So, I think we can just follow these articles:\n\nhttps://www.2ndquadrant.com/en/blog/managing-a-postgresql-commitfest/\nhttps://wiki.postgresql.org/wiki/CommitFest_Checklist\n\nSome parts are a bit outdated, but in general the checklist is clear. \nI've already requested CFM privileges in pgsql-www and I'm going to \nspend next week sending pings and updates to the patches at commitfest.\n\nThere are already 219 patches, so I will appreciate if you join me in \nthis task.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Sun, 25 Oct 2020 21:01:01 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest manager 2020-11"
},
{
"msg_contents": "\n\n\n\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Sunday, October 25, 2020 8:01 PM, Anastasia Lubennikova <a.lubennikova@postgrespro.ru> wrote:\n\n> On 20.10.2020 10:30, gkokolatos@pm.me wrote:\n>\n> > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> >\n> > > [snip]\n> > > Wow, that was well in advance) I am willing to assist if you need any help.\n> >\n> > Indeed it was a bit early. I left for vacation after that. For the record, I am newly active to the community. In our PUG, in Stockholm, we held a meetup during which a contributor presented ways to contribute to the community, one of which is becoming CFM. So, I thought of picking up the recommendation.\n> > I have taken little part in CFs as reviewer/author and I have no idea how a CF is actually run. A contributor from Stockholm has been willing to mentor me to the part.\n> > Since you have both the knowledge and specific ideas on improving the CF, how about me assisting you? I could shadow you and you can groom me to the part, so that I can take the lead to a future CF more effectively.\n> > This is just a suggestion of course. I am happy with anything that can help the community as a whole.\n>\n> Even though, I've worked a lot with community, I have never been CFM\n> before as well. So, I think we can just follow these articles:\n>\n> https://www.2ndquadrant.com/en/blog/managing-a-postgresql-commitfest/\n> https://wiki.postgresql.org/wiki/CommitFest_Checklist\n>\n> Some parts are a bit outdated, but in general the checklist is clear.\n> I've already requested CFM privileges in pgsql-www and I'm going to\n> spend next week sending pings and updates to the patches at commitfest.\n\nAwesome. 
I will start with requesting the privileges then.\n\n>\n> There are already 219 patches, so I will appreciate if you join me in\n> this task.\n\nCount me in.\n\n>\n> --\n>\n> Anastasia Lubennikova\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n\n\n\n\n",
"msg_date": "Tue, 27 Oct 2020 08:57:24 +0000",
"msg_from": "gkokolatos@pm.me",
"msg_from_op": true,
"msg_subject": "Re: Commitfest manager 2020-11"
}
] |
[
{
"msg_contents": "Hi Tom,\nI'm starting tests using ASAN (address sanitizer) at Windows side, using\nmsvc 2019 (built in asan support):\n\nFirst test report this:\n2020-08-24 10:02:33.220 -03 postmaster[6656] LOG: starting PostgreSQL\n14devel, compiled by Visual C++ build 1927, 64-bit\n2020-08-24 10:02:33.228 -03 postmaster[6656] LOG: listening on IPv6\naddress \"::1\", port 58080\n2020-08-24 10:02:33.228 -03 postmaster[6656] LOG: listening on IPv4\naddress \"127.0.0.1\", port 58080\n2020-08-24 10:02:33.415 -03 startup[1604] LOG: database system was shut\ndown at 2020-08-24 10:02:28 -03\n2020-08-24 10:02:33.495 -03 postmaster[6656] LOG: database system is ready\nto accept connections\n2020-08-24 10:02:34.580 -03 checkpointer[5680] LOG: checkpoint starting:\nimmediate force wait flush-all\n2020-08-24 10:02:34.598 -03 checkpointer[5680] LOG: checkpoint complete:\nwrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\nwrite=0.006 s, sync=0.000 s, total=0.018 s; sync files=0, longest=0.000 s,\naverage=0.000 s; distance=1 kB, estimate=1 kB\n2020-08-24 10:02:35.146 -03 checkpointer[5680] LOG: checkpoint starting:\nimmediate force wait\n2020-08-24 10:02:35.155 -03 checkpointer[5680] LOG: checkpoint complete:\nwrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\nwrite=0.002 s, sync=0.000 s, total=0.009 s; sync files=0, longest=0.000 s,\naverage=0.000 s; distance=0 kB, estimate=1 kB\n=================================================================\n==8400==AddressSanitizer CHECK failed:\nD:\\agent\\_work\\9\\s\\src\\vctools\\crt\\asan\\llvm\\compiler-rt\\lib\\asan\\asan_thread.cc:356\n\"((ptr[0] == kCurrentStackFrameMagic)) != (0)\" (0x0, 0x0)\n #0 0x7ffe985d0148 (C:\\Program Files (x86)\\Microsoft Visual\nStudio\\2019\\Community\\VC\\Tools\\MSVC\\14.27.29110\\bin\\HostX64\\x64\\clang_rt.asan_dynamic-x86_64.dll+0x180050148)\n #1 0x7ffe98597f3f (C:\\Program Files (x86)\\Microsoft 
Visual\nStudio\\2019\\Community\\VC\\Tools\\MSVC\\14.27.29110\\bin\\HostX64\\x64\\clang_rt.asan_dynamic-x86_64.dll+0x180017f3f)\n #2 0x7ffe985d5129 (C:\\Program Files (x86)\\Microsoft Visual\nStudio\\2019\\Community\\VC\\Tools\\MSVC\\14.27.29110\\bin\\HostX64\\x64\\clang_rt.asan_dynamic-x86_64.dll+0x180055129)\n #3 0x7ffe985b1de1 (C:\\Program Files (x86)\\Microsoft Visual\nStudio\\2019\\Community\\VC\\Tools\\MSVC\\14.27.29110\\bin\\HostX64\\x64\\clang_rt.asan_dynamic-x86_64.dll+0x180031de1)\n #4 0x7ffe985b0dea (C:\\Program Files (x86)\\Microsoft Visual\nStudio\\2019\\Community\\VC\\Tools\\MSVC\\14.27.29110\\bin\\HostX64\\x64\\clang_rt.asan_dynamic-x86_64.dll+0x180030dea)\n #5 0x7ffe985b30b5 (C:\\Program Files (x86)\\Microsoft Visual\nStudio\\2019\\Community\\VC\\Tools\\MSVC\\14.27.29110\\bin\\HostX64\\x64\\clang_rt.asan_dynamic-x86_64.dll+0x1800330b5)\n #6 0x7ffe985ce2bb (C:\\Program Files (x86)\\Microsoft Visual\nStudio\\2019\\Community\\VC\\Tools\\MSVC\\14.27.29110\\bin\\HostX64\\x64\\clang_rt.asan_dynamic-x86_64.dll+0x18004e2bb)\n #7 0x7ffe985d1d11 (C:\\Program Files (x86)\\Microsoft Visual\nStudio\\2019\\Community\\VC\\Tools\\MSVC\\14.27.29110\\bin\\HostX64\\x64\\clang_rt.asan_dynamic-x86_64.dll+0x180051d11)\n #8 0x14123da71 in dopr C:\\dll\\postgres\\src\\port\\snprintf.c:441\n #9 0x14123c127 in pg_vsnprintf C:\\dll\\postgres\\src\\port\\snprintf.c:195\n #10 0x141214cc0 in pvsnprintf C:\\dll\\postgres\\src\\common\\psprintf.c:110\n #11 0x14121cefe in appendStringInfoVA\nC:\\dll\\postgres\\src\\common\\stringinfo.c:149\n #12 0x14121cd9d in appendStringInfo\nC:\\dll\\postgres\\src\\common\\stringinfo.c:103\n #13 0x1411134c6 in send_message_to_server_log\nC:\\dll\\postgres\\src\\backend\\utils\\error\\elog.c:2923\n #14 0x14110d4f1 in EmitErrorReport\nC:\\dll\\postgres\\src\\backend\\utils\\error\\elog.c:1456\n #15 0x140c7537c in PostgresMain\nC:\\dll\\postgres\\src\\backend\\tcop\\postgres.c:4079\n #16 0x140a98f28 in 
BackendRun\nC:\\dll\\postgres\\src\\backend\\postmaster\\postmaster.c:4530\n #17 0x140a932ef in SubPostmasterMain\nC:\\dll\\postgres\\src\\backend\\postmaster\\postmaster.c:5053\n #18 0x14069dfab in main C:\\dll\\postgres\\src\\backend\\main\\main.c:186\n #19 0x1412694c8 in invoke_main\nD:\\agent\\_work\\9\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_common.inl:78\n #20 0x14126941d in __scrt_common_main_seh\nD:\\agent\\_work\\9\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_common.inl:288\n #21 0x1412692dd in __scrt_common_main\nD:\\agent\\_work\\9\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_common.inl:330\n #22 0x141269538 in mainCRTStartup\nD:\\agent\\_work\\9\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_main.cpp:16\n #23 0x7ffed1d46fd3 (C:\\WINDOWS\\System32\\KERNEL32.DLL+0x180016fd3)\n #24 0x7ffed30fcec0 (C:\\WINDOWS\\SYSTEM32\\ntdll.dll+0x18004cec0)\n\nI'm not sure if ASAN can report false positives or if this CHECK error is\nown asan bug?\nCan you take a look, please?\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 24 Aug 2020 10:24:04 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "[ASAN] Postgres14 (Windows 64 bits)"
}
] |
[
{
"msg_contents": "Hi\n\nI wrote a proof concept for the support window function from plpgsql.\n\nWindow function API - functions named WinFuncArg* are polymorphic and it is\nnot easy to wrap these functions for usage from SQL level. I wrote an\nenhancement of the GET statement - for this case GET WINDOW_CONTEXT, that\nallows safe and fast access to the result of these functions.\n\nCustom variant of row_number can look like:\n\ncreate or replace function pl_row_number()\nreturns bigint as $$\ndeclare pos int8;\nbegin\n pos := get_current_position(windowobject);\n pos := pos + 1;\n perform set_mark_position(windowobject, pos);\n return pos;\nend\n$$\nlanguage plpgsql window;\n\nCustom variant of lag function can look like:\n\ncreate or replace function pl_lag(numeric)\nreturns numeric as $$\ndeclare\n v numeric;\nbegin\n perform get_input_value_in_partition(windowobject, 1, -1, 'seek_current',\nfalse);\n get pg_window_context v = PG_INPUT_VALUE;\n return v;\nend;\n$$ language plpgsql window;\n\nCustom window functions can be used for generating missing data in time\nseries\n\ncreate table test_missing_values(id int, v integer);\ninsert into test_missing_values\nvalues(1,10),(2,11),(3,12),(4,null),(5,null),(6,15),(7,16);\n\ncreate or replace function pl_pcontext_test(numeric)\nreturns numeric as $$\ndeclare\n n numeric;\n v numeric;\nbegin\n perform get_input_value_for_row(windowobject, 1);\n get pg_window_context v = PG_INPUT_VALUE;\n if v is null then\n v := get_partition_context_value(windowobject, null::numeric);\n else\n perform set_partition_context_value(windowobject, v);\n end if;\n return v;\nend\n$$\nlanguage plpgsql window;\n\nselect id, v, pl_pcontext_test(v) over (order by id) from\ntest_missing_values;\n id | v | pl_pcontext_test.\n----+----+------------------\n 1 | 10 | 10\n 2 | 11 | 11\n 3 | 12 | 12\n 4 | | 12\n 5 | | 12\n 6 | 15 | 15\n 7 | 16 | 16\n(7 rows)\n\nI think about another variant for WinFuncArg functions where polymorphic\nargument is used 
similarly like in get_partition_context_value - this patch\nis prototype, but it works and I think so support of custom window\nfunctions in PL languages is possible and probably useful.\n\nComments, notes, ideas, objections?\n\nRegards\n\nPavel",
"msg_date": "Mon, 24 Aug 2020 18:08:06 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "poc - possibility to write window function in PL languages"
},
{
"msg_contents": "Hi\n\nI simplified access to results of winfuncargs functions by proxy type\n\"typedvalue\". This type can hold any Datum value, and allows fast cast to\nbasic buildin types or it can use (slower) generic cast functions. It is\nused in cooperation with a plpgsql assign statement that can choose the\ncorrect cast implicitly. When the winfuncarg function returns a value of\nthe same type, that is expected by the variable on the left side of the\nassign statement, then (for basic types), the value is just copied without\ncasts. With this proxy type is not necessary to have special statement for\nassigning returned value from winfuncargs functions, so source code of\nwindow function in plpgsql looks intuitive to me.\n\nExample - implementation of \"lag\" function in plpgsql\n\ncreate or replace function pl_lag(numeric)\nreturns numeric as $$\ndeclare v numeric;\nbegin\n v := get_input_value_in_partition(windowobject, 1, -1, 'seek_current',\nfalse);\n return v;\nend;\n$$ language plpgsql window;\n\nI think this code is usable, and I assign this patch to commitfest.\n\nRegards\n\nPavel",
"msg_date": "Wed, 26 Aug 2020 17:06:22 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: poc - possibility to write window function in PL languages"
},
{
"msg_contents": "st 26. 8. 2020 v 17:06 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> I simplified access to results of winfuncargs functions by proxy type\n> \"typedvalue\". This type can hold any Datum value, and allows fast cast to\n> basic buildin types or it can use (slower) generic cast functions. It is\n> used in cooperation with a plpgsql assign statement that can choose the\n> correct cast implicitly. When the winfuncarg function returns a value of\n> the same type, that is expected by the variable on the left side of the\n> assign statement, then (for basic types), the value is just copied without\n> casts. With this proxy type is not necessary to have special statement for\n> assigning returned value from winfuncargs functions, so source code of\n> window function in plpgsql looks intuitive to me.\n>\n> Example - implementation of \"lag\" function in plpgsql\n>\n> create or replace function pl_lag(numeric)\n> returns numeric as $$\n> declare v numeric;\n> begin\n> v := get_input_value_in_partition(windowobject, 1, -1, 'seek_current',\n> false);\n> return v;\n> end;\n> $$ language plpgsql window;\n>\n> I think this code is usable, and I assign this patch to commitfest.\n>\n> Regards\n>\n> Pavel\n>\n\nfix regress tests and some doc",
"msg_date": "Fri, 28 Aug 2020 08:14:27 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: poc - possibility to write window function in PL languages"
},
{
"msg_contents": "pá 28. 8. 2020 v 8:14 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> st 26. 8. 2020 v 17:06 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>> Hi\n>>\n>> I simplified access to results of winfuncargs functions by proxy type\n>> \"typedvalue\". This type can hold any Datum value, and allows fast cast to\n>> basic buildin types or it can use (slower) generic cast functions. It is\n>> used in cooperation with a plpgsql assign statement that can choose the\n>> correct cast implicitly. When the winfuncarg function returns a value of\n>> the same type, that is expected by the variable on the left side of the\n>> assign statement, then (for basic types), the value is just copied without\n>> casts. With this proxy type is not necessary to have special statement for\n>> assigning returned value from winfuncargs functions, so source code of\n>> window function in plpgsql looks intuitive to me.\n>>\n>> Example - implementation of \"lag\" function in plpgsql\n>>\n>> create or replace function pl_lag(numeric)\n>> returns numeric as $$\n>> declare v numeric;\n>> begin\n>> v := get_input_value_in_partition(windowobject, 1, -1, 'seek_current',\n>> false);\n>> return v;\n>> end;\n>> $$ language plpgsql window;\n>>\n>> I think this code is usable, and I assign this patch to commitfest.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>\n> fix regress tests and some doc\n>\n\nupdate - refactored implementation typedvalue type",
"msg_date": "Fri, 28 Aug 2020 19:39:27 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: poc - possibility to write window function in PL languages"
},
{
"msg_contents": "Hi\n\nrebase\n\nRegards\n\nPavel",
"msg_date": "Fri, 1 Jan 2021 12:28:37 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: poc - possibility to write window function in PL languages"
},
{
"msg_contents": "Hi, Pavel:\nHappy New Year.\n\n+ command with clause <literal>WINDOW</literal>. The specific feature of\n+ this functions is a possibility to two special storages with\n\nthis functions -> this function\n\npossibility to two special storages: there is no verb.\n\n'store with stored one value': store is repeated.\n\n+ * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n\nIt would be better to change 2020 to 2021 in the new files.\n\nFor some functions, such as windowobject_get_func_arg_frame, it would be\nbetter to add comment explaining their purposes.\n\nFor estimate_partition_context_size():\n+ errmsg(\"size of value is greather than limit (1024\nbytes)\")));\n\nPlease include the value of typlen in the message. There is similar error\nmessage in the else block where value of size should be included.\n\n+ return *realsize;\n+ }\n+ else\n\nThe 'else' is not needed since the if block ends with return.\n\n+ size += size / 3;\n\nPlease add a comment for the choice of constant 3.\n\n+ /* by default we allocate 30 bytes */\n+ *realsize = 0;\n\nThe value 30 may not be accurate - from the caller:\n\n+ if (PG_ARGISNULL(2))\n+ minsize = VARLENA_MINSIZE;\n+ else\n+ minsize = PG_GETARG_INT32(2);\n\nVARLENA_MINSIZE is 32.\n\nCheers\n\nOn Fri, Jan 1, 2021 at 3:29 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> rebase\n>\n> Regards\n>\n> Pavel\n>\n>\n",
"msg_date": "Fri, 1 Jan 2021 09:58:43 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: poc - possibility to write window function in PL languages"
},
{
"msg_contents": "Hi\n\npá 1. 1. 2021 v 18:57 odesílatel Zhihong Yu <zyu@yugabyte.com> napsal:\n\n> Hi, Pavel:\n> Happy New Year.\n>\n> + command with clause <literal>WINDOW</literal>. The specific feature of\n> + this functions is a possibility to two special storages with\n>\n> this functions -> this function\n>\n> possibility to two special storages: there is no verb.\n>\n> 'store with stored one value': store is repeated.\n>\n> + * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n>\n> It would be better to change 2020 to 2021 in the new files.\n>\n\nfixed\n\n>\n> For some functions, such as windowobject_get_func_arg_frame, it would be\n> better to add comment explaining their purposes.\n>\n\nIt is commented before. These functions just call WinAPI functions\n\n/*\n * High level access function. These functions are wrappers for windows API\n * for PL languages based on usage WindowObjectProxy.\n */\n\n\n\n> For estimate_partition_context_size():\n> + errmsg(\"size of value is greather than limit (1024\n> bytes)\")));\n>\n> Please include the value of typlen in the message. There is similar error\n> message in the else block where value of size should be included.\n>\n> + return *realsize;\n> + }\n> + else\n>\n> The 'else' is not needed since the if block ends with return.\n>\n\nyes, but it is there for better readability (symmetry)\n\n>\n> + size += size / 3;\n>\n> Please add a comment for the choice of constant 3.\n>\n> + /* by default we allocate 30 bytes */\n> + *realsize = 0;\n>\n> The value 30 may not be accurate - from the caller:\n>\n> + if (PG_ARGISNULL(2))\n> + minsize = VARLENA_MINSIZE;\n> + else\n> + minsize = PG_GETARG_INT32(2);\n>\n> VARLENA_MINSIZE is 32.\n>\n> Cheers\n>\n> On Fri, Jan 1, 2021 at 3:29 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> rebase\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>\nI am sending updated patch\n\nThank you for comments\n\nRegards\n\nPavel",
"msg_date": "Mon, 4 Jan 2021 12:14:24 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: poc - possibility to write window function in PL languages"
},
{
"msg_contents": "Hi, Pavel:\nThanks for the update.\n\nI don't have other comment.\n\nCheers\n\nOn Mon, Jan 4, 2021 at 3:15 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> pá 1. 1. 2021 v 18:57 odesílatel Zhihong Yu <zyu@yugabyte.com> napsal:\n>\n>> Hi, Pavel:\n>> Happy New Year.\n>>\n>> + command with clause <literal>WINDOW</literal>. The specific feature of\n>> + this functions is a possibility to two special storages with\n>>\n>> this functions -> this function\n>>\n>> possibility to two special storages: there is no verb.\n>>\n>> 'store with stored one value': store is repeated.\n>>\n>> + * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n>>\n>> It would be better to change 2020 to 2021 in the new files.\n>>\n>\n> fixed\n>\n>>\n>> For some functions, such as windowobject_get_func_arg_frame, it would be\n>> better to add comment explaining their purposes.\n>>\n>\n> It is commented before. These functions just call WinAPI functions\n>\n> /*\n> * High level access function. These functions are wrappers for windows API\n> * for PL languages based on usage WindowObjectProxy.\n> */\n>\n>\n>\n>> For estimate_partition_context_size():\n>> + errmsg(\"size of value is greather than limit (1024\n>> bytes)\")));\n>>\n>> Please include the value of typlen in the message. 
There is similar error\n>> message in the else block where value of size should be included.\n>>\n>> + return *realsize;\n>> + }\n>> + else\n>>\n>> The 'else' is not needed since the if block ends with return.\n>>\n>\n> yes, but it is there for better readability (symmetry)\n>\n>>\n>> + size += size / 3;\n>>\n>> Please add a comment for the choice of constant 3.\n>>\n>> + /* by default we allocate 30 bytes */\n>> + *realsize = 0;\n>>\n>> The value 30 may not be accurate - from the caller:\n>>\n>> + if (PG_ARGISNULL(2))\n>> + minsize = VARLENA_MINSIZE;\n>> + else\n>> + minsize = PG_GETARG_INT32(2);\n>>\n>> VARLENA_MINSIZE is 32.\n>>\n>> Cheers\n>>\n>> On Fri, Jan 1, 2021 at 3:29 AM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>>> Hi\n>>>\n>>> rebase\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>\n> I am sending updated patch\n>\n> Thank you for comments\n>\n> Regards\n>\n> Pavel\n>\n",
"msg_date": "Mon, 4 Jan 2021 09:38:02 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: poc - possibility to write window function in PL languages"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> [ plpgsql-window-functions-20210104.patch.gz ]\n\nI spent some time looking at this patch. It would certainly be\nappealing to have some ability to write custom window functions\nwithout descending into C; but I'm not very happy about the details.\n\nI'm okay with the idea of having a special variable of a new pseudotype.\nThat's not exactly pretty, but it descends directly from how we handle\nthe arguments of trigger functions, so at least there's precedent.\nWhat's bugging me though is the \"typedvalue\" stuff. That seems like a\nconceptual mess, a performance loss, and a permanent maintenance time\nsink. To avoid performance complaints, eventually this hard-wired set\nof conversions would have to bloom to cover every built-in cast, and\nas for extension types, you're just out of luck.\n\nOne way to avoid that would be to declare the argument-fetching\nfunctions as polymorphics with a dummy argument that just provides\nthe expected result type. So users would write something like\n\ncreate function pl_lag(x numeric)\n ...\n v := get_input_value_in_partition(windowobject, x, 1, -1,\n 'seek_current', false);\n\nwhere the argument-fetching function is declared\n\n get_input_value_in_partition(windowobject, anyelement, int, ...)\n returns anyelement\n\nand internally it could verify that the n'th window function argument\nmatches the type of its second argument. While this could be made\nto work, it's kind of unsatisfying because the argument number \"1\" is\nso obviously redundant with the reference to \"x\". Ideally one should\nonly have to write \"x\". I don't quite see how to make that work,\nbut maybe there's a way?\n\nOn the whole though, I think your original idea of bespoke plpgsql\nsyntax is better, ie let's write something like\n\n GET WINDOW VALUE v := x AT PARTITION CURRENT(-1);\n\nand hide all the mechanism behind that. 
The reference to \"x\" is enough\nto provide the argument number and type, and the window object doesn't\nhave to be explicitly visible at all.\n\nYeah, this will mean that anybody who wants to provide equivalent\nfunctionality in some other PL will have to do more work. But it's\nnot like it was going to be zero effort for them before. Furthermore,\nit's not clear to me that other PLs would want to adopt your current\ndesign anyway. For example, I bet PL/R would like to somehow make\nwindow arguments map into vectors on the R side, but there's no chance\nof that with this SQL layer in between.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Jan 2021 18:09:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: poc - possibility to write window function in PL languages"
},
{
"msg_contents": "Hi\n\nso 16. 1. 2021 v 0:09 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > [ plpgsql-window-functions-20210104.patch.gz ]\n>\n> I spent some time looking at this patch. It would certainly be\n> appealing to have some ability to write custom window functions\n> without descending into C; but I'm not very happy about the details.\n>\n> I'm okay with the idea of having a special variable of a new pseudotype.\n> That's not exactly pretty, but it descends directly from how we handle\n> the arguments of trigger functions, so at least there's precedent.\n> What's bugging me though is the \"typedvalue\" stuff. That seems like a\n> conceptual mess, a performance loss, and a permanent maintenance time\n> sink. To avoid performance complaints, eventually this hard-wired set\n> of conversions would have to bloom to cover every built-in cast, and\n> as for extension types, you're just out of luck.\n>\n\nI invited typed values with an idea of larger usability. With this type we\ncan implement dynamic iteration over records better than now, when the\nfields of records should be cast to text or json before operation. With\nthis type I can hold typed value longer time and I can do some like:\n\nDECLARE var typedvalue;\n\nvar := fx(..);\nIF var IS OF integer THEN\n var_int := CAST(var AS int);\nELSEIF var IS OF date THEN\n var_date := CAST(var AS date);\nELSE\n var_text := CAST(var AS text);\nEND;\n\nSometimes (when you process some external data) this late (lazy) cast can\nbe better and allows you to use typed values. When I read external data,\nsometimes I don't know types of these data before reading. I would like to\ninject a possibility of more dynamic work with values and variables (but\nstill cleanly and safely). It should be more safe and faster than now, when\npeople should use the \"text\" type.\n\nBut I understand and I agree with your objections. 
Probably a lot of people\nwill use this type badly.\n\n\n\n> One way to avoid that would be to declare the argument-fetching\n> functions as polymorphics with a dummy argument that just provides\n> the expected result type. So users would write something like\n>\n> create function pl_lag(x numeric)\n> ...\n> v := get_input_value_in_partition(windowobject, x, 1, -1,\n> 'seek_current', false);\n>\n> where the argument-fetching function is declared\n>\n> get_input_value_in_partition(windowobject, anyelement, int, ...)\n> returns anyelement\n>\n> and internally it could verify that the n'th window function argument\n> matches the type of its second argument. While this could be made\n> to work, it's kind of unsatisfying because the argument number \"1\" is\n> so obviously redundant with the reference to \"x\". Ideally one should\n> only have to write \"x\". I don't quite see how to make that work,\n> but maybe there's a way?\n>\n> On the whole though, I think your original idea of bespoke plpgsql\n> syntax is better, ie let's write something like\n>\n> GET WINDOW VALUE v := x AT PARTITION CURRENT(-1);\n>\n> and hide all the mechanism behind that. The reference to \"x\" is enough\n> to provide the argument number and type, and the window object doesn't\n> have to be explicitly visible at all.\n>\n\nyes, this syntax looks well.\n\nThe second question is work with partition context value. This should be\nonly one value, and of only one but of any type per function. In this case\nwe cannot use GET statements. I had an idea of enhancing declaration. Some\nlike\n\nDECLARE\n pcx PARTITION CONTEXT (int); -- read partition context\nBEGIN\n pcx := 10; -- set partition context\n\nWhat do you think about it?\n\nRegards\n\nPavel\n\n\n\n\n\n\n\n\n\n\n> Yeah, this will mean that anybody who wants to provide equivalent\n> functionality in some other PL will have to do more work. But it's\n> not like it was going to be zero effort for them before. 
Furthermore,\n> it's not clear to me that other PLs would want to adopt your current\n> design anyway. For example, I bet PL/R would like to somehow make\n> window arguments map into vectors on the R side, but there's no chance\n> of that with this SQL layer in between.\n>\n> regards, tom lane\n>\n",
"msg_date": "Wed, 20 Jan 2021 09:11:33 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: poc - possibility to write window function in PL languages"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> The second question is work with partition context value. This should be\n> only one value, and of only one but of any type per function. In this case\n> we cannot use GET statements. I had an idea of enhancing declaration. Some\n> like\n\n> DECLARE\n> pcx PARTITION CONTEXT (int); -- read partition context\n> BEGIN\n> pcx := 10; -- set partition context\n\n> What do you think about it?\n\nUh, what? I don't understand what this \"partition context\" is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jan 2021 15:07:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: poc - possibility to write window function in PL languages"
},
{
    "msg_contents": "st 20. 1. 2021 v 21:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > The second question is work with partition context value. This should be\n> > only one value, and of only one but of any type per function. In this\n> case\n> > we cannot use GET statements. I had an idea of enhancing declaration.\n> Some\n> > like\n>\n> > DECLARE\n> > pcx PARTITION CONTEXT (int); -- read partition context\n> > BEGIN\n> > pcx := 10; -- set partition context\n>\n> > What do you think about it?\n>\n> Uh, what? I don't understand what this \"partition context\" is.\n>\n\nIt was my name for an access to window partition local memory -\nWinGetPartitionLocalMemory\n\nWe need some interface for this cache\n\nRegards\n\nPavel\n\n\n\n\n\n\n\n>\n> regards, tom lane\n>\n",
"msg_date": "Wed, 20 Jan 2021 21:14:27 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: poc - possibility to write window function in PL languages"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> st 20. 1. 2021 v 21:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> Uh, what? I don't understand what this \"partition context\" is.\n\n> It was my name for an access to window partition local memory -\n> WinGetPartitionLocalMemory\n\nAh.\n\n> We need some interface for this cache\n\nI'm not convinced we need to expose that, or that it'd be very\nsatisfactory to plpgsql users if we did. The fact that it's fixed-size\nand initializes to zeroes are both things that are okay for C programmers\nbut might be awkward to deal with in plpgsql code. At the very least it\nwould greatly constrain what data types you could usefully store.\n\nSo I'd be inclined to leave that out, at least for the first version.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jan 2021 15:32:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: poc - possibility to write window function in PL languages"
},
{
    "msg_contents": "st 20. 1. 2021 v 21:32 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > st 20. 1. 2021 v 21:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> Uh, what? I don't understand what this \"partition context\" is.\n>\n> > It was my name for an access to window partition local memory -\n> > WinGetPartitionLocalMemory\n>\n> Ah.\n>\n> > We need some interface for this cache\n>\n> I'm not convinced we need to expose that, or that it'd be very\n> satisfactory to plpgsql users if we did. The fact that it's fixed-size\n> and initializes to zeroes are both things that are okay for C programmers\n> but might be awkward to deal with in plpgsql code. At the very least it\n> would greatly constrain what data types you could usefully store.\n>\n> So I'd be inclined to leave that out, at least for the first version.\n>\n\nI think this functionality is relatively important. If somebody tries to\nimplement own window function, then he starts with some variation of the\nrow_num function.\n\nWe can support only types of fixed length to begin.\n\nRegards\n\nPavel\n\n\n\n>\n> regards, tom lane\n>\n",
"msg_date": "Wed, 20 Jan 2021 22:03:51 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: poc - possibility to write window function in PL languages"
},
{
    "msg_contents": "st 20. 1. 2021 v 21:14 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> st 20. 1. 2021 v 21:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> > The second question is work with partition context value. This should be\n>> > only one value, and of only one but of any type per function. In this\n>> case\n>> > we cannot use GET statements. I had an idea of enhancing declaration.\n>> Some\n>> > like\n>>\n>> > DECLARE\n>> > pcx PARTITION CONTEXT (int); -- read partition context\n>> > BEGIN\n>> > pcx := 10; -- set partition context\n>>\n>> > What do you think about it?\n>>\n>> Uh, what? I don't understand what this \"partition context\" is.\n>>\n>\n> It was my name for an access to window partition local memory -\n> WinGetPartitionLocalMemory\n>\n> We need some interface for this cache\n>\n\nI have to think more about declarative syntax. When I try to transform our\nWindowObject API directly, then it looks like Cobol. It needs a different\nconcept to be user friendly.\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n>\n>\n>>\n>> regards, tom lane\n>>\n>\n",
"msg_date": "Wed, 27 Jan 2021 10:58:02 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: poc - possibility to write window function in PL languages"
}
] |
[
{
    "msg_contents": "Hi,\n\nCurrently lwlock acquisition, whether direct, or via the LockBuffer()\nwrapper, has a mode argument. I don't like that much, for three\nreasons:\n\n1) I've tried to add annotations for static analyzers to help with\n locking correctness. The ones I looked at don't support annotating\n shared/exclusive locks where the mode is specified as a variable.\n2) When doing performance analysis it's quite useful to be able to see\n the difference between exclusive and shared acquisition. Typically all\n one has access to is the symbol name though.\n3) I don't like having the unnecessary branches for the lock mode, after\n all a lot of the lock protected code is fairly hot. It's pretty\n unnecessary because the caller almost (?) always uses a static lock\n mode.\n\nTherefore I'd like to replace the current lock functions with ones where\nthe lock mode is specified as part of the function name rather than an\nargument.\n\nTo avoid unnecessary backward compat pains it seems best to first\nintroduce compat wrappers using the current signature, and then\nsubsequently replace in-core callers with the direct calls.\n\n\nThere are several harder calls though:\n1) All of the above would benefit from lock release also being annotated\n with the lock mode. That'd be a lot more invasive however. I think\n it'd be best to add explicit functions (which would just assert\n held_lwlocks[] being correct), but keep a wrapper that determines the\n current lock level using held_lwlocks.\n\n2) For performance it'd be nice if we could move the BufferIsLocal()\n checks for LockBuffer* into the caller. Even better would be if we\n made them inline wrappers around\n LWLockAcquire(Shared|Exclusive). However, as the latter would require\n making BufferDescriptorGetContentLock() available in bufmgr.h I think\n that's not worth it. So I think we'd be best off having\n LockBufferExclusive() be a static inline wrapper doing the\n BufferIsLocal() check and then calling LockBufferExclusiveImpl\n which'd do the LWLockAcquireExclusive().\n\nThoughts?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Aug 2020 15:34:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "LWLockAcquire and LockBuffer mode argument"
},
{
"msg_contents": "On Mon, Aug 24, 2020 at 6:35 PM Andres Freund <andres@anarazel.de> wrote:\n> Thoughts?\n\nThis is likely to cause a certain amount of annoyance to many\nPostgreSQL developers, but if you have evidence that it will improve\nperformance significantly, I think it's very reasonable to do it\nanyway. However, if we do it all in a backward-compatible way as you\npropose, then we're likely to keep reintroducing code that does it the\nold way for a really long time. I'm not sure that actually makes a lot\nof sense. It might be better to just bite the bullet and make a hard\nbreak.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 25 Aug 2020 13:59:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LWLockAcquire and LockBuffer mode argument"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-25 13:59:35 -0400, Robert Haas wrote:\n> On Mon, Aug 24, 2020 at 6:35 PM Andres Freund <andres@anarazel.de> wrote:\n> > Thoughts?\n> \n> This is likely to cause a certain amount of annoyance to many\n> PostgreSQL developers, but if you have evidence that it will improve\n> performance significantly, I think it's very reasonable to do it\n> anyway.\n\nI don't think it'll be a \"significant\" performance benefit directly. It\nappears to be measurable, but I think to reach significant performance\nimprovements it'll take a while and it'll come from profilers and other\ntools working better.\n\n> However, if we do it all in a backward-compatible way as you propose,\n> then we're likely to keep reintroducing code that does it the old way\n> for a really long time. I'm not sure that actually makes a lot of\n> sense. It might be better to just bite the bullet and make a hard\n> break.\n\nIt seems easy enough to slap a compiler \"enforced\" deprecation warning\non the new compat version, in master only. Seems unnecessary to make\nlife immediately harder for extensions authors desiring cross-version\ncompatibility.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 25 Aug 2020 11:17:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: LWLockAcquire and LockBuffer mode argument"
},
{
"msg_contents": "On Tue, Aug 25, 2020 at 2:17 PM Andres Freund <andres@anarazel.de> wrote:\n> It seems easy enough to slap a compiler \"enforced\" deprecation warning\n> on the new compat version, in master only. Seems unnecessary to make\n> life immediately harder for extensions authors desiring cross-version\n> compatibility.\n\nI don't know exactly how you'd go about implementing that, but I am\nnot against compatibility. I *am* against coding rules that require a\nlot of manual enforcement.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 25 Aug 2020 14:22:28 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LWLockAcquire and LockBuffer mode argument"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-25 14:22:28 -0400, Robert Haas wrote:\n> On Tue, Aug 25, 2020 at 2:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > It seems easy enough to slap a compiler \"enforced\" deprecation warning\n> > on the new compat version, in master only. Seems unnecessary to make\n> > life immediately harder for extensions authors desiring cross-version\n> > compatibility.\n> \n> I don't know exactly how you'd go about implementing that, but I am\n> not against compatibility. I *am* against coding rules that require a\n> lot of manual enforcement.\n\n#if I_AM_GCC_OR_CLANG\n#define pg_attribute_deprecated __attribute__((deprecated))\n#elif I_AM_MSVC\n#define pg_attribute_deprecated __declspec(deprecated)\n#else\n#define pg_attribute_deprecated\n#endif\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 25 Aug 2020 11:30:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: LWLockAcquire and LockBuffer mode argument"
},
{
"msg_contents": "On Mon, Aug 24, 2020 at 3:35 PM Andres Freund <andres@anarazel.de> wrote:\n> To avoid unnecessary backward compat pains it seems best to first\n> introduce compat wrappers using the current signature, and then\n> subsequently replace in-core callers with the direct calls.\n\nI like the idea of doing this, purely to make profiler output easier\nto interpret.\n\nPassing a shared-or-exclusive flag is kind of a natural thing to do\nwithin code like _bt_search(), where we sometimes want to\nexclusive-lock the leaf level page but not the internal pages that we\ndescend through first. Fortunately we can handle the flag inside the\nexisting nbtree wrapper functions quite easily -- the recently added\n_bt_lockbuf() can test the flag directly. We already have\nnbtree-private flags (BT_READ and BT_WRITE) that we can continue to\nuse after the old interface is fully deprecated.\n\nMore generally, it probably is kind of natural to have a flag like\nBUFFER_LOCK_SHARE/BUFFER_LOCK_EXCLUSIVE (though not like\nBUFFER_LOCK_UNLOCK) within index access methods. But I think that\nthere are several good reasons to add something equivalent to\n_bt_lockbuf() to all index access methods.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 25 Aug 2020 11:30:19 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: LWLockAcquire and LockBuffer mode argument"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n\n> Hi,\n>\n> On 2020-08-25 13:59:35 -0400, Robert Haas wrote:\n>\n>> However, if we do it all in a backward-compatible way as you propose,\n>> then we're likely to keep reintroducing code that does it the old way\n>> for a really long time. I'm not sure that actually makes a lot of\n>> sense. It might be better to just bite the bullet and make a hard\n>> break.\n>\n> It seems easy enough to slap a compiler \"enforced\" deprecation warning\n> on the new compat version, in master only. Seems unnecessary to make\n> life immediately harder for extensions authors desiring cross-version\n> compatibility.\n\nWould it be possible to make the compat versions only available when\nbuilding extensions, but not to core code?\n\nIn Perl we do that a lot, using #ifndef PERL_CORE.\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl\n\n\n",
"msg_date": "Wed, 26 Aug 2020 12:47:06 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: LWLockAcquire and LockBuffer mode argument"
},
{
"msg_contents": "On Wed, Aug 26, 2020 at 7:47 AM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> Would it be possible to make the compat versions only available when\n> building extensions, but not to core code?\n\nI think that would be good if we can do it. We could even have it\ninside #ifdef LWLOCK_API_COMPAT, and extension authors who want the\ncompatibility interface can define that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 26 Aug 2020 11:41:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LWLockAcquire and LockBuffer mode argument"
},
{
    "msg_contents": "On 2020-Aug-26, Robert Haas wrote:\n\n> On Wed, Aug 26, 2020 at 7:47 AM Dagfinn Ilmari Mannsåker\n> <ilmari@ilmari.org> wrote:\n> > Would it be possible to make the compat versions only available when\n> > building extensions, but not to core code?\n> \n> I think that would be good if we can do it. We could even have it\n> inside #ifdef LWLOCK_API_COMPAT, and extension authors who want the\n> compatibility interface can define that.\n\nWe had ENABLE_LIST_COMPAT available for 16 years; see commits\nd0b4399d81f3, 72b6ad631338, 1cff1b95ab6d. We could do the same here.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 26 Aug 2020 12:27:19 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: LWLockAcquire and LockBuffer mode argument"
}
] |
[
{
    "msg_contents": "Hi all,\n\nWhile testing with DTrace, I realized we acquire\nReplicationSlotControl lwlock at some places even when\nmax_replication_slots is set to 0. For instance, we call\nReplicationSlotCleanup() within PostgresMain() when an error happens\nand acquire ReplicationSlotControl lwlock.\n\nThe attached patch fixes some functions so that we quickly return if\nmax_replication_slots is set to 0.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 25 Aug 2020 11:38:39 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Avoid unnecessary ReplicationSlotControl lwlock acquistion"
},
{
"msg_contents": "Hi, \n\nOn August 24, 2020 7:38:39 PM PDT, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>Hi all,\n>\n>While testing with DTrace, I realized we acquire\n>ReplicationSlotControl lwlock at some places even when\n>max_replication_slots is set to 0. For instance, we call\n>ReplicationSlotCleanup() within PostgresMian() when an error happens\n>and acquire ReplicationSlotControl lwlock.\n>\n>The attached patch fixes some functions so that we quickly return if\n>max_replication_slots is set to 0.\n\nWhy is it worth doing so?\n\nRegards,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Mon, 24 Aug 2020 19:42:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unnecessary ReplicationSlotControl lwlock acquistion"
},
{
"msg_contents": "On Tue, 25 Aug 2020 at 11:42, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On August 24, 2020 7:38:39 PM PDT, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> >Hi all,\n> >\n> >While testing with DTrace, I realized we acquire\n> >ReplicationSlotControl lwlock at some places even when\n> >max_replication_slots is set to 0. For instance, we call\n> >ReplicationSlotCleanup() within PostgresMian() when an error happens\n> >and acquire ReplicationSlotControl lwlock.\n> >\n> >The attached patch fixes some functions so that we quickly return if\n> >max_replication_slots is set to 0.\n>\n> Why is it worth doing so?\n\nI think we can avoid unnecessary overhead caused by acquiring and\nreleasing that lwlock itself. The functions modified by this patch are\ncalled during error cleanup or checkpoints. For the former case,\nsince it’s not a commit path the benefit might not be large on common\nworkload but it might help to reduce the latency on a workload whose\nabort rate is relatively high. Also looking at other functions in\nslot.c, other functions also do so. I think these are also for\npreventing unnecessary overhead.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 25 Aug 2020 12:00:47 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unnecessary ReplicationSlotControl lwlock acquistion"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-25 12:00:47 +0900, Masahiko Sawada wrote:\n> On Tue, 25 Aug 2020 at 11:42, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On August 24, 2020 7:38:39 PM PDT, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> > >Hi all,\n> > >\n> > >While testing with DTrace, I realized we acquire\n> > >ReplicationSlotControl lwlock at some places even when\n> > >max_replication_slots is set to 0. For instance, we call\n> > >ReplicationSlotCleanup() within PostgresMian() when an error happens\n> > >and acquire ReplicationSlotControl lwlock.\n> > >\n> > >The attached patch fixes some functions so that we quickly return if\n> > >max_replication_slots is set to 0.\n> >\n> > Why is it worth doing so?\n> \n> I think we can avoid unnecessary overhead caused by acquiring and\n> releasing that lwlock itself. The functions modified by this patch are\n> called during error cleanup or checkpoints. For the former case,\n> since it’s not a commit path the benefit might not be large on common\n> workload but it might help to reduce the latency on a workload whose\n> abort rate is relatively high. Also looking at other functions in\n> slot.c, other functions also do so. I think these are also for\n> preventing unnecessary overhead.\n\nI don't see how these could matter. The error checking path isn't\nreached if no slot is acquired. One uncontended lwlock acquisition isn't\nmeasurable compared to the cost of a checkpoint.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Aug 2020 21:13:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unnecessary ReplicationSlotControl lwlock acquistion"
},
{
"msg_contents": "On Tue, 25 Aug 2020 at 13:13, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-08-25 12:00:47 +0900, Masahiko Sawada wrote:\n> > On Tue, 25 Aug 2020 at 11:42, Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On August 24, 2020 7:38:39 PM PDT, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >Hi all,\n> > > >\n> > > >While testing with DTrace, I realized we acquire\n> > > >ReplicationSlotControl lwlock at some places even when\n> > > >max_replication_slots is set to 0. For instance, we call\n> > > >ReplicationSlotCleanup() within PostgresMian() when an error happens\n> > > >and acquire ReplicationSlotControl lwlock.\n> > > >\n> > > >The attached patch fixes some functions so that we quickly return if\n> > > >max_replication_slots is set to 0.\n> > >\n> > > Why is it worth doing so?\n> >\n> > I think we can avoid unnecessary overhead caused by acquiring and\n> > releasing that lwlock itself. The functions modified by this patch are\n> > called during error cleanup or checkpoints. For the former case,\n> > since it’s not a commit path the benefit might not be large on common\n> > workload but it might help to reduce the latency on a workload whose\n> > abort rate is relatively high. Also looking at other functions in\n> > slot.c, other functions also do so. I think these are also for\n> > preventing unnecessary overhead.\n>\n> I don't see how these could matter. The error checking path isn't\n> reached if no slot is acquired.\n\nI think we always call ReplicationSlotCleanup() which acquires the\nlwlock during error cleanup even if no slot is acquired.\n\n> One uncontended lwlock acquisition isn't\n> measurable compared to the cost of a checkpoint.\n\nThat's true.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 25 Aug 2020 13:31:24 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unnecessary ReplicationSlotControl lwlock acquistion"
}
] |
[
{
"msg_contents": "Here is a series of patches to remove some unused function parameters. \nIn each case, the need for them was removed by some other code changes \nover time but the unusedness was not noticed. I have included a \nreference to when they became unused in each case.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 25 Aug 2020 07:47:17 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "some unused parameters cleanup"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Here is a series of patches to remove some unused function parameters. \n> In each case, the need for them was removed by some other code changes \n> over time but the unusedness was not noticed. I have included a \n> reference to when they became unused in each case.\n\nFor some of these, there's an argument for keeping the unused parameter\nfor consistency with sibling functions that do use it. Not sure how\nimportant that is, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 Aug 2020 12:59:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: some unused parameters cleanup"
},
{
"msg_contents": "On Tue, Aug 25, 2020 at 12:59:31PM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > Here is a series of patches to remove some unused function parameters. \n> > In each case, the need for them was removed by some other code changes \n> > over time but the unusedness was not noticed. I have included a \n> > reference to when they became unused in each case.\n> \n> For some of these, there's an argument for keeping the unused parameter\n> for consistency with sibling functions that do use it. Not sure how\n> important that is, though.\n\nI think if they are kept for that reason, we should document that so we\nknow not to revisit this issue for them.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 25 Aug 2020 13:42:29 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: some unused parameters cleanup"
},
{
"msg_contents": "On 8/25/20 7:42 PM, Bruce Momjian wrote:\n> On Tue, Aug 25, 2020 at 12:59:31PM -0400, Tom Lane wrote:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> Here is a series of patches to remove some unused function parameters.\n>>> In each case, the need for them was removed by some other code changes\n>>> over time but the unusedness was not noticed. I have included a\n>>> reference to when they became unused in each case.\n>>\n>> For some of these, there's an argument for keeping the unused parameter\n>> for consistency with sibling functions that do use it. Not sure how\n>> important that is, though.\n> \n> I think if they are kept for that reason, we should document that so we\n> know not to revisit this issue for them.\n\n+1\n\nThat way we can avoid new people discovering the same unused parameters \nand then submitting patches for them.\n\nAndreas\n\n\n\n",
"msg_date": "Tue, 25 Aug 2020 19:50:55 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: some unused parameters cleanup"
},
{
"msg_contents": "\n\nOn 2020/08/26 2:50, Andreas Karlsson wrote:\n> On 8/25/20 7:42 PM, Bruce Momjian wrote:\n>> On Tue, Aug 25, 2020 at 12:59:31PM -0400, Tom Lane wrote:\n>>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>>> Here is a series of patches to remove some unused function parameters.\n>>>> In each case, the need for them was removed by some other code changes\n>>>> over time but the unusedness was not noticed. I have included a\n>>>> reference to when they became unused in each case.\n>>>\n>>> For some of these, there's an argument for keeping the unused parameter\n>>> for consistency with sibling functions that do use it. Not sure how\n>>> important that is, though.\n>>\n>> I think if they are kept for that reason, we should document that so we\n>> know not to revisit this issue for them.> \n> +1\n> \n> That way we can avoid new people discovering the same unused parameters and then submitting patches for them.\n\nI agree that some parameters were kept for that reason,\nbut ISTM that also some were kept just accidentally.\nFor example, regarding unused parameter \"encoding\" that 0010 patch\ntries to remove, commit f0d6f20278 got rid of the use of \"encoding\"\nfrom generate_normalized_query() but ISTM that it just forgot to\nremove that parameter.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 26 Aug 2020 10:26:23 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: some unused parameters cleanup"
},
{
"msg_contents": "On 2020-08-25 18:59, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Here is a series of patches to remove some unused function parameters.\n>> In each case, the need for them was removed by some other code changes\n>> over time but the unusedness was not noticed. I have included a\n>> reference to when they became unused in each case.\n> \n> For some of these, there's an argument for keeping the unused parameter\n> for consistency with sibling functions that do use it. Not sure how\n> important that is, though.\n\nI had meant to exclude cases like this from this patch set. If you see \na case like this in *this* patch set, please point it out.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 26 Aug 2020 06:38:52 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: some unused parameters cleanup"
},
{
"msg_contents": "On Wed, Aug 26, 2020 at 06:38:52AM +0200, Peter Eisentraut wrote:\n> I had meant to exclude cases like this from this patch set. If you see a\n> case like this in *this* patch set, please point it out.\n\nLast time I looked at that a lot of parameters are kept around as a\nmatter of symmetry with siblings, like tablecmds.c. FWIW:\nhttps://www.postgresql.org/message-id/20190130073317.GP3121@paquier.xyz\n\nSaying that, I can see that you have been careful here and I don't see\nanything like that in most of the changes you are proposing here. You\ncould say that for findNamespace() or _moveBefore() perhaps, but there\nare also some routines not making use of an Archive. So this cleanup\nlooks fine to me.\n--\nMichael",
"msg_date": "Wed, 26 Aug 2020 17:11:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: some unused parameters cleanup"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-08-25 18:59, Tom Lane wrote:\n>> For some of these, there's an argument for keeping the unused parameter\n>> for consistency with sibling functions that do use it. Not sure how\n>> important that is, though.\n\n> I had meant to exclude cases like this from this patch set. If you see \n> a case like this in *this* patch set, please point it out.\n\nI'd been thinking specifically of the changes in pg_backup_archiver.c.\nBut now that I look around a bit further, there's already very little\nconsistency in that file about whether to pass the ArchiveHandle* pointer\neverywhere. So no further objection here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Aug 2020 09:32:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: some unused parameters cleanup"
}
] |
[
{
"msg_contents": "A user tried to use the cracklib build-time option of the passwordcheck \nmodule. This failed, as it turned out because there was no dictionary \ninstalled in the right place, but the error was not properly reported, \nbecause the existing code just throws away the error message from \ncracklib. Attached is a patch that changes this by logging any error \nmessage returned from the cracklib call.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 25 Aug 2020 12:20:21 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "passwordcheck: Log cracklib diagnostics"
},
{
"msg_contents": "> On 25 Aug 2020, at 12:20, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> A user tried to use the cracklib build-time option of the passwordcheck module. This failed, as it turned out because there was no dictionary installed in the right place, but the error was not properly reported, because the existing code just throws away the error message from cracklib. Attached is a patch that changes this by logging any error message returned from the cracklib call.\n\n+1 on this, it's also in line with the example documentation from cracklib.\nThe returned error is potentially a bit misleading now, as it might say claim\nthat a strong password is easily cracked if the dictionary fails load. Given\nthat there is no way to distinguish between the class of returned errors it's\nhard to see how we can do better though.\n\nWhile poking at this, we might as well update the docs to point to the right\nURL for CrackLib as it moved from Sourceforge five years ago. The attached\ndiff fixes that.\n\ncheers ./daniel",
"msg_date": "Tue, 25 Aug 2020 13:48:39 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: passwordcheck: Log cracklib diagnostics"
},
{
"msg_contents": "On Tue, 2020-08-25 at 13:48 +0200, Daniel Gustafsson wrote:\n> > On 25 Aug 2020, at 12:20, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> > \n> > A user tried to use the cracklib build-time option of the passwordcheck module. This failed, as it turned out because there was no dictionary installed in the right place, but the error was not\n> > properly reported, because the existing code just throws away the error message from cracklib. Attached is a patch that changes this by logging any error message returned from the cracklib call.\n> \n> +1 on this, it's also in line with the example documentation from cracklib.\n> The returned error is potentially a bit misleading now, as it might say claim\n> that a strong password is easily cracked if the dictionary fails load. Given\n> that there is no way to distinguish between the class of returned errors it's\n> hard to see how we can do better though.\n> \n> While poking at this, we might as well update the docs to point to the right\n> URL for CrackLib as it moved from Sourceforge five years ago. The attached\n> diff fixes that.\n\n+1 on both patches.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 25 Aug 2020 15:32:18 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: passwordcheck: Log cracklib diagnostics"
},
{
"msg_contents": "On 2020-08-25 15:32, Laurenz Albe wrote:\n> On Tue, 2020-08-25 at 13:48 +0200, Daniel Gustafsson wrote:\n>>> On 25 Aug 2020, at 12:20, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>>>\n>>> A user tried to use the cracklib build-time option of the passwordcheck module. This failed, as it turned out because there was no dictionary installed in the right place, but the error was not\n>>> properly reported, because the existing code just throws away the error message from cracklib. Attached is a patch that changes this by logging any error message returned from the cracklib call.\n>>\n>> +1 on this, it's also in line with the example documentation from cracklib.\n>> The returned error is potentially a bit misleading now, as it might say claim\n>> that a strong password is easily cracked if the dictionary fails load. Given\n>> that there is no way to distinguish between the class of returned errors it's\n>> hard to see how we can do better though.\n>>\n>> While poking at this, we might as well update the docs to point to the right\n>> URL for CrackLib as it moved from Sourceforge five years ago. The attached\n>> diff fixes that.\n> \n> +1 on both patches.\n\nPushed both patches, thanks.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 28 Aug 2020 08:26:00 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: passwordcheck: Log cracklib diagnostics"
}
] |
[
{
"msg_contents": "I noticed today there are a few places where we use bms_num_memebers()\nwhere we really should be using bms_membership(). These are not bugs,\nthey're mostly just bad examples to leave laying around, at best, and\na small performance penalty, at worst.\n\nUnless there are any objections, I plan to push this to master only in\nabout 10 hours time.\n\nDavid",
"msg_date": "Wed, 26 Aug 2020 00:51:37 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix a couple of misuages of bms_num_members()"
},
{
"msg_contents": "On Wed, Aug 26, 2020 at 12:51:37AM +1200, David Rowley wrote:\n>I noticed today there are a few places where we use bms_num_memebers()\n>where we really should be using bms_membership(). These are not bugs,\n>they're mostly just bad examples to leave laying around, at best, and\n>a small performance penalty, at worst.\n>\n>Unless there are any objections, I plan to push this to master only in\n>about 10 hours time.\n>\n\nSeems OK to me. Thanks.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 25 Aug 2020 15:18:02 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix a couple of misuages of bms_num_members()"
},
{
"msg_contents": "On Wed, 26 Aug 2020 at 01:18, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Wed, Aug 26, 2020 at 12:51:37AM +1200, David Rowley wrote:\n> >I noticed today there are a few places where we use bms_num_memebers()\n> >where we really should be using bms_membership(). These are not bugs,\n> >they're mostly just bad examples to leave laying around, at best, and\n> >a small performance penalty, at worst.\n> >\n> >Unless there are any objections, I plan to push this to master only in\n> >about 10 hours time.\n> >\n>\n> Seems OK to me. Thanks.\n\nThanks for having a look. Pushed.\n\nDavid\n\n\n",
"msg_date": "Wed, 26 Aug 2020 10:52:49 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix a couple of misuages of bms_num_members()"
}
] |
[
{
"msg_contents": "Hi Tom,\n\nPer Coverity.\n\nThe function parse_hba_auth_op at (src/backend/libpq/hba.c) allows resource\nleaks when called\nby the function parse_hba_line, with parameters LOG and DEBUG3 levels.\n\nThe function SplitGUCList (src/bin/pg_dump/dumputils.c) allows even\nreturning FALSE,\nthat namelist list is not empty and as memory allocated by pg_malloc.\n\nThe simplest solution is free namelist, even when calling ereport, why the\nlevel can be\nLOG or DEBUG3.\n\nregards,\nRanier Vilela\n\nPS. Are two SplitGUCList in codebase.\n1. SplitGUCList (src/bin/pg_dump/dumputils.c)\n2. SplitGUCList (src/backend/utils/adt/varlena.c)",
"msg_date": "Tue, 25 Aug 2020 10:20:07 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Resource leaks (src/backend/libpq/hba.c)"
},
{
"msg_contents": "At Tue, 25 Aug 2020 10:20:07 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi Tom,\n> \n> Per Coverity.\n> \n> The function parse_hba_auth_op at (src/backend/libpq/hba.c) allows resource\n> leaks when called\n> by the function parse_hba_line, with parameters LOG and DEBUG3 levels.\n> The function SplitGUCList (src/bin/pg_dump/dumputils.c) allows even\n> returning FALSE,\n> that namelist list is not empty and as memory allocated by pg_malloc.\n\nAs you know, there are two implementations of the function. One that\nuses pg_malloc is used in pg_dump and the returned char *namelist is\nalways pg_free'd after use. The other that returns a pg_list, and the\nreturned list is reclaimed by MemoryContextDelete at callers\n(concretely load_hba and fill_hba_view). Indeed they share the same\nname but have different signatures so the two are statically\ndistinguishable but Coverity seems failing to do so. You may need to\navoid feeding the whole source tree to Coverity at once.\n\nAnyway this is a very common style in the PostgreSQL code so I\nrecommend to verify the outcome from such tools against the actual\ncode.\n\n> The simplest solution is free namelist, even when calling ereport, why the\n> level can be\n> LOG or DEBUG3.\n\nSo we don't need to do anything there. Rather we can remove the\nexisting list_free(parsed_servers) in parse_hba_auth_opt.\n\n> regards,\n> Ranier Vilela\n> \n> PS. Are two SplitGUCList in codebase.\n> 1. SplitGUCList (src/bin/pg_dump/dumputils.c)\n> 2. SplitGUCList (src/backend/utils/adt/varlena.c)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 26 Aug 2020 11:02:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Resource leaks (src/backend/libpq/hba.c)"
},
{
"msg_contents": "Em ter., 25 de ago. de 2020 às 23:02, Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com> escreveu:\n\n> At Tue, 25 Aug 2020 10:20:07 -0300, Ranier Vilela <ranier.vf@gmail.com>\n> wrote in\n> > Hi Tom,\n> >\n> > Per Coverity.\n> >\n> > The function parse_hba_auth_op at (src/backend/libpq/hba.c) allows\n> resource\n> > leaks when called\n> > by the function parse_hba_line, with parameters LOG and DEBUG3 levels.\n> > The function SplitGUCList (src/bin/pg_dump/dumputils.c) allows even\n> > returning FALSE,\n> > that namelist list is not empty and as memory allocated by pg_malloc.\n>\n> As you know, there are two implementations of the function. One that\n> uses pg_malloc is used in pg_dump and the returned char *namelist is\n> always pg_free'd after use. The other that returns a pg_list, and the\n> returned list is reclaimed by MemoryContextDelete at callers\n> (concretely load_hba and fill_hba_view). Indeed they share the same\n> name but have different signatures so the two are statically\n> distinguishable but Coverity seems failing to do so. You may need to\n> avoid feeding the whole source tree to Coverity at once.\n>\nYes, thanks for the hit.\n\n>\n> Anyway this is a very common style in the PostgreSQL code so I\n> recommend to verify the outcome from such tools against the actual\n> code.\n>\n Ok.\n\n\n> > The simplest solution is free namelist, even when calling ereport, why\n> the\n> > level can be\n> > LOG or DEBUG3.\n>\n> So we don't need to do anything there. Rather we can remove the\n> existing list_free(parsed_servers) in parse_hba_auth_opt.\n>\nIt would be good, the call helped to confuse.\n\nVery thanks, for the explanation.\n\nRanier Vilela\n\nEm ter., 25 de ago. 
de 2020 às 23:02, Kyotaro Horiguchi <horikyota.ntt@gmail.com> escreveu:At Tue, 25 Aug 2020 10:20:07 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi Tom,\n> \n> Per Coverity.\n> \n> The function parse_hba_auth_op at (src/backend/libpq/hba.c) allows resource\n> leaks when called\n> by the function parse_hba_line, with parameters LOG and DEBUG3 levels.\n> The function SplitGUCList (src/bin/pg_dump/dumputils.c) allows even\n> returning FALSE,\n> that namelist list is not empty and as memory allocated by pg_malloc.\n\nAs you know, there are two implementations of the function. One that\nuses pg_malloc is used in pg_dump and the returned char *namelist is\nalways pg_free'd after use. The other that returns a pg_list, and the\nreturned list is reclaimed by MemoryContextDelete at callers\n(concretely load_hba and fill_hba_view). Indeed they share the same\nname but have different signatures so the two are statically\ndistinguishable but Coverity seems failing to do so. You may need to\navoid feeding the whole source tree to Coverity at once.Yes, thanks for the hit. \n\nAnyway this is a very common style in the PostgreSQL code so I\nrecommend to verify the outcome from such tools against the actual\ncode. Ok.\n\n> The simplest solution is free namelist, even when calling ereport, why the\n> level can be\n> LOG or DEBUG3.\n\nSo we don't need to do anything there. Rather we can remove the\nexisting list_free(parsed_servers) in parse_hba_auth_opt.It would be good, the call helped to confuse. Very thanks, for the explanation.Ranier Vilela",
"msg_date": "Tue, 25 Aug 2020 23:14:56 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Resource leaks (src/backend/libpq/hba.c)"
}
] |
[
{
"msg_contents": "The USE_OPENSSL_RANDOM macro is defined when OpenSSL is used as a randomness\nprovider, but the implementation of strong randomness is guarded by USE_OPENSSL\nin most places. This is technically the same thing today, but it seems\nhygienic to use the appropriate macro in case we ever want to allow OS\nrandomness together with OpenSSL or something similar (or just make git grep\neasier which is my itch to scratch with this).\n\nThe attached moves all invocations under the correct guards. RAND_poll() in\nfork_process.c needs to happen for both OpenSSL and OpenSSL random, thus the\ncheck for both.\n\ncheers ./daniel",
"msg_date": "Tue, 25 Aug 2020 15:52:14 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Tue, Aug 25, 2020 at 03:52:14PM +0200, Daniel Gustafsson wrote:\n> The USE_OPENSSL_RANDOM macro is defined when OpenSSL is used as a randomness\n> provider, but the implementation of strong randomness is guarded by USE_OPENSSL\n> in most places. This is technically the same thing today, but it seems\n> hygienic to use the appropriate macro in case we ever want to allow OS\n> randomness together with OpenSSL or something similar (or just make git grep\n> easier which is my itch to scratch with this).\n\n@@ -24,7 +24,7 @@\n #include <unistd.h>\n #include <sys/time.h>\n\n-#ifdef USE_OPENSSL\n+#ifdef USE_OPENSSL_RANDOM\n #include <openssl/rand.h>\n #endif\nI agree that this makes the header declarations more consistent with\nWIN32.\n\n> The attached moves all invocations under the correct guards. RAND_poll() in\n> fork_process.c needs to happen for both OpenSSL and OpenSSL random, thus the\n> check for both.\n\nYeah, it could be possible that somebody still calls RAND_bytes() or\nsimilar without going through pg_strong_random(), so we still need to\nuse USE_OPENSSL after forking. Per this argument, I am not sure I see\nthe point of the change in fork_process.c as it seems to me that \nUSE_OPENSSL_RANDOM should only be tied to pg_strong_random.c, and\nyou'd still get a compilation failure if trying to use\nUSE_OPENSSL_RANDOM without --with-openssl.\n--\nMichael",
"msg_date": "Wed, 26 Aug 2020 16:56:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "> On 26 Aug 2020, at 09:56, Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Aug 25, 2020 at 03:52:14PM +0200, Daniel Gustafsson wrote:\n\n>> The attached moves all invocations under the correct guards. RAND_poll() in\n>> fork_process.c needs to happen for both OpenSSL and OpenSSL random, thus the\n>> check for both.\n> \n> Yeah, it could be possible that somebody still calls RAND_bytes() or\n> similar without going through pg_strong_random(), so we still need to\n> use USE_OPENSSL after forking. Per this argument, I am not sure I see\n> the point of the change in fork_process.c as it seems to me that \n> USE_OPENSSL_RANDOM should only be tied to pg_strong_random.c, and\n> you'd still get a compilation failure if trying to use\n> USE_OPENSSL_RANDOM without --with-openssl.\n\nThat's certainly true. The intention though is to make the code easier to\nfollow (more explicit/discoverable) for anyone trying to implement support for\nTLS backends. It's a very git grep intense process already as it is.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 26 Aug 2020 14:19:04 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Wed, Aug 26, 2020 at 2:19 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 26 Aug 2020, at 09:56, Michael Paquier <michael@paquier.xyz> wrote:\n> > On Tue, Aug 25, 2020 at 03:52:14PM +0200, Daniel Gustafsson wrote:\n>\n> >> The attached moves all invocations under the correct guards. RAND_poll() in\n> >> fork_process.c needs to happen for both OpenSSL and OpenSSL random, thus the\n> >> check for both.\n> >\n> > Yeah, it could be possible that somebody still calls RAND_bytes() or\n> > similar without going through pg_strong_random(), so we still need to\n> > use USE_OPENSSL after forking. Per this argument, I am not sure I see\n> > the point of the change in fork_process.c as it seems to me that\n> > USE_OPENSSL_RANDOM should only be tied to pg_strong_random.c, and\n> > you'd still get a compilation failure if trying to use\n> > USE_OPENSSL_RANDOM without --with-openssl.\n>\n> That's certainly true. The intention though is to make the code easier to\n> follow (more explicit/discoverable) for anyone trying to implement support for\n\nIs it really a reasonable usecase to use RAND_bytes() outside of both\npg_stroing_random() *and' outside of the openssl-specific files (like\nbe-secure-openssl.c)? Because it would only be those cases that would\nhave this case, right?\n\nIf anything, perhaps the call to RAND_poll() in fork_process.c should\nactually be a call to a strong_random_initialize() or something which\nwould have an implementation in pg_strong_random.c, thereby isolating\nthe openssl specific code in there? (And with a void implementation\nwithout openssl)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 3 Nov 2020 10:15:48 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Tue, Nov 03, 2020 at 10:15:48AM +0100, Magnus Hagander wrote:\n> On Wed, Aug 26, 2020 at 2:19 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> That's certainly true. The intention though is to make the code easier to\n>> follow (more explicit/discoverable) for anyone trying to implement support for\n> \n> Is it really a reasonable usecase to use RAND_bytes() outside of both\n> pg_stroing_random() *and' outside of the openssl-specific files (like\n> be-secure-openssl.c)? Because it would only be those cases that would\n> have this case, right?\n\nIt does not sound that strange to me to assume if some out-of-core\ncode makes use of that to fetch a random set of bytes. Now I don't\nknow of any code doing that. Who knows.\n\n> If anything, perhaps the call to RAND_poll() in fork_process.c should\n> actually be a call to a strong_random_initialize() or something which\n> would have an implementation in pg_strong_random.c, thereby isolating\n> the openssl specific code in there? (And with a void implementation\n> without openssl)\n\nI don't think that we have any need to go to such extent just for this\ncase, as RAND_poll() after forking a process is irrelevant in 1.1.1.\nWe are still many years away from removing its support though.\n\nNo idea if other SSL implementations would require such a thing.\nDaniel, what about NSS?\n--\nMichael",
"msg_date": "Tue, 3 Nov 2020 19:35:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "> On 3 Nov 2020, at 11:35, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Nov 03, 2020 at 10:15:48AM +0100, Magnus Hagander wrote:\n>> On Wed, Aug 26, 2020 at 2:19 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> That's certainly true. The intention though is to make the code easier to\n>>> follow (more explicit/discoverable) for anyone trying to implement support for\n>> \n>> Is it really a reasonable usecase to use RAND_bytes() outside of both\n>> pg_stroing_random() *and' outside of the openssl-specific files (like\n>> be-secure-openssl.c)? Because it would only be those cases that would\n>> have this case, right?\n> \n> It does not sound that strange to me to assume if some out-of-core\n> code makes use of that to fetch a random set of bytes. Now I don't\n> know of any code doing that. Who knows.\n\nEven if there are, I doubt such codepaths will be stumped by using\nUSE_OPENSSL_RANDOM for pg_strong_random code as opposed to USE_OPENSSL.\n\n>> If anything, perhaps the call to RAND_poll() in fork_process.c should\n>> actually be a call to a strong_random_initialize() or something which\n>> would have an implementation in pg_strong_random.c, thereby isolating\n>> the openssl specific code in there? (And with a void implementation\n>> without openssl)\n> \n> I don't think that we have any need to go to such extent just for this\n> case, as RAND_poll() after forking a process is irrelevant in 1.1.1.\n> We are still many years away from removing its support though.\n\nAgreed, I doubt we'll be able to retire our <1.1.1 suppport any time soon.\n\n> No idea if other SSL implementations would require such a thing.\n> Daniel, what about NSS?\n\nPK11_GenerateRandom in NSS requires an NSSContext to be set up after fork,\nwhich could be performed in such an _initialize function. The PRNG in NSPR has\na similar requirement (which may be the one the NSS patch end up using, not\nsure yet).\n\nI kind of like the idea of continuing to abstract this functionality, not\npulling in OpenSSL headers in fork_process.c is a neat bonus. I'd say it's\nworth implementing to see what it would imply, and am happy to do unless\nsomeone beats me to it.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 3 Nov 2020 13:00:00 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Tue, Nov 3, 2020 at 1:00 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 3 Nov 2020, at 11:35, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, Nov 03, 2020 at 10:15:48AM +0100, Magnus Hagander wrote:\n> >> On Wed, Aug 26, 2020 at 2:19 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>> That's certainly true. The intention though is to make the code easier to\n> >>> follow (more explicit/discoverable) for anyone trying to implement support for\n> >>\n> >> Is it really a reasonable usecase to use RAND_bytes() outside of both\n> >> pg_stroing_random() *and' outside of the openssl-specific files (like\n> >> be-secure-openssl.c)? Because it would only be those cases that would\n> >> have this case, right?\n> >\n> > It does not sound that strange to me to assume if some out-of-core\n> > code makes use of that to fetch a random set of bytes. Now I don't\n> > know of any code doing that. Who knows.\n>\n> Even if there are, I doubt such codepaths will be stumped by using\n> USE_OPENSSL_RANDOM for pg_strong_random code as opposed to USE_OPENSSL.\n>\n> >> If anything, perhaps the call to RAND_poll() in fork_process.c should\n> >> actually be a call to a strong_random_initialize() or something which\n> >> would have an implementation in pg_strong_random.c, thereby isolating\n> >> the openssl specific code in there? (And with a void implementation\n> >> without openssl)\n> >\n> > I don't think that we have any need to go to such extent just for this\n> > case, as RAND_poll() after forking a process is irrelevant in 1.1.1.\n> > We are still many years away from removing its support though.\n>\n> Agreed, I doubt we'll be able to retire our <1.1.1 suppport any time soon.\n>\n> > No idea if other SSL implementations would require such a thing.\n> > Daniel, what about NSS?\n>\n> PK11_GenerateRandom in NSS requires an NSSContext to be set up after fork,\n> which could be performed in such an _initialize function. The PRNG in NSPR has\n> a similar requirement (which may be the one the NSS patch end up using, not\n> sure yet).\n>\n> I kind of like the idea of continuing to abstract this functionality, not\n> pulling in OpenSSL headers in fork_process.c is a neat bonus. I'd say it's\n> worth implementing to see what it would imply, and am happy to do unless\n> someone beats me to it.\n\nYeah, if it's likely to be usable in the other implementations, then I\nthink we should definitely explore exactly what that kind of an\nabstraction would imply. Anything isolating the dependency on OpenSSL\nwould likely have to be done at that time anyway in that case, so\nbetter have it ready.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 3 Nov 2020 13:46:38 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Tue, Nov 03, 2020 at 01:46:38PM +0100, Magnus Hagander wrote:\n> On Tue, Nov 3, 2020 at 1:00 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> I kind of like the idea of continuing to abstract this functionality, not\n>> pulling in OpenSSL headers in fork_process.c is a neat bonus. I'd say it's\n>> worth implementing to see what it would imply, and am happy to do unless\n>> someone beats me to it.\n> \n> Yeah, if it's likely to be usable in the other implementations, then I\n> think we should definitely explore exactly what that kind of an\n> abstraction would imply. Anything isolating the dependency on OpenSSL\n> would likely have to be done at that time anyway in that case, so\n> better have it ready.\n\nWith the NSS argument, agreed. Documenting when this initialization\nroutine is used is important. And I think that we should force to\nlook at this code when adding a new SSL implementation to make sure\nthat we never see CVE-2013-1900 again, say:\nvoid\npg_strong_random_init(void)\n{\n#ifdef USE_SSL\n#ifdef USE_OPENSSL\n\tRAND_poll();\n#elif USE_NSS\n\t/* do the NSS initialization */\n#else\n\tHey, you are missing something here.\n#endif\n#endif\n}\n--\nMichael",
"msg_date": "Wed, 4 Nov 2020 10:01:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Wed, Nov 4, 2020 at 2:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Nov 03, 2020 at 01:46:38PM +0100, Magnus Hagander wrote:\n> > On Tue, Nov 3, 2020 at 1:00 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >> I kind of like the idea of continuing to abstract this functionality, not\n> >> pulling in OpenSSL headers in fork_process.c is a neat bonus. I'd say it's\n> >> worth implementing to see what it would imply, and am happy to do unless\n> >> someone beats me to it.\n> >\n> > Yeah, if it's likely to be usable in the other implementations, then I\n> > think we should definitely explore exactly what that kind of an\n> > abstraction would imply. Anything isolating the dependency on OpenSSL\n> > would likely have to be done at that time anyway in that case, so\n> > better have it ready.\n>\n> With the NSS argument, agreed. Documenting when this initialization\n> routine is used is important. And I think that we should force to\n> look at this code when adding a new SSL implementation to make sure\n> that we never see CVE-2013-1900 again, say:\n> void\n> pg_strong_random_init(void)\n> {\n> #ifdef USE_SSL\n> #ifdef USE_OPENSSL\n> RAND_poll();\n> #elif USE_NSS\n> /* do the NSS initialization */\n> #else\n> Hey, you are missing something here.\n> #endif\n> #endif\n> }\n\nYes, we should absolutely do that. We already do this for\npg_strong_random() itself, and we should definitely repeat the pattern\nin the init function.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 4 Nov 2020 10:05:48 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Wed, Nov 04, 2020 at 10:05:48AM +0100, Magnus Hagander wrote:\n> Yes, we should absolutely do that. We already do this for\n> pg_strong_random() itself, and we should definitely repeat the pattern\n> in the init function.\n\nThis poked at my curiosity, so I looked at it. The result looks\nindeed like an improvement to me, while taking care of the point of\nupthread to make the implementation stuff controlled only by\nUSE_OPENSSL_RANDOM. Per se the attached.\n\nThis could make random number generation predictible when an extension\ncalls directly RAND_bytes() if USE_OPENSSL_RANDOM is not used while\nbuilding with OpenSSL, but perhaps I am just too much of a pessimistic\nnature.\n--\nMichael",
"msg_date": "Thu, 5 Nov 2020 12:44:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "> On 5 Nov 2020, at 04:44, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Nov 04, 2020 at 10:05:48AM +0100, Magnus Hagander wrote:\n>> Yes, we should absolutely do that. We already do this for\n>> pg_strong_random() itself, and we should definitely repeat the pattern\n>> in the init function.\n> \n> This poked at my curiosity, so I looked at it. The result looks\n> indeed like an improvement to me, while taking care of the point of\n> upthread to make the implementation stuff controlled only by\n> USE_OPENSSL_RANDOM. Per se the attached.\n\nThis must check for USE_OPENSSL as well as per my original patch, since we'd\notherwise fail to perform post-fork initialization in case one use OpenSSL with\nanothe PRNG for pg_strong_random. That might be theoretical at this point, but\nif we ever support that and miss updating this it would be problematic.\n\n+#if defined(USE_OPENSSL_RANDOM)\n\nI'm not sure this comment adds any value, we currently have two non-TLS library\nPRNGs in pg_strong_random, so even if we add NSS it will at best be 50%:\n\n+ * Note that this applies normally to SSL implementations, so when\n+ * implementing a new one, be careful to consider this initialization\n+ * step.\n\n> This could make random number generation predictible when an extension\n> calls directly RAND_bytes() if USE_OPENSSL_RANDOM is not used while\n> building with OpenSSL, but perhaps I am just too much of a pessimistic\n> nature.\n\nAny extension blindly calling RAND_bytes and expecting there to automagically\nbe an OpenSSL library available seems questionable.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 5 Nov 2020 10:49:45 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Thu, Nov 05, 2020 at 10:49:45AM +0100, Daniel Gustafsson wrote:\n> This must check for USE_OPENSSL as well as per my original patch, since we'd\n> otherwise fail to perform post-fork initialization in case one use OpenSSL with\n> anothe PRNG for pg_strong_random. That might be theoretical at this point, but\n> if we ever support that and miss updating this it would be problematic.\n\nThat's actually the same point I tried to make at the end of my last\nemail, but worded differently, isn't it? In short we have\nUSE_OPENSSL, but !USE_OPENSSL_RANDOM and we still need an\ninitialization. We could just do something like the following:\n#ifdef USE_OPENSSL\n RAND_poll();\n#endif\n#if defined(USE_OPENSSL_RANDOM)\n /* OpenSSL is done above, because blah.. */\n#elif etc..\n[...]\n#error missing an init, pal.\n#endif\n\nOr do you jave something else in mind?\n\n> +#if defined(USE_OPENSSL_RANDOM)\n> \n> I'm not sure this comment adds any value, we currently have two non-TLS library\n> PRNGs in pg_strong_random, so even if we add NSS it will at best be 50%:\n\nI don't mind removing this part, the compilation hint may be enough,\nindeed.\n--\nMichael",
"msg_date": "Thu, 5 Nov 2020 21:12:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "> On 5 Nov 2020, at 13:12, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Nov 05, 2020 at 10:49:45AM +0100, Daniel Gustafsson wrote:\n>> This must check for USE_OPENSSL as well as per my original patch, since we'd\n>> otherwise fail to perform post-fork initialization in case one use OpenSSL with\n>> anothe PRNG for pg_strong_random. That might be theoretical at this point, but\n>> if we ever support that and miss updating this it would be problematic.\n> \n> That's actually the same point I tried to make at the end of my last\n> email, but worded differently, isn't it? \n\nAh, ok, then I failed to parse it that way. At least we are in agreement then\nwhich is good.\n\n> In short we have\n> USE_OPENSSL, but !USE_OPENSSL_RANDOM and we still need an\n> initialization. We could just do something like the following:\n> #ifdef USE_OPENSSL\n> RAND_poll();\n> #endif\n> #if defined(USE_OPENSSL_RANDOM)\n> /* OpenSSL is done above, because blah.. */\n> #elif etc..\n> [...]\n> #error missing an init, pal.\n> #endif\n> \n> Or do you jave something else in mind?\n\nWhat about the (hypothetical) situation where USE_OPENSSL_RANDOM is used\nwithout USE_OPENSSL? Wouldn't the below make sure we cover all bases?\n\n #if defined(USE_OPENSSL) || defined(USE_OPENSSL_RANDOM)\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 5 Nov 2020 13:18:15 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Thu, Nov 05, 2020 at 01:18:15PM +0100, Daniel Gustafsson wrote:\n> What about the (hypothetical) situation where USE_OPENSSL_RANDOM is used\n> without USE_OPENSSL? Wouldn't the below make sure we cover all bases?\n\nYou can actually try that combination, because it is possible today to\ncompile without --with-openssl but try to enforce USE_OPENSSL_RANDOM.\nThis leads to a compilation failure. I think that it is important to\nhave the #if/#elif business in the init function match the conditions\nof the main function.\n\n> #if defined(USE_OPENSSL) || defined(USE_OPENSSL_RANDOM)\n\nIt seems to me that this one would become incorrect if compiling with\nOpenSSL but select a random source that requires an initialization, as\nit would enforce only OpenSSL initialization all the time.\nTheoretical point now, of course, because such combination does not\nexist yet in the code.\n--\nMichael",
"msg_date": "Thu, 5 Nov 2020 21:28:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Thu, Nov 5, 2020 at 1:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Nov 05, 2020 at 01:18:15PM +0100, Daniel Gustafsson wrote:\n> > What about the (hypothetical) situation where USE_OPENSSL_RANDOM is used\n> > without USE_OPENSSL? Wouldn't the below make sure we cover all bases?\n>\n> You can actually try that combination, because it is possible today to\n> compile without --with-openssl but try to enforce USE_OPENSSL_RANDOM.\n> This leads to a compilation failure. I think that it is important to\n> have the #if/#elif business in the init function match the conditions\n> of the main function.\n\n+1 -- whatever those are, they should be the same.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 5 Nov 2020 13:56:27 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "> On 5 Nov 2020, at 13:28, Michael Paquier <michael@paquier.xyz> wrote:\n\n> It seems to me that this one would become incorrect if compiling with\n> OpenSSL but select a random source that requires an initialization, as\n> it would enforce only OpenSSL initialization all the time.\n\nRight, how about something like the attached (untested) diff?\n\n> Theoretical point now, of course, because such combination does not\n> exist yet in the code.\n\nNot yet, and potentially never will. Given the consequences of a PRNG which\nhasn't been properly initialized I think it's ok to be defensive in this\ncodepath however.\n\ncheers ./daniel",
"msg_date": "Thu, 5 Nov 2020 13:59:11 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Thu, Nov 05, 2020 at 01:59:11PM +0100, Daniel Gustafsson wrote:\n> Not yet, and potentially never will. Given the consequences of a PRNG which\n> hasn't been properly initialized I think it's ok to be defensive in this\n> codepath however.\n\n+ /*\n+ * In case the backend is using the PRNG from OpenSSL without being built\n+ * with support for OpenSSL, make sure to perform post-fork initialization.\n+ * If the backend is using OpenSSL then we have already performed this\n+ * step. The same version caveat as discussed in the comment above applies\n+ * here as well.\n+ */\n+#ifndef USE_OPENSSL\n+ RAND_poll();\n+#endif\n\nI still don't see the point of this extra complexity, as\nUSE_OPENSSL_RANDOM implies USE_OPENSSL, and we also call RAND_poll() a\ncouple of lines down in the main function under USE_OPENSSL_RANDOM.\nSo I would just remove this whole block, and replace the comment by a\nsimple \"initialization already done above\".\n--\nMichael",
"msg_date": "Fri, 6 Nov 2020 08:36:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "> On 6 Nov 2020, at 00:36, Michael Paquier <michael@paquier.xyz> wrote:\n\n> I still don't see the point of this extra complexity, as\n> USE_OPENSSL_RANDOM implies USE_OPENSSL,\n\nAs long as we're sure that we'll remember to fix this when that assumption no\nlonger holds (intentional or unintentional) then it's fine to skip and instead\nbe defensive in documentation rather than code.\n\ncheers ./daniel\n\n",
"msg_date": "Fri, 6 Nov 2020 12:08:11 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Fri, Nov 6, 2020 at 12:08 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 6 Nov 2020, at 00:36, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > I still don't see the point of this extra complexity, as\n> > USE_OPENSSL_RANDOM implies USE_OPENSSL,\n>\n> As long as we're sure that we'll remember to fix this when that assumption no\n> longer holds (intentional or unintentional) then it's fine to skip and instead\n> be defensive in documentation rather than code.\n\nI think the defensive-in-code instead of defensive-in-docs is a really\nstrong argument, so I have pushed it as such.\n\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 6 Nov 2020 13:31:44 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Fri, Nov 06, 2020 at 01:31:44PM +0100, Magnus Hagander wrote:\n> I think the defensive-in-code instead of defensive-in-docs is a really\n> strong argument, so I have pushed it as such.\n\nFine by me. Thanks for the commit.\n--\nMichael",
"msg_date": "Fri, 6 Nov 2020 22:27:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> I think the defensive-in-code instead of defensive-in-docs is a really\n> strong argument, so I have pushed it as such.\n\nI notice warnings that I think are caused by this patch on some buildfarm\nmembers, eg\n\n drongo | 2020-11-15 06:59:05 | C:\\prog\\bf\\root\\HEAD\\pgsql.build\\src\\port\\pg_strong_random.c(96,11): warning C4013: 'RAND_poll' undefined; assuming extern returning int [C:\\prog\\bf\\root\\HEAD\\pgsql.build\\postgres.vcxproj]\n drongo | 2020-11-15 06:59:05 | C:\\prog\\bf\\root\\HEAD\\pgsql.build\\src\\port\\pg_strong_random.c(96,11): warning C4013: 'RAND_poll' undefined; assuming extern returning int [C:\\prog\\bf\\root\\HEAD\\pgsql.build\\libpgport.vcxproj]\n drongo | 2020-11-15 06:59:05 | C:\\prog\\bf\\root\\HEAD\\pgsql.build\\src\\port\\pg_strong_random.c(96,11): warning C4013: 'RAND_poll' undefined; assuming extern returning int [C:\\prog\\bf\\root\\HEAD\\pgsql.build\\postgres.vcxproj]\n drongo | 2020-11-15 06:59:05 | C:\\prog\\bf\\root\\HEAD\\pgsql.build\\src\\port\\pg_strong_random.c(96,11): warning C4013: 'RAND_poll' undefined; assuming extern returning int [C:\\prog\\bf\\root\\HEAD\\pgsql.build\\libpgport.vcxproj]\n\n(bowerbird and hamerkop are showing the same).\n\nMy first thought about it was that this bit is busted:\n\n+#ifndef USE_OPENSSL\n+ RAND_poll();\n+#endif\n\nThe obvious problem with this is that if !USE_OPENSSL, we will not have\npulled in openssl's headers.\n\nHowever ... all these machines are pointing at line 96, which is not\nthat one but the one under \"#if defined(USE_OPENSSL)\". So I'm not sure\nwhat to make of that, except that a bit more finesse seems required.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 15 Nov 2020 12:16:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Sun, Nov 15, 2020 at 12:16:56PM -0500, Tom Lane wrote:\n> The obvious problem with this is that if !USE_OPENSSL, we will not have\n> pulled in openssl's headers.\n\nFWIW, I argued upthread against including this part because it is\nuseless: if not building with OpenSSL, we'll never have the base to be\nable to use RAND_poll().\n\n> However ... all these machines are pointing at line 96, which is not\n> that one but the one under \"#if defined(USE_OPENSSL)\". So I'm not sure\n> what to make of that, except that a bit more finesse seems required.\n\nThe build scripts of src/tools/msvc/ choose to not use OpenSSL as\nstrong random source even if building with OpenSSL. The top of the\nfile only includes openssl/rand.h if using USE_OPENSSL_RANDOM.\n\nThinking about that afresh, I think that we got that wrong here on\nthree points:\n- If attempting to use OpenSSL on Windows, let's just bite the bullet\nand use OpenSSL as random source, using Windows as source only when\nnot building with OpenSSL.\n- Instead of using a call to RAND_poll() that we know will never work,\nlet's just issue a compilation failure if attempting to use\nUSE_OPENSSL_RANDOM without USE_OPENSSL.\n- rand.h needs to be included under USE_OPENSSL.\n--\nMichael",
"msg_date": "Mon, 16 Nov 2020 09:20:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "> On 16 Nov 2020, at 01:20, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Sun, Nov 15, 2020 at 12:16:56PM -0500, Tom Lane wrote:\n>> The obvious problem with this is that if !USE_OPENSSL, we will not have\n>> pulled in openssl's headers.\n> \n> FWIW, I argued upthread against including this part because it is\n> useless: if not building with OpenSSL, we'll never have the base to be\n> able to use RAND_poll().\n\nHow do you mean? The OpenSSL PRNG can be used without setting up a context\netc, the code in pg_strong_random is all we need to use it without USE_OPENSSL\n(whether we'd like to is another story) or am I missing something?\n\n>> However ... all these machines are pointing at line 96, which is not\n>> that one but the one under \"#if defined(USE_OPENSSL)\". So I'm not sure\n>> what to make of that, except that a bit more finesse seems required.\n> \n> The build scripts of src/tools/msvc/ choose to not use OpenSSL as\n> strong random source even if building with OpenSSL. The top of the\n> file only includes openssl/rand.h if using USE_OPENSSL_RANDOM.\n\nThe fallout here is precisely the reason why I argued for belts and suspenders\nsuch that PRNG init is performed for (USE_OPENSSL || USE_OPENSSL_RANDOM). I\ndon't trust the assumption that if one is there other will always be there as\nwell as long as they are disjoint. Since we expose this PRNG to users, there\nis a vector for spooling the rand state via UUID generation in case the PRNG is\nfaulty and have predictability, so failing to protect the after-fork case can\nbe expensive. Granted, such vulnerabilities are rare but not inconcievable.\n\nNow, this patch didn't get the header inclusion right which is why thise broke.\n\n> Thinking about that afresh, I think that we got that wrong here on\n> three points:\n> - If attempting to use OpenSSL on Windows, let's just bite the bullet\n> and use OpenSSL as random source, using Windows as source only when\n> not building with OpenSSL.\n> - Instead of using a call to RAND_poll() that we know will never work,\n> let's just issue a compilation failure if attempting to use\n> USE_OPENSSL_RANDOM without USE_OPENSSL.\n\nTaking a step back, what is the usecase of USE_OPENSSL_RANDOM if we force it to\nbe equal to USE_OPENSSL? Shouldn't we in that case just have USE_OPENSSL,\nadjust the logic and remove the below comment from configure.ac which isn't\nreally telling the truth?\n\n # Select random number source\n #\n # You can override this logic by setting the appropriate USE_*RANDOM flag to 1\n # in the template or configure command line.\n\nI might be thick but I'm struggling to see the use for complications when we\ndon't support any pluggability. Having said that, it might be the sane way in\nthe end to forcibly use the TLS library as a randomness source should there be\none (FIPS compliance comes to mind), but then we might as well spell that out.\n\n> - rand.h needs to be included under USE_OPENSSL.\n\n\nIt needs to be included for both USE_OPENSSL and USE_OPENSSL_RANDOM unless we\ncombine them as per the above.\n\ncheers ./daniel\n\n",
"msg_date": "Mon, 16 Nov 2020 10:19:41 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Mon, Nov 16, 2020 at 10:19 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 16 Nov 2020, at 01:20, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Sun, Nov 15, 2020 at 12:16:56PM -0500, Tom Lane wrote:\n> >> The obvious problem with this is that if !USE_OPENSSL, we will not have\n> >> pulled in openssl's headers.\n> >\n> > FWIW, I argued upthread against including this part because it is\n> > useless: if not building with OpenSSL, we'll never have the base to be\n> > able to use RAND_poll().\n>\n> How do you mean? The OpenSSL PRNG can be used without setting up a context\n> etc, the code in pg_strong_random is all we need to use it without\n> USE_OPENSSL\n> (whether we'd like to is another story) or am I missing something?\n>\n> >> However ... all these machines are pointing at line 96, which is not\n> >> that one but the one under \"#if defined(USE_OPENSSL)\". So I'm not sure\n> >> what to make of that, except that a bit more finesse seems required.\n> >\n> > The build scripts of src/tools/msvc/ choose to not use OpenSSL as\n> > strong random source even if building with OpenSSL. The top of the\n> > file only includes openssl/rand.h if using USE_OPENSSL_RANDOM.\n>\n> The fallout here is precisely the reason why I argued for belts and\n> suspenders\n> such that PRNG init is performed for (USE_OPENSSL || USE_OPENSSL_RANDOM).\n> I\n> don't trust the assumption that if one is there other will always be there\n> as\n> well as long as they are disjoint. Since we expose this PRNG to users,\n> there\n> is a vector for spooling the rand state via UUID generation in case the\n> PRNG is\n> faulty and have predictability, so failing to protect the after-fork case\n> can\n> be expensive. 
Granted, such vulnerabilities are rare but not\n> inconcievable.\n>\n> Now, this patch didn't get the header inclusion right which is why thise\n> broke.\n>\n\n> > Thinking about that afresh, I think that we got that wrong here on\n> > three points:\n> > - If attempting to use OpenSSL on Windows, let's just bite the bullet\n> > and use OpenSSL as random source, using Windows as source only when\n> > not building with OpenSSL.\n> > - Instead of using a call to RAND_poll() that we know will never work,\n> > let's just issue a compilation failure if attempting to use\n> > USE_OPENSSL_RANDOM without USE_OPENSSL.\n>\n> Taking a step back, what is the usecase of USE_OPENSSL_RANDOM if we force\n> it to\n> be equal to USE_OPENSSL? Shouldn't we in that case just have USE_OPENSSL,\n> adjust the logic and remove the below comment from configure.ac which\n> isn't\n> really telling the truth?\n\n\n> # Select random number source\n> #\n> # You can override this logic by setting the appropriate USE_*RANDOM\n> flag to 1\n> # in the template or configure command line.\n>\n> I might be thick but I'm struggling to see the use for complications when\n> we\n> don't support any pluggability. Having said that, it might be the sane\n> way in\n> the end to forcibly use the TLS library as a randomness source should\n> there be\n> one (FIPS compliance comes to mind), but then we might as well spell that\n> out.\n>\n> > - rand.h needs to be included under USE_OPENSSL.\n>\n>\n> It needs to be included for both USE_OPENSSL and USE_OPENSSL_RANDOM unless\n> we\n> combine them as per the above.\n>\n\n\nI agree with those -- either we remove the ability to choose random source\nindependently of the SSL library (and then only use the windows crypto\nprovider or /dev/urandom as platform-specific choices when *no* SSL library\nis used), and in that case we should not have separate #ifdef's for them.\n\nOr we fix the includes. 
Which is obviously easier, but we should take the\ntime to do what we think is right long-term of course.\n\nKeeping two defines and an extra configure check when they mean the same\nthing seems like the worst combination of the two.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 16 Nov 2020 10:45:06 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> I agree with those -- either we remove the ability to choose random source\n> independently of the SSL library (and then only use the windows crypto\n> provider or /dev/urandom as platform-specific choices when *no* SSL library\n> is used), and in that case we should not have separate #ifdef's for them.\n> Or we fix the includes. Which is obviously easier, but we should take the\n> time to do what we think is right long-term of course.\n\nFWIW, I'd vote for the former. I think the presumption that OpenSSL's\nrandom-number machinery can be used without any other initialization is\nshaky as heck.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Nov 2020 10:06:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "> On 16 Nov 2020, at 16:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Magnus Hagander <magnus@hagander.net> writes:\n>> I agree with those -- either we remove the ability to choose random source\n>> independently of the SSL library (and then only use the windows crypto\n>> provider or /dev/urandom as platform-specific choices when *no* SSL library\n>> is used), and in that case we should not have separate #ifdef's for them.\n>> Or we fix the includes. Which is obviously easier, but we should take the\n>> time to do what we think is right long-term of course.\n> \n> FWIW, I'd vote for the former. I think the presumption that OpenSSL's\n> random-number machinery can be used without any other initialization is\n> shaky as heck.\n\nI tend to agree, randomness is complicated enough without adding a compile time\nextensibility which few (if anyone) will ever use. Attached is an attempt at\nthis.\n\ncheers ./daniel",
"msg_date": "Tue, 17 Nov 2020 21:24:30 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Tue, Nov 17, 2020 at 09:24:30PM +0100, Daniel Gustafsson wrote:\n> I tend to agree, randomness is complicated enough without adding a compile time\n> extensibility which few (if anyone) will ever use. Attached is an attempt at\n> this.\n\nGoing down to that, it seems to me that we could just remove\nUSE_WIN32_RANDOM (as this is implied by WIN32), as well as\nUSE_DEV_URANDOM because configure.ac checks for the existence of\n/dev/urandom, no? In short, configure.ac could be changed to check\nafter /dev/urandom if not using OpenSSL and not being on Windows.\n\n-elif test x\"$USE_WIN32_RANDOM\" = x\"1\" ; then\n+elif test x\"$PORTANME\" = x\"win32\" ; then\nTypo here, s/PORTANME/PORTNAME.\n--\nMichael",
"msg_date": "Wed, 18 Nov 2020 10:31:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "> On 18 Nov 2020, at 02:31, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Nov 17, 2020 at 09:24:30PM +0100, Daniel Gustafsson wrote:\n>> I tend to agree, randomness is complicated enough without adding a compile time\n>> extensibility which few (if anyone) will ever use. Attached is an attempt at\n>> this.\n> \n> Going down to that, it seems to me that we could just remove\n> USE_WIN32_RANDOM (as this is implied by WIN32), as well as\n> USE_DEV_URANDOM because configure.ac checks for the existence of\n> /dev/urandom, no? In short, configure.ac could be changed to check\n> after /dev/urandom if not using OpenSSL and not being on Windows.\n\nTechnically that is what it does, except for setting the USE_*RANDOM variables\nfor non-OpenSSL builds. We could skip that too, which I think is what you're\nproposing, but it seems to me that we'll end up with another set of entangled\nlogic in pg_strong_random if we do since there then needs to be precedence in\nchecking (one might be on Windows with OpenSSL for example, where OpenSSL >\nWindows API).\n\n> -elif test x\"$USE_WIN32_RANDOM\" = x\"1\" ; then\n> +elif test x\"$PORTANME\" = x\"win32\" ; then\n> Typo here, s/PORTANME/PORTNAME.\n\nFixed.\n\ncheers ./daniel",
"msg_date": "Wed, 18 Nov 2020 09:25:44 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 09:25:44AM +0100, Daniel Gustafsson wrote:\n> Technically that is what it does, except for setting the USE_*RANDOM variables\n> for non-OpenSSL builds. We could skip that too, which I think is what you're\n> proposing, but it seems to me that we'll end up with another set of entangled\n> logic in pg_strong_random if we do since there then needs to be precedence in\n> checking (one might be on Windows with OpenSSL for example, where OpenSSL >\n> Windows API).\n\nYes, I am suggesting to just remove both USE_*_RANDOM flags, and use\nthe following structure instead in pg_strong_random.c for both the\ninit and main functions:\n#ifdef USE_OPENSSL\n\t/* foo */\n#elif WIN32\n\t/* bar*/\n#else\n\t/* hoge urandom */\n#endif\n\nAnd complain in configure.ac if we miss urandom for the fallback case.\n\nNow, it would not be the first time I suggest something on this thread\nthat nobody likes :)\n--\nMichael",
"msg_date": "Wed, 18 Nov 2020 17:54:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "> On 18 Nov 2020, at 09:54, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Nov 18, 2020 at 09:25:44AM +0100, Daniel Gustafsson wrote:\n>> Technically that is what it does, except for setting the USE_*RANDOM variables\n>> for non-OpenSSL builds. We could skip that too, which I think is what you're\n>> proposing, but it seems to me that we'll end up with another set of entangled\n>> logic in pg_strong_random if we do since there then needs to be precedence in\n>> checking (one might be on Windows with OpenSSL for example, where OpenSSL >\n>> Windows API).\n> \n> Yes, I am suggesting to just remove both USE_*_RANDOM flags, and use\n> the following structure instead in pg_strong_random.c for both the\n> init and main functions:\n> #ifdef USE_OPENSSL\n> \t/* foo */\n> #elif WIN32\n> \t/* bar*/\n> #else\n> \t/* hoge urandom */\n> #endif\n> \n> And complain in configure.ac if we miss urandom for the fallback case.\n> \n> Now, it would not be the first time I suggest something on this thread\n> that nobody likes :)\n\nWhile it does simplify configure.ac, I'm just not a fan of the strict ordering\nwhich is required without the labels even implying it. But that might just be\nmy personal preference.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 18 Nov 2020 10:43:35 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Wed, Nov 18, 2020 at 10:43:35AM +0100, Daniel Gustafsson wrote:\n> While it does simplify configure.ac, I'm just not a fan of the strict ordering\n> which is required without the labels even implying it. But that might just be\n> my personal preference.\n\nI just looked at that, and the attached seems more intuitive to me.\nThere is more code removed, but not that much either.\n--\nMichael",
"msg_date": "Thu, 19 Nov 2020 12:34:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "> On 19 Nov 2020, at 04:34, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Nov 18, 2020 at 10:43:35AM +0100, Daniel Gustafsson wrote:\n>> While it does simplify configure.ac, I'm just not a fan of the strict ordering\n>> which is required without the labels even implying it. But that might just be\n>> my personal preference.\n> \n> I just looked at that, and the attached seems more intuitive to me.\n\nOk. I would add a strongly worded comment about the importance of the ordering\nsince that is now crucial not to break.\n\n-#ifdef USE_WIN32_RANDOM\n+#ifdef WIN32\n #include <wincrypt.h>\n #endif\n \n-#ifdef USE_WIN32_RANDOM\n+#ifdef WIN32\n /*\n * Cache a global crypto provider that only gets freed when the process\n * exits, in case we need random numbers more than once.\n@@ -39,7 +39,7 @@\n static HCRYPTPROV hProvider = 0;\n #endif\n\nThis will pull in headers and define hProvider for all Windows builds even if\nthey use OpenSSL, but perhaps that doesn't matter?\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 19 Nov 2020 10:25:20 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 4:34 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Nov 18, 2020 at 10:43:35AM +0100, Daniel Gustafsson wrote:\n> > While it does simplify configure.ac, I'm just not a fan of the strict ordering\n> > which is required without the labels even implying it. But that might just be\n> > my personal preference.\n>\n> I just looked at that, and the attached seems more intuitive to me.\n> There is more code removed, but not that much either.\n\nFirst -- your patch misses the comment in front of pg_strong_random\nwhich still says configure will set the USE_*_RANDOM macros.\n\nThat said, I agree with daniel that it's a bit \"scary\" that it's the\norder of ifdefs that now decide on the whole thing, especially since\nthere are two sets of those (one in the init function, one in the\nrandom one), which could potentially end up out of order if someone\nmakes a mistake.\n\nI'm thinking the code might get a lot cleaner if we just make a single\nset of ifdefs, even if that means repeating the function header. In\ntheory we could put them in different *.c files as well, but that\nseems overkill given how tiny they are.\n\nPatch is the same as your v3 in all parts except for the\npg_strong_random.c changes.\n\nIn summary, here's my suggestion for color of the bikeshed.\n\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 19 Nov 2020 11:00:40 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 11:00:40AM +0100, Magnus Hagander wrote:\n> I'm thinking the code might get a lot cleaner if we just make a single\n> set of ifdefs, even if that means repeating the function header. In\n> theory we could put them in different *.c files as well, but that\n> seems overkill given how tiny they are.\n\nIf you reorganize the code this way, I think that make coverage\n(genhtml mainly) would complain because the same function is defined\nmultiple times. I have fallen in this trap recently, with 2771fcee.\n--\nMichael",
"msg_date": "Thu, 19 Nov 2020 20:03:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 12:04 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Nov 19, 2020 at 11:00:40AM +0100, Magnus Hagander wrote:\n> > I'm thinking the code might get a lot cleaner if we just make a single\n> > set of ifdefs, even if that means repeating the function header. In\n> > theory we could put them in different *.c files as well, but that\n> > seems overkill given how tiny they are.\n>\n> If you reorganize the code this way, I think that make coverage\n> (genhtml mainly) would complain because the same function is defined\n> multiple times. I have fallen in this trap recently, with 2771fcee.\n\nUgh, that's pretty terrible.\n\nI guess the only way around that is then to split it up into separate\nfiles. And while I think this way makes the code a lot easier to read,\nand thereby safer, I'm not sure it's worth quite *that*.\n\nOr do you know of some other way to get around that?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 19 Nov 2020 21:49:05 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 09:49:05PM +0100, Magnus Hagander wrote:\n> Ugh, that's pretty terrible.\n\nI have spent some time testing this part this morning, and I can see\nthat genhtml does not complain with your patch. It looks like in the\ncase of 2771fce the tool got confused because the same function was\ngetting compiled twice for the backend and the frontend, but here you\nonly get one code path compiled depending on the option used.\n\n+#else /* not OPENSSL or WIN32 */\nI think you mean USE_OPENSSL or just OpenSSL here, but not \"OPENSSL\".\n\npg_strong_random.c needs a pgindent run, there are two inconsistent\ndiffs. Looks fine except for those nits.\n--\nMichael",
"msg_date": "Fri, 20 Nov 2020 11:31:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "> On 20 Nov 2020, at 03:31, Michael Paquier <michael@paquier.xyz> wrote\n\n> pg_strong_random.c needs a pgindent run, there are two inconsistent\n> diffs. Looks fine except for those nits.\n\nAgreed, this is the least complicated (and most readable) we can make this\nfile, especially if we add more providers. +1.\n\ncheers ./daniel\n\n\n",
"msg_date": "Fri, 20 Nov 2020 10:26:43 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
},
{
"msg_contents": "On Fri, Nov 20, 2020 at 3:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Nov 19, 2020 at 09:49:05PM +0100, Magnus Hagander wrote:\n> > Ugh, that's pretty terrible.\n>\n> I have spent some time testing this part this morning, and I can see\n> that genhtml does not complain with your patch. It looks like in the\n> case of 2771fce the tool got confused because the same function was\n> getting compiled twice for the backend and the frontend, but here you\n> only get one code path compiled depending on the option used.\n>\n> +#else /* not OPENSSL or WIN32 */\n> I think you mean USE_OPENSSL or just OpenSSL here, but not \"OPENSSL\".\n\nYeah. Well, I either meant \"OpenSSL or Win32\" or \"USE_OPENSSL or\nWIN32\", and ended up with some incorrect mix :) Thanks, fixed.\n\n\n> pg_strong_random.c needs a pgindent run, there are two inconsistent\n> diffs. Looks fine except for those nits.\n\nI saw only one after this, but maybe I ended up auto-fixing it when I\nchanged that define.\n\nThat said, pgindent now run, and patch pushed.\n\nThanks!\n\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Fri, 20 Nov 2020 13:59:18 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Move OpenSSL random under USE_OPENSSL_RANDOM"
}
] |
[
{
"msg_contents": "Hi Tom,\n\nPer Coverity.\n\nThe SearchSysCache1 allows return NULL and at function AlterStatistics,\nhas one mistake, lack of, check of return, which enables a dereference NULL\npointer,\nat function heap_modify_tuple.\n\nWhile there is room for improvement.\nAvoid calling SearchSysCache1 and table_open if the user \"is not the owner\nof the existing statistics object\".\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 25 Aug 2020 12:42:17 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Dereference null return value (NULL_RETURNS)\n (src/backend/commands/statscmds.c)"
},
{
"msg_contents": "Em ter., 25 de ago. de 2020 às 12:42, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Hi Tom,\n>\n> Per Coverity.\n>\n> The SearchSysCache1 allows return NULL and at function AlterStatistics,\n> has one mistake, lack of, check of return, which enables a dereference\n> NULL pointer,\n> at function heap_modify_tuple.\n>\n> While there is room for improvement.\n> Avoid calling SearchSysCache1 and table_open if the user \"is not the owner\n> of the existing statistics object\".\n>\nAfter a long time, finally this bug has been fixed.\nhttps://github.com/postgres/postgres/commit/6d554e3fcd6fb8be2dbcbd3521e2947ed7a552cb\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 13 Feb 2022 17:26:59 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Dereference null return value (NULL_RETURNS)\n (src/backend/commands/statscmds.c)"
}
] |
[
{
"msg_contents": "Hi Tom,\n\nPer Coverity.\n\nThe variable root_offsets is read at line 1641, but, at this point,\nthe content is unknown, so it is impossible to test works well.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 25 Aug 2020 13:19:33 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix Uninitialized scalar variable (UNINIT)\n (src/backend/access/heap/heapam_handler.c)"
},
{
"msg_contents": "On 2020-Aug-25, Ranier Vilela wrote:\n\n> The variable root_offsets is read at line 1641, but, at this point,\n> the content is unknown, so it is impossible to test works well.\n\nSurely it is set by heap_get_root_tuples() in line 1347? The root_blkno\nvariable is used exclusively to know whether root_offsets has been\ninitialized for the current block.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 25 Aug 2020 17:06:34 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix Uninitialized scalar variable (UNINIT)\n (src/backend/access/heap/heapam_handler.c)"
},
{
"msg_contents": "Em ter., 25 de ago. de 2020 às 18:06, Alvaro Herrera <\nalvherre@2ndquadrant.com> escreveu:\n\n> On 2020-Aug-25, Ranier Vilela wrote:\n>\n> > The variable root_offsets is read at line 1641, but, at this point,\n> > the content is unknown, so it is impossible to test works well.\n>\n> Surely it is set by heap_get_root_tuples() in line 1347? The root_blkno\n> variable is used exclusively to know whether root_offsets has been\n> initialized for the current block.\n>\nHi Álvaro,\n\n20. Condition hscan->rs_cblock != root_blkno, taking false branch.\n\nIf the variable hscan->rs_cblock is InvalidBlockNumber the test can\nprotect root_offsets fail.\n\nThe var root_blkno only is checked at line 1853.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 25 Aug 2020 19:18:46 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix Uninitialized scalar variable (UNINIT)\n (src/backend/access/heap/heapam_handler.c)"
},
{
"msg_contents": "On 2020-Aug-25, Ranier Vilela wrote:\n\n> If the variable hscan->rs_cblock is InvalidBlockNumber the test can\n> protect root_offsets fail.\n\nWhen does that happen?\n\n> The var root_blkno only is checked at line 1853.\n\nThat's a different function.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 25 Aug 2020 18:45:10 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix Uninitialized scalar variable (UNINIT)\n (src/backend/access/heap/heapam_handler.c)"
},
{
"msg_contents": "Em ter., 25 de ago. de 2020 às 19:45, Alvaro Herrera <\nalvherre@2ndquadrant.com> escreveu:\n\n> On 2020-Aug-25, Ranier Vilela wrote:\n>\n> > If the variable hscan->rs_cblock is InvalidBlockNumber the test can\n> > protect root_offsets fail.\n>\n> When does that happen?\n>\nAt first pass into the while loop?\nhscan->rs_cblock is InvalidBlockNumber, what happens?\n\n\n> > The var root_blkno only is checked at line 1853.\n>\n> That's a different function.\n>\nMy mistake. Find editor.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 25 Aug 2020 19:54:33 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix Uninitialized scalar variable (UNINIT)\n (src/backend/access/heap/heapam_handler.c)"
},
{
"msg_contents": "Em ter., 25 de ago. de 2020 às 19:54, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em ter., 25 de ago. de 2020 às 19:45, Alvaro Herrera <\n> alvherre@2ndquadrant.com> escreveu:\n>\n>> On 2020-Aug-25, Ranier Vilela wrote:\n>>\n>> > If the variable hscan->rs_cblock is InvalidBlockNumber the test can\n>> > protect root_offsets fail.\n>>\n>> When does that happen?\n>>\n> At first pass into the while loop?\n> hscan->rs_cblock is InvalidBlockNumber, what happens?\n>\n> Other things.\n1. Even heap_get_root_tuples at line 1347, be called.\nDoes it fill all roots_offsets?\nroot_offsets[offnum - 1] is secure at this point (line 1641 or is trash)?\n\n2. hscan->rs_cbuf is constant?\nif (hscan->rs_cblock != root_blkno)\n{\nPage page = BufferGetPage(hscan->rs_cbuf);\n\nLockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE);\nheap_get_root_tuples(page, root_offsets);\nLockBuffer(hscan->rs_cbuf, BUFFER_LOCK_UNLOCK);\n\nroot_blkno = hscan->rs_cblock;\n}\n\nThis can move outside while loop?\nAm I wrong or hscan do not change?\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 25 Aug 2020 20:10:06 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix Uninitialized scalar variable (UNINIT)\n (src/backend/access/heap/heapam_handler.c)"
},
{
"msg_contents": "On 2020-Aug-25, Ranier Vilela wrote:\n\n> Em ter., 25 de ago. de 2020 às 19:45, Alvaro Herrera <\n> alvherre@2ndquadrant.com> escreveu:\n> \n> > On 2020-Aug-25, Ranier Vilela wrote:\n> >\n> > > If the variable hscan->rs_cblock is InvalidBlockNumber the test can\n> > > protect root_offsets fail.\n> >\n> > When does that happen?\n>\n> At first pass into the while loop?\n> hscan->rs_cblock is InvalidBlockNumber, what happens?\n\nNo, it is set when the page is read.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 25 Aug 2020 19:13:20 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix Uninitialized scalar variable (UNINIT)\n (src/backend/access/heap/heapam_handler.c)"
},
{
"msg_contents": "Em ter., 25 de ago. de 2020 às 20:13, Alvaro Herrera <\nalvherre@2ndquadrant.com> escreveu:\n\n> On 2020-Aug-25, Ranier Vilela wrote:\n>\n> > Em ter., 25 de ago. de 2020 às 19:45, Alvaro Herrera <\n> > alvherre@2ndquadrant.com> escreveu:\n> > \n> > > On 2020-Aug-25, Ranier Vilela wrote:\n> > >\n> > > > If the variable hscan->rs_cblock is InvalidBlockNumber the test can\n> > > > protect root_offsets fail.\n> > >\n> > > When does that happen?\n> >\n> > At first pass into the while loop?\n> > hscan->rs_cblock is InvalidBlockNumber, what happens?\n>\n> No, it is set when the page is read.\n>\nAnd it is guaranteed that, rs_cblock is not InvalidBlockNumber when the\npage is read?\n\nRanier Vilela",
"msg_date": "Tue, 25 Aug 2020 20:15:02 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix Uninitialized scalar variable (UNINIT)\n (src/backend/access/heap/heapam_handler.c)"
},
{
"msg_contents": "On 2020-Aug-25, Ranier Vilela wrote:\n\n> 1. Even heap_get_root_tuples at line 1347, be called.\n> Does it fill all roots_offsets?\n\nYes -- read the comments there.\n\n> 2. hscan->rs_cbuf is constant?\n> if (hscan->rs_cblock != root_blkno)\n\nIt is the buffer that contains the given block. Those two things move\nin unison. See heapgettup and heapgettup_pagemode.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 25 Aug 2020 19:18:08 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix Uninitialized scalar variable (UNINIT)\n (src/backend/access/heap/heapam_handler.c)"
},
{
"msg_contents": "On 2020-Aug-25, Ranier Vilela wrote:\n\n> And it is guaranteed that, rs_cblock is not InvalidBlockNumber when the\n> page is read?\n\nIt could be InvalidBlockNumber if sufficient neutrinos hit the memory\nbank and happen to set all the bits in the block number.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 25 Aug 2020 19:20:18 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix Uninitialized scalar variable (UNINIT)\n (src/backend/access/heap/heapam_handler.c)"
},
{
"msg_contents": "Em ter., 25 de ago. de 2020 às 20:20, Alvaro Herrera <\nalvherre@2ndquadrant.com> escreveu:\n\n> On 2020-Aug-25, Ranier Vilela wrote:\n>\n> > And it is guaranteed that, rs_cblock is not InvalidBlockNumber when the\n> > page is read?\n>\n> It could be InvalidBlockNumber if sufficient neutrinos hit the memory\n> bank and happen to set all the bits in the block number.\n>\nkkk, I think it's enough for me.\nI believe that PostgreSQL will not run on the ISS yet.\n\nRanier Vilela\n\nEm ter., 25 de ago. de 2020 às 20:20, Alvaro Herrera <alvherre@2ndquadrant.com> escreveu:On 2020-Aug-25, Ranier Vilela wrote:\n\n> And it is guaranteed that, rs_cblock is not InvalidBlockNumber when the\n> page is read?\n\nIt could be InvalidBlockNumber if sufficient neutrinos hit the memory\nbank and happen to set all the bits in the block number.kkk, I think it's enough for me.I believe that PostgreSQL will not run on the ISS yet.Ranier Vilela",
"msg_date": "Tue, 25 Aug 2020 20:22:10 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix Uninitialized scalar variable (UNINIT)\n (src/backend/access/heap/heapam_handler.c)"
},
{
"msg_contents": "On 2020-Aug-25, Ranier Vilela wrote:\n\n> kkk, I think it's enough for me.\n> I believe that PostgreSQL will not run on the ISS yet.\n\nActually, I believe there are some satellites that run Postgres -- not\n100% sure about this.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 25 Aug 2020 19:29:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix Uninitialized scalar variable (UNINIT)\n (src/backend/access/heap/heapam_handler.c)"
},
{
"msg_contents": "Em ter., 25 de ago. de 2020 às 20:29, Alvaro Herrera <\nalvherre@2ndquadrant.com> escreveu:\n\n> On 2020-Aug-25, Ranier Vilela wrote:\n>\n> > kkk, I think it's enough for me.\n> > I believe that PostgreSQL will not run on the ISS yet.\n>\n> Actually, I believe there are some satellites that run Postgres -- not\n> 100% sure about this.\n>\nYeah, ESA uses:\nhttps://resources.2ndquadrant.com/european-space-agency-case-study-download\n\nIn fact, Postgres is to be congratulated.\nGuess who didn't make any bug?\nhttps://changochen.github.io/publication/squirrel_ccs2020.pdf\n\"Sqirrel has successfully detected 63 bugs from tested DBMS,including 51\nbugs from SQLite, 7 from MySQL, and 5 from MariaDB.\"\n\nRanier Vilela\n\nEm ter., 25 de ago. de 2020 às 20:29, Alvaro Herrera <alvherre@2ndquadrant.com> escreveu:On 2020-Aug-25, Ranier Vilela wrote:\n\n> kkk, I think it's enough for me.\n> I believe that PostgreSQL will not run on the ISS yet.\n\nActually, I believe there are some satellites that run Postgres -- not\n100% sure about this.Yeah, ESA uses:https://resources.2ndquadrant.com/european-space-agency-case-study-downloadIn fact, Postgres is to be congratulated.Guess who didn't make any bug?https://changochen.github.io/publication/squirrel_ccs2020.pdf\"Sqirrel has successfully detected 63 bugs from tested DBMS,including 51 bugs from SQLite, 7 from MySQL, and 5 from MariaDB.\"Ranier Vilela",
"msg_date": "Tue, 25 Aug 2020 21:22:36 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix Uninitialized scalar variable (UNINIT)\n (src/backend/access/heap/heapam_handler.c)"
}
] |
[
{
"msg_contents": "Hi,\n\nPer Coverity.\n\nARRAY vs SINGLETON\n\nIf variable htids is accessed like array, but is a simple pointer, can be\n\"This might corrupt or misinterpret adjacent memory locations.\"\n\nat line 723:\n/* Form standard non-pivot tuple */\nitup->t_info &= ~INDEX_ALT_TID_MASK;\nhtids = &itup->t_tid;\n\n1. Here htids is a SINGLETON?\n\nSo:\n\nAt line 723:\nhtids[ui++] = *BTreeTupleGetPostingN(origtuple, i);\n\n2. htids is accessed how ARRAY?\n\nAnd is acessed at positions 0 and 1, according (nhtids == 1):\nAssert(ui == nhtids);\n\nThe htids[1] are destroying something at this memory position.\n\nregards,\nRanier Vilela\n\nHi,Per Coverity.ARRAY vs SINGLETONIf variable htids is accessed like array, but is a simple pointer, can be\"This might corrupt or misinterpret adjacent memory locations.\"at line 723:\t\t/* Form standard non-pivot tuple */\t\titup->t_info &= ~INDEX_ALT_TID_MASK;\t\thtids = &itup->t_tid;1. Here htids is a SINGLETON?So:At line 723:htids[ui++] = *BTreeTupleGetPostingN(origtuple, i);2. htids is accessed how ARRAY?And is acessed at positions 0 and 1, according (nhtids == 1):\tAssert(ui == nhtids);The htids[1] are destroying something at this memory position.regards,Ranier Vilela",
"msg_date": "Tue, 25 Aug 2020 14:13:42 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Out-of-bounds access (ARRAY_VS_SINGLETON)\n (src/backend/access/nbtree/nbtdedup.c)"
},
{
"msg_contents": "On Tue, Aug 25, 2020 at 10:15 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> If variable htids is accessed like array, but is a simple pointer, can be\n> \"This might corrupt or misinterpret adjacent memory locations.\"\n\nThis exact Coverity complaint has already been discussed, and marked\nas a false positive on the community's Coverity installation.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 25 Aug 2020 11:07:54 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-bounds access (ARRAY_VS_SINGLETON)\n (src/backend/access/nbtree/nbtdedup.c)"
}
] |
[
{
"msg_contents": "I see a compiler warning on git master:\n\n sharedfileset.c:288:8: warning: variable ‘found’ set but not used [-Wunused-but-set-variable]\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 26 Aug 2020 12:02:51 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Compiler warning"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I see a compiler warning on git master:\n> sharedfileset.c:288:8: warning: variable ‘found’ set but not used [-Wunused-but-set-variable]\n\nCould get rid of the variable entirely: change the \"break\" to \"return\"\nand then the final Assert can be \"Assert(false)\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Aug 2020 12:08:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Compiler warning"
}
] |
[
{
"msg_contents": "The comment in procarray.c that described GlobalVisDataRels instead \nmentioned GlobalVisCatalogRels a second time. Patch attached.",
"msg_date": "Wed, 26 Aug 2020 16:22:51 -0500",
"msg_from": "Jim Nasby <nasbyj@amazon.com>",
"msg_from_op": true,
"msg_subject": "Typo in procarray.c comment about GlobalVisDataRels"
},
{
"msg_contents": "On Wed, Aug 26, 2020 at 04:22:51PM -0500, Jim Nasby wrote:\n> The comment in procarray.c that described GlobalVisDataRels instead\n> mentioned GlobalVisCatalogRels a second time. Patch attached.\n\nThanks, Jim. Applied.\n--\nMichael",
"msg_date": "Thu, 27 Aug 2020 16:46:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Typo in procarray.c comment about GlobalVisDataRels"
}
] |
[
{
"msg_contents": "If action and qual reference same object in CREATE RULE, it results in\ncreating duplicate entries in pg_depend for it. Doesn't pose any harm, just\nunnecessarily bloats pg_depend. Reference InsertRule(). I think should be\nable to avoid adding duplicate entries.\n\nDon't know if this behaviour was discussed earlier, I didn't find it on\nsearch.\nWe accidentally encountered it while enhancing a catalog check tool for\nGreenplum Database.\n\nFor example (from rules test):\ncreate table rtest_t5 (a int4, b text);\ncreate table rtest_t7 (a int4, b text);\n\ncreate rule rtest_t5_ins as on insert to rtest_t5\nwhere new.a > 15 do\ninsert into rtest_t7 values (new.a, new.b);\n\n# select classid::regclass, refobjid::regclass,* from pg_depend where\nrefobjid='rtest_t5'::regclass and deptype = 'n';\n classid | refobjid | classid | objid | objsubid | refclassid | refobjid\n| refobjsubid | deptype\n------------+----------+---------+-------+----------+------------+----------+-------------+---------\n pg_rewrite | rtest_t5 | 2618 | 16457 | 0 | 1259 | 16445\n| 1 | n\n pg_rewrite | rtest_t5 | 2618 | 16457 | 0 | 1259 | 16445\n| 1 | n\n pg_rewrite | rtest_t5 | 2618 | 16457 | 0 | 1259 | 16445\n| 2 | n\n(3 rows)\n\n\n-- \n*Ashwin Agrawal (VMware)*\n\nIf action and qual reference same object in CREATE RULE, it results in creating duplicate entries in pg_depend for it. Doesn't pose any harm, just unnecessarily bloats pg_depend. Reference InsertRule(). 
I think should be able to avoid adding duplicate entries.Don't know if this behaviour was discussed earlier, I didn't find it on search.We accidentally encountered it while enhancing a catalog check tool for Greenplum Database.For example (from rules test):create table rtest_t5 (a int4, b text);create table rtest_t7 (a int4, b text);create rule rtest_t5_ins as on insert to rtest_t5\t\twhere new.a > 15 do\tinsert into rtest_t7 values (new.a, new.b);# select classid::regclass, refobjid::regclass,* from pg_depend where refobjid='rtest_t5'::regclass and deptype = 'n'; classid | refobjid | classid | objid | objsubid | refclassid | refobjid | refobjsubid | deptype ------------+----------+---------+-------+----------+------------+----------+-------------+--------- pg_rewrite | rtest_t5 | 2618 | 16457 | 0 | 1259 | 16445 | 1 | n pg_rewrite | rtest_t5 | 2618 | 16457 | 0 | 1259 | 16445 | 1 | n pg_rewrite | rtest_t5 | 2618 | 16457 | 0 | 1259 | 16445 | 2 | n(3 rows)-- Ashwin Agrawal (VMware)",
"msg_date": "Wed, 26 Aug 2020 14:39:53 -0700",
"msg_from": "Ashwin Agrawal <ashwinstar@gmail.com>",
"msg_from_op": true,
"msg_subject": "CREATE RULE may generate duplicate entries in pg_depend"
},
{
"msg_contents": "Ashwin Agrawal <ashwinstar@gmail.com> writes:\n> If action and qual reference same object in CREATE RULE, it results in\n> creating duplicate entries in pg_depend for it. Doesn't pose any harm, just\n> unnecessarily bloats pg_depend.\n\nYeah, we generally don't try that hard to prevent duplicate pg_depend\nentries. It's relatively easy to get rid of them in limited contexts\nlike a single expression, but over a wider scope, I doubt it's worth\nthe trouble.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Aug 2020 17:43:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE RULE may generate duplicate entries in pg_depend"
}
] |
[
{
"msg_contents": "Greeting.\n\nI do see the README says we support bushy plans and I also see bushy\nplans in real life (for example tpc-h Q20) like below. However I don't know\nhow it is generated with the algorithm in join_search_one_lev since it\nalways\nmake_rels_by_clause_join with joinrel[1] which is initial_rels which is\nbaserel.\nAm I missing something?\n\n===\n Sort\n Sort Key: supplier.s_name\n -> Nested Loop Semi Join\n -> Nested Loop\n Join Filter: (supplier.s_nationkey = nation.n_nationkey)\n -> Index Scan using supplier_pkey on supplier\n -> Materialize\n -> Seq Scan on nation\n Filter: (n_name = 'KENYA'::bpchar)\n -> Nested Loop\n -> Index Scan using idx_partsupp_suppkey on partsupp\n Index Cond: (ps_suppkey = supplier.s_suppkey)\n Filter: ((ps_availqty)::numeric > (SubPlan 1))\n SubPlan 1\n -> Aggregate\n -> Index Scan using idx_lineitem_part_supp on\nlineitem\n Index Cond: ((l_partkey =\npartsupp.ps_partkey) AND (l_suppkey = partsupp.ps_suppkey))\n Filter: ((l_shipdate >= '01-JAN-97\n00:00:00'::timestamp without time zone) AND (l_shipdate < '01-JAN-98\n00:00:00'::timestamp without time zone))\n -> Index Scan using part_pkey on part\n Index Cond: (p_partkey = partsupp.ps_partkey)\n Filter: ((p_name)::text ~~ 'lavender%'::text)\n(21 rows)\n\n\n-- \nBest Regards\nAndy Fan\n\nGreeting. I do see the README says we support bushy plans and I also see bushyplans in real life (for example tpc-h Q20) like below. However I don't knowhow it is generated with the algorithm in join_search_one_lev since it alwaysmake_rels_by_clause_join with joinrel[1] which is initial_rels which is baserel. 
Am I missing something?=== Sort Sort Key: supplier.s_name -> Nested Loop Semi Join -> Nested Loop Join Filter: (supplier.s_nationkey = nation.n_nationkey) -> Index Scan using supplier_pkey on supplier -> Materialize -> Seq Scan on nation Filter: (n_name = 'KENYA'::bpchar) -> Nested Loop -> Index Scan using idx_partsupp_suppkey on partsupp Index Cond: (ps_suppkey = supplier.s_suppkey) Filter: ((ps_availqty)::numeric > (SubPlan 1)) SubPlan 1 -> Aggregate -> Index Scan using idx_lineitem_part_supp on lineitem Index Cond: ((l_partkey = partsupp.ps_partkey) AND (l_suppkey = partsupp.ps_suppkey)) Filter: ((l_shipdate >= '01-JAN-97 00:00:00'::timestamp without time zone) AND (l_shipdate < '01-JAN-98 00:00:00'::timestamp without time zone)) -> Index Scan using part_pkey on part Index Cond: (p_partkey = partsupp.ps_partkey) Filter: ((p_name)::text ~~ 'lavender%'::text)(21 rows)-- Best RegardsAndy Fan",
"msg_date": "Thu, 27 Aug 2020 07:17:07 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "How is bushy plans generated in join_search_one_lev"
},
{
"msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> I do see the README says we support bushy plans and I also see bushy\n> plans in real life (for example tpc-h Q20) like below. However I don't know\n> how it is generated with the algorithm in join_search_one_lev since it\n> always\n> make_rels_by_clause_join with joinrel[1] which is initial_rels which is\n> baserel.\n\nHmm? Bushy plans are created by the second loop in join_search_one_level,\nstarting about line 150 in joinrels.c.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Aug 2020 20:05:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How is bushy plans generated in join_search_one_lev"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 8:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > I do see the README says we support bushy plans and I also see bushy\n> > plans in real life (for example tpc-h Q20) like below. However I don't\n> know\n> > how it is generated with the algorithm in join_search_one_lev since it\n> > always\n> > make_rels_by_clause_join with joinrel[1] which is initial_rels which is\n> > baserel.\n>\n> Hmm? Bushy plans are created by the second loop in join_search_one_level,\n> starting about line 150 in joinrels.c.\n>\n> regards, tom lane\n>\n\nYes.. I missed the second loop:(:(:(\n\n-- \nBest Regards\nAndy Fan\n\nOn Thu, Aug 27, 2020 at 8:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Andy Fan <zhihui.fan1213@gmail.com> writes:\n> I do see the README says we support bushy plans and I also see bushy\n> plans in real life (for example tpc-h Q20) like below. However I don't know\n> how it is generated with the algorithm in join_search_one_lev since it\n> always\n> make_rels_by_clause_join with joinrel[1] which is initial_rels which is\n> baserel.\n\nHmm? Bushy plans are created by the second loop in join_search_one_level,\nstarting about line 150 in joinrels.c.\n\n regards, tom lane\nYes.. I missed the second loop:(:(:( -- Best RegardsAndy Fan",
"msg_date": "Thu, 27 Aug 2020 08:46:23 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How is bushy plans generated in join_search_one_lev"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile digging into a different patch involving DROP INDEX CONCURRENTLY\nand replica indexes, I have found that the handling of indisreplident\nis inconsistent for invalid indexes:\nhttps://www.postgresql.org/message-id/20200827022835.GM2017@paquier.xyz\n\nIn short, imagine the following sequence:\nCREATE TABLE test_replica_identity_4 (id int NOT NULL);\nCREATE UNIQUE INDEX test_replica_index_4 ON\n test_replica_identity_4(id);\nALTER TABLE test_replica_identity_4 REPLICA IDENTITY\n USING INDEX test_replica_index_4;\n-- imagine that this fails in the second transaction used in\n-- index_drop().\nDROP INDEX CONCURRENTLY test_replica_index_4;\n-- here the index still exists, is invalid, marked with\n-- indisreplident.\nCREATE UNIQUE INDEX test_replica_index_4_2 ON\n test_replica_identity_4(id);\nALTER TABLE test_replica_identity_4 REPLICA IDENTITY\n USING INDEX test_replica_index_4_2;\n-- set back the index to a valid state.\nREINDEX INDEX test_replica_index_4;\n-- And here we have two valid indexes usable as replica identities.\nSELECT indexrelid::regclass, indisvalid, indisreplident FROM pg_index\n WHERE indexrelid IN ('test_replica_index_4'::regclass,\n 'test_replica_index_4_2'::regclass);\n indexrelid | indisvalid | indisreplident\n------------------------+------------+----------------\n test_replica_index_4_2 | t | t\n test_replica_index_4 | t | t\n(2 rows)\n\t \nYou can just use the following trick to emulate a failure in DIC:\n@@ -2195,6 +2195,9 @@ index_drop(Oid indexId, bool concurrent, bool\nconcurrent_lock_mode)\n if (userIndexRelation->rd_rel->relkind != RELKIND_PARTITIONED_INDEX)\n RelationDropStorage(userIndexRelation);\n+ if (concurrent)\n+ elog(ERROR, \"burp\");\n\nThis results in some problems for ALTER TABLE in tablecmds.c, as it is\npossible to reach a state in the catalogs where we have *multiple*\nindexes marked with indisreplindex for REPLICA_IDENTITY_INDEX set on\nthe parent table. 
Even worse, this messes up with\nRelationGetIndexList() as it would set rd_replidindex in the relcache\nfor the last index found marked with indisreplident, depending on the\norder where the indexes are scanned, you may get a different replica\nindex loaded.\n\nI think that this problem is similar to indisclustered, and that we\nhad better set indisreplident to false when clearing indisvalid for an\nindex concurrently dropped. This would prevent problems with ALTER\nTABLE of course, but also the relcache.\n\nAny objections to the attached? I am not sure that this is worth a\nbackpatch as that's unlikely going to be a problem in the field, so\nI'd like to fix this issue only on HEAD. This exists since 9.4 and\nthe introduction of replica identities.\n--\nMichael",
"msg_date": "Thu, 27 Aug 2020 11:57:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "pg_index.indisreplident and invalid indexes"
},
{
"msg_contents": "> On Thu, Aug 27, 2020 at 11:57:21AM +0900, Michael Paquier wrote:\n>\n> I think that this problem is similar to indisclustered, and that we\n> had better set indisreplident to false when clearing indisvalid for an\n> index concurrently dropped. This would prevent problems with ALTER\n> TABLE of course, but also the relcache.\n>\n> Any objections to the attached? I am not sure that this is worth a\n> backpatch as that's unlikely going to be a problem in the field, so\n> I'd like to fix this issue only on HEAD. This exists since 9.4 and\n> the introduction of replica identities.\n\nThanks for the patch. It sounds right, so no objections from me. But I\nwonder if something similar has to be done also for\nindex_concurrently_swap function?\n\n\t/*\n\t * Mark the new index as valid, and the old index as invalid similarly to\n\t * what index_set_state_flags() does.\n\t */\n\tnewIndexForm->indisvalid = true;\n\toldIndexForm->indisvalid = false;\n\toldIndexForm->indisclustered = false;\n\n\n",
"msg_date": "Fri, 28 Aug 2020 10:15:37 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_index.indisreplident and invalid indexes"
},
{
"msg_contents": "On Fri, Aug 28, 2020 at 10:15:37AM +0200, Dmitry Dolgov wrote:\n> Thanks for the patch. It sounds right, so no objections from me. But I\n> wonder if something similar has to be done also for\n> index_concurrently_swap function?\n\nAs of index.c, this already happens:\n /* Preserve indisreplident in the new index */\n newIndexForm->indisreplident = oldIndexForm->indisreplident;\n oldIndexForm->indisreplident = false;\n\nIn short, the new concurrent index is created first with\nindisreplident = false, and when swapping the old and new indexes, the\nnew index inherits the setting of the old one, and the old one planned\nfor drop uses indisreplident = false when swapping.\n--\nMichael",
"msg_date": "Fri, 28 Aug 2020 17:21:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_index.indisreplident and invalid indexes"
}
] |
[
{
"msg_contents": "Hi,\n\nIs this something to worry about, or is it another problem with the\nanalysis tool, that nobody cares about?\nclang 10 (64 bits)\npostgres 14 (latest)\n\n31422==ERROR: LeakSanitizer: detected memory leaks\n\nDirect leak of 4560 byte(s) in 1 object(s) allocated from:\n #0 0x50e33d in malloc\n(/usr/src/postgres/tmp_install/usr/local/pgsql/bin/postgres+0x50e33d)\n #1 0x186d52f in ConvertTimeZoneAbbrevs\n/usr/src/postgres/src/backend/utils/adt/datetime.c:4511:8\n #2 0x1d9b5e9 in load_tzoffsets\n/usr/src/postgres/src/backend/utils/misc/tzparser.c:465:12\n #3 0x1d8ca3f in check_timezone_abbreviations\n/usr/src/postgres/src/backend/utils/misc/guc.c:11389:11\n #4 0x1d6a398 in call_string_check_hook\n/usr/src/postgres/src/backend/utils/misc/guc.c:11056:7\n #5 0x1d68f29 in parse_and_validate_value\n/usr/src/postgres/src/backend/utils/misc/guc.c:6870:10\n #6 0x1d6567d in set_config_option\n/usr/src/postgres/src/backend/utils/misc/guc.c:7473:11\n #7 0x1d7f8f4 in ProcessGUCArray\n/usr/src/postgres/src/backend/utils/misc/guc.c:10608:10\n #8 0x9d0c8d in ApplySetting\n/usr/src/postgres/src/backend/catalog/pg_db_role_setting.c:256:4\n #9 0x1d4ad93 in process_settings\n/usr/src/postgres/src/backend/utils/init/postinit.c:1174:2\n #10 0x1d48e39 in InitPostgres\n/usr/src/postgres/src/backend/utils/init/postinit.c:1059:2\n #11 0x14a2c1a in BackgroundWorkerInitializeConnectionByOid\n/usr/src/postgres/src/backend/postmaster/postmaster.c:5758:2\n #12 0x853feb in ParallelWorkerMain\n/usr/src/postgres/src/backend/access/transam/parallel.c:1373:2\n #13 0x146e5fb in StartBackgroundWorker\n/usr/src/postgres/src/backend/postmaster/bgworker.c:813:2\n #14 0x14af69b in do_start_bgworker\n/usr/src/postgres/src/backend/postmaster/postmaster.c:5879:4\n #15 0x14a1487 in maybe_start_bgworkers\n/usr/src/postgres/src/backend/postmaster/postmaster.c:6104:9\n #16 0x149e5aa in sigusr1_handler\n/usr/src/postgres/src/backend/postmaster/postmaster.c:5269:3\n #17 0x7fcffa75a3bf 
(/lib/x86_64-linux-gnu/libpthread.so.0+0x153bf)\n #18 0x149d655 in PostmasterMain\n/usr/src/postgres/src/backend/postmaster/postmaster.c:1414:11\n #19 0x108402e in main /usr/src/postgres/src/backend/main/main.c:209:3\n #20 0x7fcffa54e0b2 in __libc_start_main\n/build/glibc-YYA7BZ/glibc-2.31/csu/../csu/libc-start.c:308:16\n\nDirect leak of 1020 byte(s) in 15 object(s) allocated from:\n #0 0x4fa6e4 in strdup\n(/usr/src/postgres/tmp_install/usr/local/pgsql/bin/postgres+0x4fa6e4)\n #1 0x1d6a1c7 in guc_strdup\n/usr/src/postgres/src/backend/utils/misc/guc.c:4889:9\n #2 0x1d7efc7 in set_config_sourcefile\n/usr/src/postgres/src/backend/utils/misc/guc.c:7696:15\n #3 0x1d7c95e in ProcessConfigFileInternal\n/usr/src/postgres/src/backend/utils/misc/guc-file.l:478:4\n #4 0x1d5b33f in ProcessConfigFile\n/usr/src/postgres/src/backend/utils/misc/guc-file.l:156:9\n #5 0x1d5ae7d in SelectConfigFiles\n/usr/src/postgres/src/backend/utils/misc/guc.c:5674:2\n #6 0x149b6ce in PostmasterMain\n/usr/src/postgres/src/backend/postmaster/postmaster.c:884:7\n\nRanier Vilela\n\nHi,Is this something to worry about, or is it another problem with the analysis tool, that nobody cares about?clang 10 (64 bits)postgres 14 (latest)31422==ERROR: LeakSanitizer: detected memory leaksDirect leak of 4560 byte(s) in 1 object(s) allocated from: #0 0x50e33d in malloc (/usr/src/postgres/tmp_install/usr/local/pgsql/bin/postgres+0x50e33d) #1 0x186d52f in ConvertTimeZoneAbbrevs /usr/src/postgres/src/backend/utils/adt/datetime.c:4511:8 #2 0x1d9b5e9 in load_tzoffsets /usr/src/postgres/src/backend/utils/misc/tzparser.c:465:12 #3 0x1d8ca3f in check_timezone_abbreviations /usr/src/postgres/src/backend/utils/misc/guc.c:11389:11 #4 0x1d6a398 in call_string_check_hook /usr/src/postgres/src/backend/utils/misc/guc.c:11056:7 #5 0x1d68f29 in parse_and_validate_value /usr/src/postgres/src/backend/utils/misc/guc.c:6870:10 #6 0x1d6567d in set_config_option /usr/src/postgres/src/backend/utils/misc/guc.c:7473:11 #7 0x1d7f8f4 in 
ProcessGUCArray /usr/src/postgres/src/backend/utils/misc/guc.c:10608:10 #8 0x9d0c8d in ApplySetting /usr/src/postgres/src/backend/catalog/pg_db_role_setting.c:256:4 #9 0x1d4ad93 in process_settings /usr/src/postgres/src/backend/utils/init/postinit.c:1174:2 #10 0x1d48e39 in InitPostgres /usr/src/postgres/src/backend/utils/init/postinit.c:1059:2 #11 0x14a2c1a in BackgroundWorkerInitializeConnectionByOid /usr/src/postgres/src/backend/postmaster/postmaster.c:5758:2 #12 0x853feb in ParallelWorkerMain /usr/src/postgres/src/backend/access/transam/parallel.c:1373:2 #13 0x146e5fb in StartBackgroundWorker /usr/src/postgres/src/backend/postmaster/bgworker.c:813:2 #14 0x14af69b in do_start_bgworker /usr/src/postgres/src/backend/postmaster/postmaster.c:5879:4 #15 0x14a1487 in maybe_start_bgworkers /usr/src/postgres/src/backend/postmaster/postmaster.c:6104:9 #16 0x149e5aa in sigusr1_handler /usr/src/postgres/src/backend/postmaster/postmaster.c:5269:3 #17 0x7fcffa75a3bf (/lib/x86_64-linux-gnu/libpthread.so.0+0x153bf) #18 0x149d655 in PostmasterMain /usr/src/postgres/src/backend/postmaster/postmaster.c:1414:11 #19 0x108402e in main /usr/src/postgres/src/backend/main/main.c:209:3 #20 0x7fcffa54e0b2 in __libc_start_main /build/glibc-YYA7BZ/glibc-2.31/csu/../csu/libc-start.c:308:16Direct leak of 1020 byte(s) in 15 object(s) allocated from: #0 0x4fa6e4 in strdup (/usr/src/postgres/tmp_install/usr/local/pgsql/bin/postgres+0x4fa6e4) #1 0x1d6a1c7 in guc_strdup /usr/src/postgres/src/backend/utils/misc/guc.c:4889:9 #2 0x1d7efc7 in set_config_sourcefile /usr/src/postgres/src/backend/utils/misc/guc.c:7696:15 #3 0x1d7c95e in ProcessConfigFileInternal /usr/src/postgres/src/backend/utils/misc/guc-file.l:478:4 #4 0x1d5b33f in ProcessConfigFile /usr/src/postgres/src/backend/utils/misc/guc-file.l:156:9 #5 0x1d5ae7d in SelectConfigFiles /usr/src/postgres/src/backend/utils/misc/guc.c:5674:2 #6 0x149b6ce in PostmasterMain /usr/src/postgres/src/backend/postmaster/postmaster.c:884:7Ranier Vilela",
"msg_date": "Thu, 27 Aug 2020 00:44:30 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Clang Address Sanitizer (Postgres14) Detected Memory Leaks"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Is this something to worry about, or is it another problem with the\n> analysis tool, that nobody cares about?\n\nAs far as the first one goes, I'd bet on buggy analysis tool.\nThe complained-of allocation is evidently for the \"extra\" state\nassociated with the timezone GUC variable, and AFAICS guc.c is\nquite careful not to leak those. It is true that the block will\nstill be allocated at process exit, but that doesn't make it a leak.\n\nI did not trace the second one in any detail, but I don't believe\nguc.c leaks sourcefile strings either. There's only one place\nwhere it overwrites them, and that place frees the old value.\n\nIf these allocations do genuinely get leaked in some code path,\nthis report is of exactly zero help in finding where; and I'm\nafraid I'm not very motivated to go looking for a bug that probably\ndoesn't exist.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 Aug 2020 11:46:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Clang Address Sanitizer (Postgres14) Detected Memory Leaks"
},
{
"msg_contents": "Em qui., 27 de ago. de 2020 às 12:46, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Is this something to worry about, or is it another problem with the\n> > analysis tool, that nobody cares about?\n>\n> As far as the first one goes, I'd bet on buggy analysis tool.\n> The complained-of allocation is evidently for the \"extra\" state\n> associated with the timezone GUC variable, and AFAICS guc.c is\n> quite careful not to leak those. It is true that the block will\n> still be allocated at process exit, but that doesn't make it a leak.\n>\n> I did not trace the second one in any detail, but I don't believe\n> guc.c leaks sourcefile strings either. There's only one place\n> where it overwrites them, and that place frees the old value.\n>\n> If these allocations do genuinely get leaked in some code path,\n> this report is of exactly zero help in finding where; and I'm\n> afraid I'm not very motivated to go looking for a bug that probably\n> doesn't exist.\n>\nHi Tom,\nthanks for taking a look at this.\n\nI tried to find where the zone table is freed, without success.\nIt would be a big surprise for me, if this tool is buggy.\nAnyway, it's just a sample of the total report, which is 10 mb\n(postmaster.log), done with the regression tests.\n\nregards,\nRanier Vilela\n\nEm qui., 27 de ago. de 2020 às 12:46, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> Is this something to worry about, or is it another problem with the\n> analysis tool, that nobody cares about?\n\nAs far as the first one goes, I'd bet on buggy analysis tool.\nThe complained-of allocation is evidently for the \"extra\" state\nassociated with the timezone GUC variable, and AFAICS guc.c is\nquite careful not to leak those. 
It is true that the block will\nstill be allocated at process exit, but that doesn't make it a leak.\n\nI did not trace the second one in any detail, but I don't believe\nguc.c leaks sourcefile strings either. There's only one place\nwhere it overwrites them, and that place frees the old value.\n\nIf these allocations do genuinely get leaked in some code path,\nthis report is of exactly zero help in finding where; and I'm\nafraid I'm not very motivated to go looking for a bug that probably\ndoesn't exist.Hi Tom,thanks for taking a look at this.I tried to find where the zone table is freed, without success.It would be a big surprise for me, if this tool is buggy.Anyway, it's just a sample of the total report, which is 10 mb (postmaster.log), done with the regression tests.regards,Ranier Vilela",
"msg_date": "Thu, 27 Aug 2020 13:54:19 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Clang Address Sanitizer (Postgres14) Detected Memory Leaks"
},
{
"msg_contents": "More reports.\nMemory Sanitizer:\n\nrunning bootstrap script ... ==40179==WARNING: MemorySanitizer:\nuse-of-uninitialized-value\n #0 0x538cfc1 in pg_comp_crc32c_sb8\n/usr/src/postgres/src/port/pg_crc32c_sb8.c:80:4\n #1 0x533a0c0 in pg_comp_crc32c_choose\n/usr/src/postgres/src/port/pg_crc32c_sse42_choose.c:61:9\n #2 0xebbdae in BootStrapXLOG\n/usr/src/postgres/src/backend/access/transam/xlog.c:5293:2\n #3 0xfc5867 in AuxiliaryProcessMain\n/usr/src/postgres/src/backend/bootstrap/bootstrap.c:437:4\n #4 0x26a12c3 in main /usr/src/postgres/src/backend/main/main.c:201:3\n #5 0x7f035d0e90b2 in __libc_start_main\n/build/glibc-YYA7BZ/glibc-2.31/csu/../csu/libc-start.c:308:16\n #6 0x495afd in _start\n(/usr/src/postgres/tmp_install/usr/local/pgsql/bin/postgres+0x495afd)\n\n Uninitialized value was stored to memory at\n #0 0x538cbaa in pg_comp_crc32c_sb8\n/usr/src/postgres/src/port/pg_crc32c_sb8.c:72:15\n #1 0x533a0c0 in pg_comp_crc32c_choose\n/usr/src/postgres/src/port/pg_crc32c_sse42_choose.c:61:9\n #2 0xebbdae in BootStrapXLOG\n/usr/src/postgres/src/backend/access/transam/xlog.c:5293:2\n #3 0xfc5867 in AuxiliaryProcessMain\n/usr/src/postgres/src/backend/bootstrap/bootstrap.c:437:4\n #4 0x26a12c3 in main /usr/src/postgres/src/backend/main/main.c:201:3\n #5 0x7f035d0e90b2 in __libc_start_main\n/build/glibc-YYA7BZ/glibc-2.31/csu/../csu/libc-start.c:308:16\n\n Uninitialized value was stored to memory at\n #0 0x538c836 in pg_comp_crc32c_sb8\n/usr/src/postgres/src/port/pg_crc32c_sb8.c:57:11\n #1 0x533a0c0 in pg_comp_crc32c_choose\n/usr/src/postgres/src/port/pg_crc32c_sse42_choose.c:61:9\n #2 0xebbdae in BootStrapXLOG\n/usr/src/postgres/src/backend/access/transam/xlog.c:5293:2\n #3 0xfc5867 in AuxiliaryProcessMain\n/usr/src/postgres/src/backend/bootstrap/bootstrap.c:437:4\n #4 0x26a12c3 in main /usr/src/postgres/src/backend/main/main.c:201:3\n #5 0x7f035d0e90b2 in __libc_start_main\n/build/glibc-YYA7BZ/glibc-2.31/csu/../csu/libc-start.c:308:16\n\n Uninitialized 
value was stored to memory at\n    #0 0x49b666 in __msan_memcpy\n(/usr/src/postgres/tmp_install/usr/local/pgsql/bin/postgres+0x49b666)\n    #1 0xebbb70 in BootStrapXLOG\n/usr/src/postgres/src/backend/access/transam/xlog.c:5288:2\n    #2 0xfc5867 in AuxiliaryProcessMain\n/usr/src/postgres/src/backend/bootstrap/bootstrap.c:437:4\n    #3 0x26a12c3 in main /usr/src/postgres/src/backend/main/main.c:201:3\n    #4 0x7f035d0e90b2 in __libc_start_main\n/build/glibc-YYA7BZ/glibc-2.31/csu/../csu/libc-start.c:308:16\n\n  Uninitialized value was created by an allocation of 'checkPoint' in the\nstack frame of function 'BootStrapXLOG'\n    #0 0xeb9f50 in BootStrapXLOG\n/usr/src/postgres/src/backend/access/transam/xlog.c:5194\n\nThis line solve the alert:\n(xlog.c) 5193:\nmemset(&checkPoint, 0, sizeof(checkPoint));\n\nI'm starting to doubt this tool.\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 27 Aug 2020 20:50:26 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Clang Address Sanitizer (Postgres14) Detected Memory Leaks"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> More reports.\n> Memory Sanitizer:\n> running bootstrap script ... ==40179==WARNING: MemorySanitizer:\n> use-of-uninitialized-value\n\nIf you're going to run tests like that, you need to account for the\nknown exceptions shown in src/tools/valgrind.supp.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 Aug 2020 20:00:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Clang Address Sanitizer (Postgres14) Detected Memory Leaks"
}
] |
[
{
"msg_contents": "Hi,\n\nPer Clang UBSan\nClang 10 (64 bits)\nPostgres 14 (latest)\n\n2020-08-27 01:02:14.930 -03 client backend[42432] pg_regress/create_table\nSTATEMENT: create table defcheck_0 partition of defcheck for values in (0);\nindexcmds.c:1162:22: runtime error: null pointer passed as argument 2,\nwhich is declared to never be null\n/usr/include/string.h:44:28: note: nonnull attribute specified here\nSUMMARY: UndefinedBehaviorSanitizer: undefined-behavior indexcmds.c:1162:22\nin\nclog.c:299:10: runtime error: null pointer passed as argument 1, which is\ndeclared to never be null\n/usr/include/string.h:65:33: note: nonnull attribute specified here\nSUMMARY: UndefinedBehaviorSanitizer: undefined-behavior clog.c:299:10 in\n\nindexcmds.c (1162):\nmemcpy(part_oids, partdesc->oids, sizeof(Oid) * nparts);\n\nclog.c (299):\nmemcmp(subxids, MyProc->subxids.xids,\n nsubxids * sizeof(TransactionId)) == 0)\n\nxact.c (5285)\nmemcpy(&workspace[i], s->childXids,\n s->nChildXids * sizeof(TransactionId));\n\nsnapmgr.c (590)\nmemcpy(CurrentSnapshot->xip, sourcesnap->xip,\n sourcesnap->xcnt * sizeof(TransactionId));\nsnapmgr.c (594)\nmemcpy(CurrentSnapshot->subxip, sourcesnap->subxip,\n sourcesnap->subxcnt * sizeof(TransactionId));\n\ncopyfuncs.c:1190\nCOPY_POINTER_FIELD(uniqColIdx, from->uniqNumCols * sizeof(AttrNumber));\n\n1.STATEMENT: CREATE TABLESPACE regress_tblspacewith LOCATION\n'/usr/src/postgres/src/test/regress/testtablespace' WITH\n(some_nonexistent_parameter = true);\nclog.c:299:10: runtime error: null pointer passed as argument 1, which is\ndeclared to never be null\n2.STATEMENT: CREATE TABLE testschema.dflt (a int PRIMARY KEY USING INDEX\nTABLESPACE regress_tblspace) PARTITION BY LIST (a);\nindexcmds.c:1162:22: runtime error: null pointer passed as argument 2,\nwhich is declared to never be null\n3.STATEMENT: SELECT bool 'nay' AS error;\nclog.c:299:10: runtime error: null pointer passed as argument 1, which is\ndeclared to never be null\n4.STATEMENT: SELECT 
U&'wrong: +0061' UESCAPE '+';\nclog.c:299:10: runtime error: null pointer passed as argument 1, which is\ndeclared to never be null\n5. STATEMENT: ALTER TABLE circles ADD EXCLUDE USING gist\n (c1 WITH &&, (c2::circle) WITH &&);\nxact.c:5285:25: runtime error: null pointer passed as argument 2, which is\ndeclared to never be null\n6.STATEMENT: COMMENT ON CONSTRAINT the_constraint ON DOMAIN\nno_comments_dom IS 'another bad comment';\nsnapmgr.c:590:31: runtime error: null pointer passed as argument 2, which\nis declared to never be null\n7.STATEMENT: create trigger my_table_col_update_trig\n after update of b on my_table referencing new table as new_table\n for each statement execute procedure dump_insert();\nclog.c:299:10: runtime error: null pointer passed as argument 1, which is\ndeclared to never be null\n/usr/include/string.h:65:33: note: nonnull attribute specified here\nSUMMARY: UndefinedBehaviorSanitizer: undefined-behavior clog.c:299:10 in\nxact.c:5285:25: runtime error: null pointer passed as argument 2, which is\ndeclared to never be null\n/usr/include/string.h:44:28: note: nonnull attribute specified here\nSUMMARY: UndefinedBehaviorSanitizer: undefined-behavior xact.c:5285:25 in\nsnapmgr.c:590:31: runtime error: null pointer passed as argument 2, which\nis declared to never be null\n/usr/include/string.h:44:28: note: nonnull attribute specified here\nSUMMARY: UndefinedBehaviorSanitizer: undefined-behavior snapmgr.c:590:31 in\nsnapmgr.c:594:34: runtime error: null pointer passed as argument 2, which\nis declared to never be null\n/usr/include/string.h:44:28: note: nonnull attribute specified here\nSUMMARY: UndefinedBehaviorSanitizer: undefined-behavior snapmgr.c:594:34 in\nclog.c:299:10: runtime error: null pointer passed as argument 1, which is\ndeclared to never be null\n/usr/include/string.h:65:33: note: nonnull attribute specified here\nSUMMARY: UndefinedBehaviorSanitizer: undefined-behavior clog.c:299:10 in\nclog.c:299:10: runtime error: null pointer 
passed as argument 1, which is\ndeclared to never be null\n/usr/include/string.h:65:33: note: nonnull attribute specified here\nSUMMARY: UndefinedBehaviorSanitizer: undefined-behavior clog.c:299:10 in 8.\nclog.c:299:10: runtime error: null pointer passed as argument 1, which is\ndeclared to never be null\n/usr/include/string.h:65:33: note: nonnull attribute specified here\nSUMMARY: UndefinedBehaviorSanitizer: undefined-behavior clog.c:299:10 in\nclog.c:299:10: runtime error: null pointer passed as argument 1, which is\ndeclared to never be null\n/usr/include/string.h:65:33: note: nonnull attribute specified here\nSUMMARY: UndefinedBehaviorSanitizer: undefined-behavior clog.c:299:10 in\n8.STATEMENT: select array_fill(1, array[[1,2],[3,4]]);\ncopyfuncs.c:1190:2: runtime error: null pointer passed as argument 2, which\nis declared to never be null\n\nI stopped counting clog.c (299).\nIf anyone wants, the full report, it has 2mb.\n\nRanier Vilela",
"msg_date": "Thu, 27 Aug 2020 02:00:40 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "On 2020-Aug-27, Ranier Vilela wrote:\n\n> indexcmds.c (1162):\n> memcpy(part_oids, partdesc->oids, sizeof(Oid) * nparts);\n\nLooks legit, and at least per commit 13bba02271dc we do fix such things,\neven if it's useless in practice.\n\nGiven that no buildfarm member has ever complained, this exercise seems\npretty pointless.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 27 Aug 2020 12:57:20 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 1:57 PM Alvaro Herrera <\nalvherre@2ndquadrant.com> wrote:\n\n> On 2020-Aug-27, Ranier Vilela wrote:\n>\n> > indexcmds.c (1162):\n> > memcpy(part_oids, partdesc->oids, sizeof(Oid) * nparts);\n>\n> Looks legit, and at least per commit 13bba02271dc we do fix such things,\n> even if it's useless in practice.\n>\n> Given that no buildfarm member has ever complained, this exercise seems\n> pretty pointless.\n>\nHi Álvaro,\nIf we are passing a null pointer in these places and it should not be done,\nit is a sign that perhaps these calls should not be made, and\nthey can be avoided.\nThis would eliminate undefined behavior and save some cycles?\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 27 Aug 2020 14:05:30 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 1:57 PM Alvaro Herrera <\nalvherre@2ndquadrant.com> wrote:\n\n> On 2020-Aug-27, Ranier Vilela wrote:\n>\n> > indexcmds.c (1162):\n> > memcpy(part_oids, partdesc->oids, sizeof(Oid) * nparts);\n>\n> Looks legit, and at least per commit 13bba02271dc we do fix such things,\n> even if it's useless in practice.\n>\n> Given that no buildfarm member has ever complained, this exercise seems\n> pretty pointless.\n>\nSee at:\nhttps://postgrespro.com/list/thread-id/1870065\n\"NULL passed as an argument to memcmp() in parse_func.c\n<https://postgrespro.com/list/id/BLU437-SMTP48A5B2099E7134AC6BE7C7F2A10@phx.gbl>\n\"\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 27 Aug 2020 14:11:11 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "On 2020-Aug-27, Ranier Vilela wrote:\n\n> If we are passing a null pointer in these places and it should not be done,\n> it is a sign that perhaps these calls should not or should not be made, and\n> they can be avoided.\n\nFeel free to send a patch.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 27 Aug 2020 13:20:11 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 12:57:20PM -0400, Alvaro Herrera wrote:\n> On 2020-Aug-27, Ranier Vilela wrote:\n> > indexcmds.c (1162):\n> > memcpy(part_oids, partdesc->oids, sizeof(Oid) * nparts);\n> \n> Looks legit, and at least per commit 13bba02271dc we do fix such things,\n> even if it's useless in practice.\n> \n> Given that no buildfarm member has ever complained, this exercise seems\n> pretty pointless.\n\nLater decision to stop changing such code:\nhttps://postgr.es/m/flat/e1a26ece-7057-a234-d87e-4ce1cdc9eaa0@2ndquadrant.com\n\n\n",
"msg_date": "Thu, 27 Aug 2020 19:42:08 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Thu, Aug 27, 2020 at 12:57:20PM -0400, Alvaro Herrera wrote:\n>> Looks legit, and at least per commit 13bba02271dc we do fix such things,\n>> even if it's useless in practice.\n>> Given that no buildfarm member has ever complained, this exercise seems\n>> pretty pointless.\n\n> Later decision to stop changing such code:\n> https://postgr.es/m/flat/e1a26ece-7057-a234-d87e-4ce1cdc9eaa0@2ndquadrant.com\n\nI agree that this seems academic for any sane implementation of memcmp\nand friends. If the function is not allowed to fetch or store any bytes\nwhen the length parameter is zero, which it certainly is not, then how\ncould it matter whether the pointer parameter is NULL? It would be\ninteresting to know the rationale behind the C standard's claim that\nthis case should be undefined.\n\nHaving said that, I think that the actual risk here has to do not with\nwhat memcmp() might do, but with what gcc might do in code surrounding\nthe call, once it's armed with the assumption that any pointer we pass\nto memcmp() could not be null. See\n\nhttps://www.postgresql.org/message-id/flat/BLU437-SMTP48A5B2099E7134AC6BE7C7F2A10%40phx.gbl\n\nIt's surely not hard to visualize cases where necessary code could\nbe optimized away if the compiler thinks it's entitled to assume\nsuch things.\n\nIn other words, the C standard made a damfool decision and now we need\nto deal with the consequences of that as perpetrated by other fools.\nStill, it's all hypothetical so far --- does anyone have examples of\nactual rather than theoretical issues?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 Aug 2020 23:11:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 11:11:47PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Thu, Aug 27, 2020 at 12:57:20PM -0400, Alvaro Herrera wrote:\n> >> Looks legit, and at least per commit 13bba02271dc we do fix such things,\n> >> even if it's useless in practice.\n> >> Given that no buildfarm member has ever complained, this exercise seems\n> >> pretty pointless.\n> \n> > Later decision to stop changing such code:\n> > https://postgr.es/m/flat/e1a26ece-7057-a234-d87e-4ce1cdc9eaa0@2ndquadrant.com\n\n> I think that the actual risk here has to do not with\n> what memcmp() might do, but with what gcc might do in code surrounding\n> the call, once it's armed with the assumption that any pointer we pass\n> to memcmp() could not be null. See\n> \n> https://www.postgresql.org/message-id/flat/BLU437-SMTP48A5B2099E7134AC6BE7C7F2A10%40phx.gbl\n> \n> It's surely not hard to visualize cases where necessary code could\n> be optimized away if the compiler thinks it's entitled to assume\n> such things.\n\nGood point. We could pick from a few levels of concern:\n\n- No concern: reject changes serving only to remove this class of deviation.\n This is today's policy.\n- Medium concern: accept fixes, but the buildfarm continues not to break in\n the face of new deviations. This will make some code uglier, but we'll be\n ready against some compiler growing the optimization you describe.\n- High concern: I remove -fno-sanitize=nonnull-attribute from buildfarm member\n thorntail. In addition to the drawback of the previous level, this will\n create urgent work for committers introducing new deviations (or introducing\n test coverage that unearths old deviations). This is our current response\n to Valgrind complaints, for example.\n\n\n",
"msg_date": "Thu, 27 Aug 2020 23:04:17 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "On Fri, Aug 28, 2020 at 12:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> In other words, the C standard made a damfool decision and now we need\n> to deal with the consequences of that as perpetrated by other fools.\n> Still, it's all hypothetical so far --- does anyone have examples of\n> actual rather than theoretical issues?\n>\nI still think the value of this alert would be to avoid the call.\nWhy does memcmp have to deal with a NULL value?\nclog.c:299 is an outlier; there are hundreds of calls to it in\nthe report.\nIt must be very difficult to correct, but if TransactionIdSetPageStatus was\nnot called in these cases, memcmp would not have to deal with the NULL\npointer.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 28 Aug 2020 10:37:08 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 11:04 PM Noah Misch <noah@leadboat.com> wrote:\n> On Thu, Aug 27, 2020 at 11:11:47PM -0400, Tom Lane wrote:\n> > It's surely not hard to visualize cases where necessary code could\n> > be optimized away if the compiler thinks it's entitled to assume\n> > such things.\n>\n> Good point.\n\nI wonder if we should start using -fno-delete-null-pointer-checks:\n\nhttps://lkml.org/lkml/2018/4/4/601\n\nThis may not be strictly relevant to the discussion, but I was\nreminded of it just now and thought I'd mention it.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 28 Aug 2020 09:54:42 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "Em sex., 28 de ago. de 2020 às 03:04, Noah Misch <noah@leadboat.com>\nescreveu:\n\n> On Thu, Aug 27, 2020 at 11:11:47PM -0400, Tom Lane wrote:\n> > Noah Misch <noah@leadboat.com> writes:\n> > > On Thu, Aug 27, 2020 at 12:57:20PM -0400, Alvaro Herrera wrote:\n> > >> Looks legit, and at least per commit 13bba02271dc we do fix such\n> things,\n> > >> even if it's useless in practice.\n> > >> Given that no buildfarm member has ever complained, this exercise\n> seems\n> > >> pretty pointless.\n> >\n> > > Later decision to stop changing such code:\n> > >\n> https://postgr.es/m/flat/e1a26ece-7057-a234-d87e-4ce1cdc9eaa0@2ndquadrant.com\n>\n> > I think that the actual risk here has to do not with\n> > what memcmp() might do, but with what gcc might do in code surrounding\n> > the call, once it's armed with the assumption that any pointer we pass\n> > to memcmp() could not be null. See\n> >\n> >\n> https://www.postgresql.org/message-id/flat/BLU437-SMTP48A5B2099E7134AC6BE7C7F2A10%40phx.gbl\n> >\n> > It's surely not hard to visualize cases where necessary code could\n> > be optimized away if the compiler thinks it's entitled to assume\n> > such things.\n>\n> Good point. We could pick from a few levels of concern:\n>\n> - No concern: reject changes serving only to remove this class of\n> deviation.\n> This is today's policy.\n> - Medium concern: accept fixes, but the buildfarm continues not to break in\n> the face of new deviations. This will make some code uglier, but we'll\n> be\n> ready against some compiler growing the optimization you describe.\n> - High concern: I remove -fno-sanitize=nonnull-attribute from buildfarm\n> member\n> thorntail. In addition to the drawback of the previous level, this will\n> create urgent work for committers introducing new deviations (or\n> introducing\n> test coverage that unearths old deviations). 
This is our current\n> response\n> to Valgrind complaints, for example.\n>\nMaybe in this specific case, the policy could be ignored, this change does\nnot hurt.\n\n--- a/src/backend/access/transam/clog.c\n+++ b/src/backend/access/transam/clog.c\n@@ -293,7 +293,7 @@ TransactionIdSetPageStatus(TransactionId xid, int\nnsubxids,\n * sub-XIDs and all of the XIDs for which we're adjusting clog should be\n * on the same page. Check those conditions, too.\n */\n- if (all_xact_same_page && xid == MyProc->xid &&\n+ if (all_xact_same_page && subxids && xid == MyProc->xid &&\n nsubxids <= THRESHOLD_SUBTRANS_CLOG_OPT &&\n nsubxids == MyProc->subxidStatus.count &&\n memcmp(subxids, MyProc->subxids.xids,\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 28 Aug 2020 15:54:57 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I wonder if we should start using -fno-delete-null-pointer-checks:\n> https://lkml.org/lkml/2018/4/4/601\n> This may not be strictly relevant to the discussion, but I was\n> reminded of it just now and thought I'd mention it.\n\nHmm. gcc 8.3 defines this as:\n\n Assume that programs cannot safely dereference null pointers, and\n that no code or data element resides at address zero. This option\n enables simple constant folding optimizations at all optimization\n levels. In addition, other optimization passes in GCC use this\n flag to control global dataflow analyses that eliminate useless\n checks for null pointers; these assume that a memory access to\n address zero always results in a trap, so that if a pointer is\n checked after it has already been dereferenced, it cannot be null.\n\nAFAICS, that's a perfectly valid assumption for our usage. I can see why\nthe kernel might not want it, but we set things up whenever possible to\nensure that dereferencing NULL would crash.\n\nHowever, while grepping the manual for that I also found\n\n'-Wnull-dereference'\n Warn if the compiler detects paths that trigger erroneous or\n undefined behavior due to dereferencing a null pointer. This\n option is only active when '-fdelete-null-pointer-checks' is\n active, which is enabled by optimizations in most targets. The\n precision of the warnings depends on the optimization options used.\n\nI wonder whether turning that on would find anything interesting.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 29 Aug 2020 12:36:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
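The pattern Tom quotes from the gcc manual ("if a pointer is checked after it has already been dereferenced, it cannot be null") can be illustrated with a tiny sketch. The function name `first_element` is made up for illustration, not anything in the PostgreSQL tree; the point is only that the NULL test sits after the dereference, which is exactly the kind of check `-fdelete-null-pointer-checks` licenses the optimizer to drop:

```c
#include <stddef.h>

/* Sketch of a check that -fdelete-null-pointer-checks can elide:
 * p is dereferenced before it is tested, so under the assumption
 * that dereferencing NULL always traps, the "p == NULL" branch is
 * provably unreachable and the optimizer may delete it. */
static int
first_element(const int *p)
{
    int v = *p;        /* dereference happens first */

    if (p == NULL)     /* dead code under the optimizer's assumption */
        return -1;
    return v;
}
```

The semantics are unchanged for valid (non-NULL) callers, which is why the assumption is considered safe for code that, like PostgreSQL, arranges for NULL dereferences to crash.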
{
"msg_contents": "More troubles with undefined-behavior.\n\nThis type of code can leaves overflow:\nvar = (cast) (expression);\ndiff = (int32) (id1 - id2);\n\nSee:\n diff64 = ((long int) d1 - (long int) d2);\n diff64=-4294901760\n\n#include <stdio.h>\n#include <stdint.h>\n\nint main()\n{\n unsigned int d1 = 3;\n unsigned int d2 = 4294901763;\n unsigned int diffu32 = 0;\n unsigned long int diffu64 = 0;\n unsigned long int diff64 = 0;\n int32_t diff = 0;\n\n diff = (int32_t) (d1 - d2);\n diff64 = ((long int) d1 - (long int) d2);\n diffu32 = (unsigned int) (d1 - d2);\n diffu64 = (unsigned long int) (d1 - d2);\nprintf(\"d1=%u\\n\", d1);\nprintf(\"d2=%u\\n\", d2);\nprintf(\"diff=%d\\n\", diff);\nprintf(\"diffu32=%u\\n\", diffu32);\nprintf(\"diff64=%ld\\n\", diff64);\nprintf(\"diffu64=%lu\\n\", diffu64);\n\n return 0;\n}\n\noutput:\nd1=3\nd2=4294901763\ndiff=65536\ndiffu32=65536\ndiff64=-4294901760\ndiffu64=65536\n\n(With Ubuntu 64 bits + clang 10)\ntransam.c:311:22: runtime error: unsigned integer overflow: 3 - 4294901763\ncannot be represented in type 'unsigned int'\nTransactionIdPrecedes(TransactionId id1, TransactionId id2)\n{\n/*\n* If either ID is a permanent XID then we can just do unsigned\n* comparison. If both are normal, do a modulo-2^32 comparison.\n*/\nint32 diff;\n\nif (!TransactionIdIsNormal(id1) || !TransactionIdIsNormal(id2))\nreturn (id1 < id2);\n\ndiff = (int32) (id1 - id2);\nreturn (diff < 0);\n}\n\nThis works, all time or really with bad numbers can break?\nI would like to know, why doesn't it work?\n\nWith Windows 10 (64 bits) + msvc 2019 (64 bits)\nbool\nTransactionIdPrecedes(TransactionId id1, TransactionId id2)\n{\n/*\n* If either ID is a permanent XID then we can just do unsigned\n* comparison. 
If both are normal, do a modulo-2^32 comparison.\n*/\nint32 diff;\nint64 diff64;\n\nif (!TransactionIdIsNormal(id1) || !TransactionIdIsNormal(id2))\nreturn (id1 < id2);\n\ndiff = (int32) (id1 - id2);\ndiff64 = ((int64) id1 - (int64) id2);\n fprintf(stderr, \"id1=%lu\\n\", id1);\n fprintf(stderr, \"id2=%lu\\n\", id1);\n fprintf(stderr, \"diff32=%ld\\n\", diff);\n fprintf(stderr, \"diff64=%lld\\n\", diff64);\nreturn (diff64 < 0);\n}\n\nid1=498\nid2=498\ndiff32=200000000\ndiff64=-4094967296\n2020-08-31 12:46:30.422 -03 [8908] WARNING: oldest xmin is far in the past\n2020-08-31 12:46:30.422 -03 [8908] HINT: Close open transactions soon to\navoid wraparound problems.\nYou might also need to commit or roll back old prepared transactions, or\ndrop stale replication slots.\n\nid1=4\nid2=4\ndiff32=-494\ndiff64=-494\n\nid1=4\nid2=4\ndiff32=50000000\ndiff64=-4244967296\n2020-08-31 12:46:30.423 -03 [8908] FATAL: found xmin 4 from before\nrelfrozenxid 4244967300\n2020-08-31 12:46:30.423 -03 [8908] CONTEXT: while scanning block 0 and\noffset 1 of relation \"pg_catalog.pg_depend\"\n2020-08-31 12:46:30.423 -03 [8908] STATEMENT: VACUUM FREEZE;\n\nMost of the time:\nid1=498\nid2=498\ndiff32=0\ndiff64=0\nid1=498\nid2=498\ndiff32=0\ndiff64=0\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 31 Aug 2020 13:02:41 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "On 2020-Aug-31, Ranier Vilela wrote:\n\n> More troubles with undefined-behavior.\n> \n> This type of code can leaves overflow:\n> var = (cast) (expression);\n> diff = (int32) (id1 - id2);\n> \n> See:\n> diff64 = ((long int) d1 - (long int) d2);\n> diff64=-4294901760\n\nDid you compile this with gcc -fwrapv?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 31 Aug 2020 13:00:50 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "Em seg., 31 de ago. de 2020 às 14:00, Alvaro Herrera <\nalvherre@2ndquadrant.com> escreveu:\n\n> On 2020-Aug-31, Ranier Vilela wrote:\n>\n> > More troubles with undefined-behavior.\n> >\n> > This type of code can leaves overflow:\n> > var = (cast) (expression);\n> > diff = (int32) (id1 - id2);\n> >\n> > See:\n> > diff64 = ((long int) d1 - (long int) d2);\n> > diff64=-4294901760\n>\n> Did you compile this with gcc -fwrapv?\n>\ngcc 10.2 -O2 -fwrapv\nbool test1()\n{\n unsigned int d1 = 3;\n unsigned int d2 = 4294901763;\n long int diff64 = 0;\n\n diff64 = ((long int) d1 - (long int) d2);\n\n return (diff64 < 0);\n}\n\noutput:\nmov eax, 1\n ret\n\nWhat is a workaround for msvc 2019 (64 bits) and clang 64 bits (linux)?\ntransam.c:311:22: runtime error: unsigned integer overflow: 3 - 4294901763\ncannot be represented in type 'unsigned int'\n\nRanier Vilela\n\nEm seg., 31 de ago. de 2020 às 14:00, Alvaro Herrera <alvherre@2ndquadrant.com> escreveu:On 2020-Aug-31, Ranier Vilela wrote:\n\n> More troubles with undefined-behavior.\n> \n> This type of code can leaves overflow:\n> var = (cast) (expression);\n> diff = (int32) (id1 - id2);\n> \n> See:\n> diff64 = ((long int) d1 - (long int) d2);\n> diff64=-4294901760\n\nDid you compile this with gcc -fwrapv?gcc 10.2 -O2 -fwrapv \nbool test1(){ unsigned int d1 = 3; unsigned int d2 = 4294901763; long int diff64 = 0; diff64 = ((long int) d1 - (long int) d2); return (diff64 < 0);} output:\nmov eax, 1 ret\nWhat is a workaround for msvc 2019 (64 bits) and clang 64 bits (linux)?\ntransam.c:311:22: runtime error: unsigned integer overflow: 3 - 4294901763 cannot be represented in type 'unsigned int' Ranier Vilela",
"msg_date": "Mon, 31 Aug 2020 14:43:56 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "Em seg., 31 de ago. de 2020 às 14:43, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em seg., 31 de ago. de 2020 às 14:00, Alvaro Herrera <\n> alvherre@2ndquadrant.com> escreveu:\n>\n>> On 2020-Aug-31, Ranier Vilela wrote:\n>>\n>> > More troubles with undefined-behavior.\n>> >\n>> > This type of code can leaves overflow:\n>> > var = (cast) (expression);\n>> > diff = (int32) (id1 - id2);\n>> >\n>> > See:\n>> > diff64 = ((long int) d1 - (long int) d2);\n>> > diff64=-4294901760\n>>\n>> Did you compile this with gcc -fwrapv?\n>>\n> gcc 10.2 -O2 -fwrapv\n> bool test1()\n> {\n> unsigned int d1 = 3;\n> unsigned int d2 = 4294901763;\n> long int diff64 = 0;\n>\n> diff64 = ((long int) d1 - (long int) d2);\n>\n> return (diff64 < 0);\n> }\n>\n> output:\n> mov eax, 1\n> ret\n>\n> What is a workaround for msvc 2019 (64 bits) and clang 64 bits (linux)?\n> transam.c:311:22: runtime error: unsigned integer overflow: 3 - 4294901763\n> cannot be represented in type 'unsigned int'\n>\n\nwith Debug:\n#include <stdio.h>\n#include <stdint.h>\n\nbool test1(void)\n{\n unsigned int d1 = 3;\n unsigned int d2 = 4294901763;\n int32_t diff;\n\n diff = (int32_t) (d1 - d2);\n\n return (diff < 0);\n}\n\ngcc 10.2 -g\noutput:\n push rbp\n mov rbp, rsp\n mov DWORD PTR [rbp-4], 3\n mov DWORD PTR [rbp-8], -65533\n mov eax, DWORD PTR [rbp-4]\n sub eax, DWORD PTR [rbp-8]\n mov DWORD PTR [rbp-12], eax\n mov eax, DWORD PTR [rbp-12]\n shr eax, 31\n pop rbp\n ret\n\nit is possible to conclude that:\n1. TransactionIdPrecedes works in release mode, because the compiler treats\nundefined-behavior and corrects it,\ntreating a possible overflow.\n2. TransactionIdPrecedes does not work in debug mode, and overflow occurs.\n3. TransactionID cannot contain the largest possible ID or an invalid ID\n(4294901763) has been generated and passed to TransactionIdPrecedes.\n\nRanier Vilela\n\nEm seg., 31 de ago. de 2020 às 14:43, Ranier Vilela <ranier.vf@gmail.com> escreveu:Em seg., 31 de ago. 
de 2020 às 14:00, Alvaro Herrera <alvherre@2ndquadrant.com> escreveu:On 2020-Aug-31, Ranier Vilela wrote:\n\n> More troubles with undefined-behavior.\n> \n> This type of code can leaves overflow:\n> var = (cast) (expression);\n> diff = (int32) (id1 - id2);\n> \n> See:\n> diff64 = ((long int) d1 - (long int) d2);\n> diff64=-4294901760\n\nDid you compile this with gcc -fwrapv?gcc 10.2 -O2 -fwrapv \nbool test1(){ unsigned int d1 = 3; unsigned int d2 = 4294901763; long int diff64 = 0; diff64 = ((long int) d1 - (long int) d2); return (diff64 < 0);} output:\nmov eax, 1 ret\nWhat is a workaround for msvc 2019 (64 bits) and clang 64 bits (linux)?\ntransam.c:311:22: runtime error: unsigned integer overflow: 3 - 4294901763 cannot be represented in type 'unsigned int' \nwith Debug:\n\n\n#include <stdio.h>#include <stdint.h>bool test1(void){ unsigned int d1 = 3; unsigned int d2 = 4294901763; int32_t diff; diff = (int32_t) (d1 - d2); return (diff < 0);} gcc 10.2 -goutput:\n push rbp mov rbp, rsp mov DWORD PTR [rbp-4], 3 mov DWORD PTR [rbp-8], -65533 mov eax, DWORD PTR [rbp-4] sub eax, DWORD PTR [rbp-8] mov DWORD PTR [rbp-12], eax mov eax, DWORD PTR [rbp-12] shr eax, 31 pop rbp ret\n it is possible to conclude that:1. TransactionIdPrecedes works in release mode, because the compiler treats undefined-behavior and corrects it,treating a possible overflow.2. TransactionIdPrecedes does not work in debug mode, and overflow occurs.3. TransactionID cannot contain the largest possible ID or an invalid ID (4294901763) has been generated and passed to TransactionIdPrecedes.Ranier Vilela",
"msg_date": "Mon, 31 Aug 2020 15:08:49 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "Hi, \n\nOn August 31, 2020 11:08:49 AM PDT, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>Em seg., 31 de ago. de 2020 às 14:43, Ranier Vilela\n><ranier.vf@gmail.com>\n>escreveu:\n>\n>> Em seg., 31 de ago. de 2020 às 14:00, Alvaro Herrera <\n>> alvherre@2ndquadrant.com> escreveu:\n>>\n>>> On 2020-Aug-31, Ranier Vilela wrote:\n>>>\n>>> > More troubles with undefined-behavior.\n>>> >\n>>> > This type of code can leaves overflow:\n>>> > var = (cast) (expression);\n>>> > diff = (int32) (id1 - id2);\n>>> >\n>>> > See:\n>>> > diff64 = ((long int) d1 - (long int) d2);\n>>> > diff64=-4294901760\n>>>\n>>> Did you compile this with gcc -fwrapv?\n>>>\n>> gcc 10.2 -O2 -fwrapv\n>> bool test1()\n>> {\n>> unsigned int d1 = 3;\n>> unsigned int d2 = 4294901763;\n>> long int diff64 = 0;\n>>\n>> diff64 = ((long int) d1 - (long int) d2);\n>>\n>> return (diff64 < 0);\n>> }\n>>\n>> output:\n>> mov eax, 1\n>> ret\n>>\n>> What is a workaround for msvc 2019 (64 bits) and clang 64 bits\n>(linux)?\n>> transam.c:311:22: runtime error: unsigned integer overflow: 3 -\n>4294901763\n>> cannot be represented in type 'unsigned int'\n\nUnsigned integer overflow is well defined in the standard. So I don't understand what this is purporting to warn about.\n\nAndres\n\nRegards,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Mon, 31 Aug 2020 11:42:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "On Mon, Aug 31, 2020 at 11:42 AM Andres Freund <andres@anarazel.de> wrote:\n> Unsigned integer overflow is well defined in the standard. So I don't understand what this is purporting to warn about.\n\nPresumably it's simply warning that the value -4294901760 (i.e. the\nresult of 3 - 4294901763) cannot be faithfully represented as an\nunsigned int. This is true, of course. It's just not relevant.\n\nI'm pretty sure that UBSan does not actually state that this is\nundefined behavior. At least Ranier's sample output didn't seem to\nindicate it.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 31 Aug 2020 12:38:51 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-31 12:38:51 -0700, Peter Geoghegan wrote:\n> On Mon, Aug 31, 2020 at 11:42 AM Andres Freund <andres@anarazel.de> wrote:\n> > Unsigned integer overflow is well defined in the standard. So I don't understand what this is purporting to warn about.\n> \n> Presumably it's simply warning that the value -4294901760 (i.e. the\n> result of 3 - 4294901763) cannot be faithfully represented as an\n> unsigned int. This is true, of course. It's just not relevant.\n> \n> I'm pretty sure that UBSan does not actually state that this is\n> undefined behavior. At least Ranier's sample output didn't seem to\n> indicate it.\n\nWell, my point is that there's no point in discussing unsigned integer\noverflow, since it's precisely specified. And hence I don't understand\nwhat we're discussing in this sub-thread.\n\nhttps://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html says:\n\n> -fsanitize=unsigned-integer-overflow: Unsigned integer overflow, where\n> the result of an unsigned integer computation cannot be represented in\n> its type. Unlike signed integer overflow, this is not undefined\n> behavior, but it is often unintentional. This sanitizer does not check\n> for lossy implicit conversions performed before such a computation\n> (see -fsanitize=implicit-conversion).\n\nSo it seems Rainier needs to turn this test off, because it actually is\nintentional.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 31 Aug 2020 13:05:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "Em seg., 31 de ago. de 2020 às 16:39, Peter Geoghegan <pg@bowt.ie> escreveu:\n\n> On Mon, Aug 31, 2020 at 11:42 AM Andres Freund <andres@anarazel.de> wrote:\n> > Unsigned integer overflow is well defined in the standard. So I don't\n> understand what this is purporting to warn about.\n>\n> Presumably it's simply warning that the value -4294901760 (i.e. the\n> result of 3 - 4294901763) cannot be faithfully represented as an\n> unsigned int. This is true, of course. It's just not relevant.\n>\n> I'm pretty sure that UBSan does not actually state that this is\n> undefined behavior. At least Ranier's sample output didn't seem to\n> indicate it.\n>\n4294901763 can not store at unsigned int (TransactionID is uint32_t).\nTransactionId id2 at TransactionIdPrecedes already has an overflow, before\nanything is done.\n\nRanier Vilela\n\nEm seg., 31 de ago. de 2020 às 16:39, Peter Geoghegan <pg@bowt.ie> escreveu:On Mon, Aug 31, 2020 at 11:42 AM Andres Freund <andres@anarazel.de> wrote:\n> Unsigned integer overflow is well defined in the standard. So I don't understand what this is purporting to warn about.\n\nPresumably it's simply warning that the value -4294901760 (i.e. the\nresult of 3 - 4294901763) cannot be faithfully represented as an\nunsigned int. This is true, of course. It's just not relevant.\n\nI'm pretty sure that UBSan does not actually state that this is\nundefined behavior. At least Ranier's sample output didn't seem to\nindicate it.\n4294901763 can not store at unsigned int (TransactionID is uint32_t).\nTransactionId id2 at \nTransactionIdPrecedes\n\nalready has an overflow, before anything is done.Ranier Vilela",
"msg_date": "Mon, 31 Aug 2020 17:28:58 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "Em seg., 31 de ago. de 2020 às 17:05, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> On 2020-08-31 12:38:51 -0700, Peter Geoghegan wrote:\n> > On Mon, Aug 31, 2020 at 11:42 AM Andres Freund <andres@anarazel.de>\n> wrote:\n> > > Unsigned integer overflow is well defined in the standard. So I don't\n> understand what this is purporting to warn about.\n> >\n> > Presumably it's simply warning that the value -4294901760 (i.e. the\n> > result of 3 - 4294901763) cannot be faithfully represented as an\n> > unsigned int. This is true, of course. It's just not relevant.\n> >\n> > I'm pretty sure that UBSan does not actually state that this is\n> > undefined behavior. At least Ranier's sample output didn't seem to\n> > indicate it.\n>\n> Well, my point is that there's no point in discussing unsigned integer\n> overflow, since it's precisely specified. And hence I don't understand\n> what we're discussing in this sub-thread.\n>\n> https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html says:\n>\n> > -fsanitize=unsigned-integer-overflow: Unsigned integer overflow, where\n> > the result of an unsigned integer computation cannot be represented in\n> > its type. Unlike signed integer overflow, this is not undefined\n> > behavior, but it is often unintentional. This sanitizer does not check\n> > for lossy implicit conversions performed before such a computation\n> > (see -fsanitize=implicit-conversion).\n>\n> So it seems Rainier needs to turn this test off, because it actually is\n> intentional.\n>\nNo problem.\nIf intentional, the code at TransactionIdPrecedes, already knows that\noverflow can occur\nand trusts that the compiler will save it.\n\nRanier Vilela\n\nEm seg., 31 de ago. de 2020 às 17:05, Andres Freund <andres@anarazel.de> escreveu:Hi,\n\nOn 2020-08-31 12:38:51 -0700, Peter Geoghegan wrote:\n> On Mon, Aug 31, 2020 at 11:42 AM Andres Freund <andres@anarazel.de> wrote:\n> > Unsigned integer overflow is well defined in the standard. 
So I don't understand what this is purporting to warn about.\n> \n> Presumably it's simply warning that the value -4294901760 (i.e. the\n> result of 3 - 4294901763) cannot be faithfully represented as an\n> unsigned int. This is true, of course. It's just not relevant.\n> \n> I'm pretty sure that UBSan does not actually state that this is\n> undefined behavior. At least Ranier's sample output didn't seem to\n> indicate it.\n\nWell, my point is that there's no point in discussing unsigned integer\noverflow, since it's precisely specified. And hence I don't understand\nwhat we're discussing in this sub-thread.\n\nhttps://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html says:\n\n> -fsanitize=unsigned-integer-overflow: Unsigned integer overflow, where\n> the result of an unsigned integer computation cannot be represented in\n> its type. Unlike signed integer overflow, this is not undefined\n> behavior, but it is often unintentional. This sanitizer does not check\n> for lossy implicit conversions performed before such a computation\n> (see -fsanitize=implicit-conversion).\n\nSo it seems Rainier needs to turn this test off, because it actually is\nintentional.No problem.If intentional, the code at \nTransactionIdPrecedes, already knows that overflow can occurand trusts that the compiler will save it.Ranier Vilela",
"msg_date": "Mon, 31 Aug 2020 17:35:14 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "Hi,\n\nOn 2020-08-31 17:35:14 -0300, Ranier Vilela wrote:\n> Em seg., 31 de ago. de 2020 �s 17:05, Andres Freund <andres@anarazel.de>\n> escreveu:\n> > So it seems Rainier needs to turn this test off, because it actually is\n> > intentional.\n> >\n> No problem.\n> If intentional, the code at TransactionIdPrecedes, already knows that\n> overflow can occur\n> and trusts that the compiler will save it.\n\nI don't know what you mean with \"saving\" it. Again, unsigned integer\noverflow is well specified in C. All that's needed is for the compiler\nto implement normal C.\n\n\n",
"msg_date": "Mon, 31 Aug 2020 13:50:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
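The modulo-2^32 ordering idiom this sub-thread keeps returning to can be isolated in a few lines. This is a simplified sketch, not PostgreSQL's actual `TransactionIdPrecedes` (the special-casing of permanent XIDs is omitted); the unsigned subtraction wraps, which is well defined in C, and that wraparound is exactly what `-fsanitize=unsigned-integer-overflow` flags as "often unintentional" even though it is intentional here:

```c
#include <stdint.h>
#include <stdbool.h>

/* Simplified sketch of the modulo-2^32 comparison: the unsigned
 * subtraction wraps (well defined in C), and reinterpreting the
 * 32-bit difference as signed (wrapping on two's-complement
 * targets) orders the ids on a circular number space. */
static bool
xid_precedes_sketch(uint32_t id1, uint32_t id2)
{
    int32_t diff = (int32_t) (id1 - id2);

    return diff < 0;
}
```

With the thread's own values, `id1 = 3` and `id2 = 4294901763`, the wrapped difference is 65536, i.e. positive, so 3 is treated as the *later* id on the circle even though it is numerically smaller.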
{
"msg_contents": "On Sat, Aug 29, 2020 at 12:36:42PM -0400, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > I wonder if we should start using -fno-delete-null-pointer-checks:\n> > https://lkml.org/lkml/2018/4/4/601\n> > This may not be strictly relevant to the discussion, but I was\n> > reminded of it just now and thought I'd mention it.\n> \n> Hmm. gcc 8.3 defines this as:\n> \n> Assume that programs cannot safely dereference null pointers, and\n> that no code or data element resides at address zero. This option\n> enables simple constant folding optimizations at all optimization\n> levels. In addition, other optimization passes in GCC use this\n> flag to control global dataflow analyses that eliminate useless\n> checks for null pointers; these assume that a memory access to\n> address zero always results in a trap, so that if a pointer is\n> checked after it has already been dereferenced, it cannot be null.\n> \n> AFAICS, that's a perfectly valid assumption for our usage. I can see why\n> the kernel might not want it, but we set things up whenever possible to\n> ensure that dereferencing NULL would crash.\n\nWe do assume dereferencing NULL would crash, but we also assume this\noptimization doesn't happen:\n\n=== opt-null.c\n#include <string.h>\n#include <unistd.h>\n\nint my_memcpy(void *dest, const void *src, size_t n)\n{\n#ifndef REMOVE_MEMCPY\n memcpy(dest, src, n);\n#endif\n if (src)\n\tpause();\n return 0;\n}\n===\n\n$ gcc --version\ngcc (Debian 8.3.0-6) 8.3.0\nCopyright (C) 2018 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. 
There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\n$ diff -sU1 <(gcc -O2 -fno-delete-null-pointer-checks -S -o- opt-null.c) <(gcc -O2 -S -o- opt-null.c)\n--- /dev/fd/63 2020-09-03 19:23:53.206864378 -0700\n+++ /dev/fd/62 2020-09-03 19:23:53.206864378 -0700\n@@ -8,13 +8,8 @@\n .cfi_startproc\n- pushq %rbx\n+ subq $8, %rsp\n .cfi_def_cfa_offset 16\n- .cfi_offset 3, -16\n- movq %rsi, %rbx\n call memcpy@PLT\n- testq %rbx, %rbx\n- je .L2\n call pause@PLT\n-.L2:\n xorl %eax, %eax\n- popq %rbx\n+ addq $8, %rsp\n .cfi_def_cfa_offset 8\n$ diff -sU1 <(gcc -DREMOVE_MEMCPY -O2 -fno-delete-null-pointer-checks -S -o- opt-null.c) <(gcc -DREMOVE_MEMCPY -O2 -S -o- opt-null.c)\nFiles /dev/fd/63 and /dev/fd/62 are identical\n\n\nSo yes, it would be reasonable to adopt -fno-delete-null-pointer-checks and/or\nremove -fno-sanitize=nonnull-attribute from buildfarm member thorntail.\n\n> However, while grepping the manual for that I also found\n> \n> '-Wnull-dereference'\n> Warn if the compiler detects paths that trigger erroneous or\n> undefined behavior due to dereferencing a null pointer. This\n> option is only active when '-fdelete-null-pointer-checks' is\n> active, which is enabled by optimizations in most targets. The\n> precision of the warnings depends on the optimization options used.\n> \n> I wonder whether turning that on would find anything interesting.\n\nPromising. Sadly, it doesn't warn for the above test case.\n\n\n",
"msg_date": "Thu, 3 Sep 2020 19:36:48 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> We do assume dereferencing NULL would crash, but we also assume this\n> optimization doesn't happen:\n\n> #ifndef REMOVE_MEMCPY\n> memcpy(dest, src, n);\n> #endif\n> if (src)\n> \tpause();\n\n> [ gcc believes the if-test is unnecessary ]\n\nHm. I would not blame that on -fdelete-null-pointer-checks per se.\nRather the problem is what we were touching on before: the dubious\nbut standard-approved assumption that memcpy's arguments cannot be\nnull.\n\nIf there actually are places where this is a problem, I think we\nneed to fix it by doing\n\n\tif (n > 0)\n\t memcpy(dest, src, n);\n\nso that the compiler can no longer assume that src!=NULL even\nwhen n is zero. I'd still leave -fdelete-null-pointer-checks\nenabled, because it can make valid and useful optimizations in\nother cases. (Besides that, it's far from clear that disabling\nthat flag would suppress all bad consequences of the assumption\nthat memcpy's arguments aren't null.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Sep 2020 22:53:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
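Tom's suggested guard can be shown as a minimal sketch; the helper name `copy_maybe_empty` is hypothetical, not anything in the PostgreSQL tree. Skipping the call when `n == 0` means a NULL `src` never reaches `memcpy`, so the compiler cannot use memcpy's nonnull argument attribute to conclude `src != NULL` and delete a later NULL check:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical helper showing the guarded-call pattern: memcpy is
 * only invoked when there is something to copy, so src == NULL with
 * n == 0 stays within defined behavior and gives the optimizer no
 * license to assume src is non-NULL afterwards. */
static void
copy_maybe_empty(void *dest, const void *src, size_t n)
{
    if (n > 0)
        memcpy(dest, src, n);
}
```

A caller can then safely write `copy_maybe_empty(buf, NULL, 0)` for the empty case that made gcc elide the `if (src)` test in Noah's example.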
{
"msg_contents": "On Thu, Sep 3, 2020 at 7:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hm. I would not blame that on -fdelete-null-pointer-checks per se.\n> Rather the problem is what we were touching on before: the dubious\n> but standard-approved assumption that memcpy's arguments cannot be\n> null.\n\nIsn't it both, together? That is, it's the combination of that\nassumption alongside -fdelete-null-pointer-checks's actual willingness\nto propagate the assumption.\n\n> I'd still leave -fdelete-null-pointer-checks\n> enabled, because it can make valid and useful optimizations in\n> other cases.\n\nIs there any evidence that that's true? I wouldn't assume that the gcc\npeople exercised good judgement here.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 3 Sep 2020 20:01:54 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Thu, Sep 3, 2020 at 7:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'd still leave -fdelete-null-pointer-checks\n>> enabled, because it can make valid and useful optimizations in\n>> other cases.\n\n> Is there any evidence that that's true? I wouldn't assume that the gcc\n> people exercised good judgement here.\n\nI have not actually dug for examples, but the sort of situation where\nI think it would help us is that macros or static inlines could contain\nnull tests that can be proven useless at particular call sites due to\nsurrounding code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Sep 2020 23:06:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
},
{
"msg_contents": "On Thu, Sep 03, 2020 at 10:53:37PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > We do assume dereferencing NULL would crash, but we also assume this\n> > optimization doesn't happen:\n> \n> > #ifndef REMOVE_MEMCPY\n> > memcpy(dest, src, n);\n> > #endif\n> > if (src)\n> > \tpause();\n> \n> > [ gcc believes the if-test is unnecessary ]\n> \n> > So yes, it would be reasonable to adopt -fno-delete-null-pointer-checks and/or\n> > remove -fno-sanitize=nonnull-attribute from buildfarm member thorntail.\n\n> If there actually are places where this is a problem, I think we\n> need to fix it by doing\n> \n> \tif (n > 0)\n> \t memcpy(dest, src, n);\n> \n> so that the compiler can no longer assume that src!=NULL even\n> when n is zero. I'd still leave -fdelete-null-pointer-checks\n> enabled, because it can make valid and useful optimizations in\n> other cases. (Besides that, it's far from clear that disabling\n> that flag would suppress all bad consequences of the assumption\n> that memcpy's arguments aren't null.)\n\nYour proposal is what I had in mind when I wrote \"remove\n-fno-sanitize=nonnull-attribute from buildfarm member thorntail\", and I agree\nit's attractive. In particular, gcc is not likely to be the last compiler to\nattempt such an optimization, and other compilers may not offer\n-fno-delete-null-pointer-checks or equivalent.\n\nOne might argue for -fno-delete-null-pointer-checks in addition, because it\nwould defend against cases that sanitizers miss. I tend to think that's\noverkill, but maybe not. I suppose one could do an audit, diffing the\ngenerated code with and without the option.\n\n\n",
"msg_date": "Thu, 3 Sep 2020 20:10:49 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Clang UndefinedBehaviorSanitize (Postgres14) Detected\n undefined-behavior"
}
] |
[
{
"msg_contents": "I am getting following error in configuration.log of installation . Please help\n\nIs there any pkg-config path that needs to be configured/set in environment for this to work.\n\nconfigure:8172: checking for libxml-2.0 >= 2.6.23\nconfigure:8179: $PKG_CONFIG --exists --print-errors \"libxml-2.0 >= 2.6.23\"\nPackage libxml-2.0 was not found in the pkg-config search path.\nPerhaps you should add the directory containing `libxml-2.0.pc'\nto the PKG_CONFIG_PATH environment variable\nNo package 'libxml-2.0' found\nconfigure:8182: $? = 1\nconfigure:8196: $PKG_CONFIG --exists --print-errors \"libxml-2.0 >= 2.6.23\"\nPackage libxml-2.0 was not found in the pkg-config search path.\nPerhaps you should add the directory containing `libxml-2.0.pc'\nto the PKG_CONFIG_PATH environment variable\nNo package 'libxml-2.0' found\nconfigure:8199: $? = 1\nconfigure:8213: result: no\nNo package 'libxml-2.0' found\n\nI later tried setting the some environment variables like\n\n\nexport XML2_CFLAGS='/usr/lib64/'\nexport XML2_CONFIG='/usr/lib64/'\nexport XML2_LIBS='/usr/lib64/libxml2\n\nCreated soft link as well libxml2.so -> /usr/lib64/libxml2.so.2.9.1 but now I getting the below error .\n\nconfigure: error: header file <libxml/parser.h> is required for XML support\n\n From config.log\n\nconfigure:13266: checking libxml/parser.h usability\nconfigure:13266: gcc -std=gnu99 -c -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -D_GNU_SOURCE conftest.c >&5\nconftest.c:96:27: fatal error: libxml/parser.h: No such file or directory\n#include <libxml/parser.h>\n ^\ncompilation terminated.\nconfigure:13266: $? 
= 1\nconfigure: failed program was:\n\n\n\n\nThanks and Regards,\nSACHIN KHANNA\n212 Basis Offshore DBA\nOffice : 204058624\nCell : 9049522511\nEmail: khanna.sachin@corp.sysco.com<mailto:khanna.sachin@corp.sysco.com>\n\nITIL V3 (F), AWS Certified Solution Archtect\nInfosys Technologies Limited (r) | PUNE\n\n\n\n\n\n\n\n\n\n\n \n \nI am getting following error in configuration.log of installation . Please help\n \nIs there any pkg-config path that needs to be configured/set in environment for this to work.\n\n \nconfigure:8172: checking for libxml-2.0 >= 2.6.23\nconfigure:8179: $PKG_CONFIG --exists --print-errors \"libxml-2.0 >= 2.6.23\"\nPackage libxml-2.0 was not found in the pkg-config search path.\nPerhaps you should add the directory containing `libxml-2.0.pc'\nto the PKG_CONFIG_PATH environment variable\nNo package 'libxml-2.0' found\nconfigure:8182: $? = 1\nconfigure:8196: $PKG_CONFIG --exists --print-errors \"libxml-2.0 >= 2.6.23\"\nPackage libxml-2.0 was not found in the pkg-config search path.\nPerhaps you should add the directory containing `libxml-2.0.pc'\nto the PKG_CONFIG_PATH environment variable\nNo package 'libxml-2.0' found\nconfigure:8199: $? 
= 1\nconfigure:8213: result: no\nNo package 'libxml-2.0' found\n \nI later tried setting the some environment variables like \n\n \n \nexport XML2_CFLAGS='/usr/lib64/'\nexport XML2_CONFIG='/usr/lib64/'\nexport XML2_LIBS='/usr/lib64/libxml2\n \nCreated soft link as well libxml2.so -> /usr/lib64/libxml2.so.2.9.1 but now I getting the below error .\n\n \nconfigure: error: header file <libxml/parser.h> is required for XML support\n \nFrom config.log \n \nconfigure:13266: checking libxml/parser.h usability\nconfigure:13266: gcc -std=gnu99 -c -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security\n -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -D_GNU_SOURCE conftest.c >&5\nconftest.c:96:27: fatal error: libxml/parser.h: No such file or directory\n#include <libxml/parser.h>\n ^\ncompilation terminated.\nconfigure:13266: $? = 1\nconfigure: failed program was:\n \n \n \n \nThanks and Regards, \nSACHIN KHANNA\n212 Basis Offshore DBA\nOffice : 204058624\nCell : 9049522511\nEmail: khanna.sachin@corp.sysco.com\n\n\nITIL V3 (F), AWS Certified Solution Archtect\nInfosys Technologies Limited ® | PUNE",
"msg_date": "Thu, 27 Aug 2020 08:09:33 +0000",
"msg_from": "\"Khanna, Sachin 000\" <Sachin.Khanna@sysco.com>",
"msg_from_op": true,
"msg_subject": "Help needed configuring postgreSQL with xml support "
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 8:17 PM Khanna, Sachin 000\n<Sachin.Khanna@sysco.com> wrote:\n> I am getting following error in configuration.log of installation . Please help\n\nYou didn't mention what operating system this is, but, for example, if\nit's Debian, Ubuntu or similar you might need to install libxml2-dev\nand pkg-config for --with-libxml to work.\n\n\n",
"msg_date": "Thu, 27 Aug 2020 20:25:42 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Help needed configuring postgreSQL with xml support"
},
{
"msg_contents": "In addition to what Thomas said, I would also recommend you to refer\nto the description of --with-libxml command line option provided in\nthe postgres installation-procedure page - [1].\n\n[1] - https://www.postgresql.org/docs/12/install-procedure.html\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Thu, Aug 27, 2020 at 1:56 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Thu, Aug 27, 2020 at 8:17 PM Khanna, Sachin 000\n> <Sachin.Khanna@sysco.com> wrote:\n> > I am getting following error in configuration.log of installation . Please help\n>\n> You didn't mention what operating system this is, but, for example, if\n> it's Debian, Ubuntu or similar you might need to install libxml2-dev\n> and pkg-config for --with-libxml to work.\n>\n>\n\n\n",
"msg_date": "Thu, 27 Aug 2020 14:12:07 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Help needed configuring postgreSQL with xml support"
},
{
"msg_contents": "Thanks for the quick response. I am using RED HAT linux ( ppc64-le ).\r\n\r\n\r\n Operating System: Red Hat Enterprise Linux Server 7.9 Beta (Maipo)\r\n CPE OS Name: cpe:/o:redhat:enterprise_linux:7.9:beta:server\r\n Kernel: Linux 3.10.0-1136.el7.ppc64le\r\n Architecture: ppc64-le\r\nYou have new mail in /var/spool/mail/root\r\n\r\nI have gone into the details given for xml support and have installed the required rpm and tried to setup the env variables as well like \r\n\r\nexport XML2_CFLAGS='/usr/lib64/'\r\nexport XML2_CONFIG='/usr/lib64/'\r\nexport XML2_LIBS='/usr/lib64/libxml2.so.2.9.1'\r\n\r\nrpm -qi libxml2-2.9.1-6.el7.4.ppc64le\r\nName : libxml2\r\nVersion : 2.9.1\r\nRelease : 6.el7.4\r\nArchitecture: ppc64le\r\nInstall Date: Thu 09 Jul 2020 02:55:33 PM CDT\r\nGroup : Development/Libraries\r\nSize : 2594518\r\n\r\nyum list installed |grep xml\r\nlibxml2.ppc64le 2.9.1-6.el7.4 @anaconda/7.9\r\nlibxml2-python.ppc64le 2.9.1-6.el7.4 @anaconda/7.9\r\npython-lxml.ppc64le 3.2.1-4.el7 @anaconda/7.9\r\nxml-common.noarch 0.6.3-39.el7 @anaconda/7.9\r\nxml2.ppc64le 0.5-7.el7 @epel\r\nxmlrpc-c.ppc64le 1.32.5-1905.svn2451.el7 @anaconda/7.9\r\nxmlrpc-c-client.ppc64le 1.32.5-1905.svn2451.el7 @anaconda/7.9\r\n\r\n\r\n--with-libxml\r\nBuild with libxml2, enabling SQL/XML support. Libxml2 version 2.6.23 or later is required for this feature.\r\n\r\nTo detect the required compiler and linker options, PostgreSQL will query pkg-config, if that is installed and knows about libxml2. Otherwise the program xml2-config, which is installed by libxml2, will be used if it is found. 
Use of pkg-config is preferred, because it can deal with multi-architecture installations better.\r\n\r\nTo use a libxml2 installation that is in an unusual location, you can set pkg-config-related environment variables (see its documentation), or set the environment variable XML2_CONFIG to point to the xml2-config program belonging to the libxml2 installation, or set the variables XML2_CFLAGS and XML2_LIBS. (If pkg-config is installed, then to override its idea of where libxml2 is you must either set XML2_CONFIG or set both XML2_CFLAGS and XML2_LIBS to nonempty strings.)\r\n\r\n\r\nThanks and Regards,\r\nSACHIN KHANNA\r\n212 BASIS DBA TEAM OFFSHORE\r\nOffice : 204058624\r\nCell : 9049522511\r\n\r\n-----Original Message-----\r\nFrom: Ashutosh Sharma <ashu.coek88@gmail.com> \r\nSent: Thursday, August 27, 2020 2:12 PM\r\nTo: Khanna, Sachin 000 <Sachin.Khanna@sysco.com>\r\nCc: pgsql-hackers@postgresql.org; Thomas Munro <thomas.munro@gmail.com>\r\nSubject: Re: Help needed configuring postgreSQL with xml support\r\n\r\n
In addition to what Thomas said, I would also recommend you to refer to the description of --with-libxml command line option provided in the postgres installation-procedure page - [1].\r\n\r\n[1] - https://www.postgresql.org/docs/12/install-procedure.html\r\n\r\n--\r\nWith Regards,\r\nAshutosh Sharma\r\nEnterpriseDB:http://www.enterprisedb.com\r\n\r\nOn Thu, Aug 27, 2020 at 1:56 PM Thomas Munro <thomas.munro@gmail.com> wrote:\r\n>\r\n> On Thu, Aug 27, 2020 at 8:17 PM Khanna, Sachin 000 \r\n> <Sachin.Khanna@sysco.com> wrote:\r\n> > I am getting following error in configuration.log of installation . \r\n> > Please help\r\n>\r\n> You didn't mention what operating system this is, but, for example, if \r\n> it's Debian, Ubuntu or similar you might need to install libxml2-dev \r\n> and pkg-config for --with-libxml to work.\r\n>\r\n>\r\n",
"msg_date": "Thu, 27 Aug 2020 09:55:52 +0000",
"msg_from": "Sachin Khanna <Sachin_Khanna@infosys.com>",
"msg_from_op": false,
"msg_subject": "RE: Help needed configuring postgreSQL with xml support"
}
] |
[
{
"msg_contents": "Procedures currently don't allow OUT parameters. The reason for this is \nthat at the time procedures were added (PG11), some of the details of \nhow this should work were unclear and the issue was postponed. I am now \nintending to resolve this.\n\nAFAICT, OUT parameters in _functions_ are not allowed per the SQL \nstandard, so whatever PostgreSQL is doing there at the moment is mostly \nour own invention. By contrast, I am here intending to make OUT \nparameters in procedures work per SQL standard and be compatible with \nthe likes of PL/SQL.\n\nThe main difference is that for procedures, OUT parameters are part of \nthe signature and need to be specified as part of the call. This makes \nsense for nested calls in PL/pgSQL like this:\n\nCREATE PROCEDURE test_proc(IN a int, OUT b int)\nLANGUAGE plpgsql\nAS $$\nBEGIN\n b := a * 2;\nEND;\n$$;\n\nDO $$\nDECLARE _a int; _b int;\nBEGIN\n _a := 10;\n CALL test_proc(_a, _b);\n RAISE NOTICE '_a: %, _b: %', _a, _b;\nEND\n$$;\n\nFor a top-level direct call, you can pass whatever you want, since all \nOUT parameters are presented as initially NULL to the procedure code. \nSo you could just pass NULL, as in CALL test_proc(5, NULL).\n\nThe code changes to make this happen are not as significant as I had \ninitially feared. Most of the patch is expanded documentation and \nadditional tests. In some cases, I changed the terminology from \"input \nparameters\" to \"signature parameters\" to make the difference clearer. \nOverall, while this introduces some additional conceptual complexity, \nthe way it works is pretty obvious in the end, and people porting from \nother systems will find it working as expected.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 27 Aug 2020 10:34:09 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Support for OUT parameters in procedures"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 4:34 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> For a top-level direct call, you can pass whatever you want, since all\n> OUT parameters are presented as initially NULL to the procedure code.\n> So you could just pass NULL, as in CALL test_proc(5, NULL).\n\nIs that actually how other systems work? I would think that people\nwould expect to pass, say, a package variable, and expect that it will\nget updated.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 27 Aug 2020 09:56:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for OUT parameters in procedures"
},
{
"msg_contents": "On 2020-08-27 15:56, Robert Haas wrote:\n> On Thu, Aug 27, 2020 at 4:34 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> For a top-level direct call, you can pass whatever you want, since all\n>> OUT parameters are presented as initially NULL to the procedure code.\n>> So you could just pass NULL, as in CALL test_proc(5, NULL).\n> \n> Is that actually how other systems work? I would think that people\n> would expect to pass, say, a package variable, and expect that it will\n> get updated.\n\nThe handling of results of SQL statements executed at the top level \n(a.k.a. direct SQL) is implementation-specific and varies widely in \npractice. More interesting in practice, in terms of functionality and \nalso compatibility, are nested calls in PL/pgSQL as well as integration \nin JDBC.\n\nWe already support INOUT parameters in procedures, so the method of \nreturning the value of output parameters after the CALL already exists. \nThis patch doesn't touch that at all, really. If we had or would add \nother places to put those results, such as package variables, then they \ncould be added independently of this patch.\n\nOf course, feedback from those more knowledgeable in other systems than \nme would be welcome.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 28 Aug 2020 08:04:04 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Support for OUT parameters in procedures"
},
{
"msg_contents": "On Fri, Aug 28, 2020 at 2:04 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> The handling of results of SQL statements executed at the top level\n> (a.k.a. direct SQL) is implementation-specific and varies widely in\n> practice. More interesting in practice, in terms of functionality and\n> also compatibility, are nested calls in PL/pgSQL as well as integration\n> in JDBC.\n\nI agree that driver integration, and in particular JDBC integration,\nis important and needs some thought. I don't think it horribly\nmatters, with a feature like this, what shows up when people type\nstuff into psql. Whatever it is, people will get used to it. But when\nthey interact through a driver, it's different. It is no good\ninventing things, either in PostgreSQL or in the JDBC driver for\nPostgreSQL, that make PostgreSQL behave differently from every other\ndatabase they use. I don't know exactly how we get to a good outcome\nhere, but I think it's worth some careful consideration.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 28 Aug 2020 09:30:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for OUT parameters in procedures"
},
{
"msg_contents": "On 8/27/20 4:34 AM, Peter Eisentraut wrote:\n> Procedures currently don't allow OUT parameters. The reason for this\n> is that at the time procedures were added (PG11), some of the details\n> of how this should work were unclear and the issue was postponed. I\n> am now intending to resolve this.\n>\n> AFAICT, OUT parameters in _functions_ are not allowed per the SQL\n> standard, so whatever PostgreSQL is doing there at the moment is\n> mostly our own invention. By contrast, I am here intending to make\n> OUT parameters in procedures work per SQL standard and be compatible\n> with the likes of PL/SQL.\n>\n> The main difference is that for procedures, OUT parameters are part of\n> the signature and need to be specified as part of the call. This\n> makes sense for nested calls in PL/pgSQL like this:\n>\n> CREATE PROCEDURE test_proc(IN a int, OUT b int)\n> LANGUAGE plpgsql\n> AS $$\n> BEGIN\n> b := a * 2;\n> END;\n> $$;\n>\n> DO $$\n> DECLARE _a int; _b int;\n> BEGIN\n> _a := 10;\n> CALL test_proc(_a, _b);\n> RAISE NOTICE '_a: %, _b: %', _a, _b;\n> END\n> $$;\n>\n> For a top-level direct call, you can pass whatever you want, since all\n> OUT parameters are presented as initially NULL to the procedure code.\n> So you could just pass NULL, as in CALL test_proc(5, NULL).\n>\n> The code changes to make this happen are not as significant as I had\n> initially feared. Most of the patch is expanded documentation and\n> additional tests. In some cases, I changed the terminology from\n> \"input parameters\" to \"signature parameters\" to make the difference\n> clearer. Overall, while this introduces some additional conceptual\n> complexity, the way it works is pretty obvious in the end, and people\n> porting from other systems will find it working as expected.\n>\n\n\nI've reviewed this, and I think it's basically fine. 
I've made an\naddition that adds a test module that shows how this can be called from\nlibpq - that should be helpful (I hope) for driver writers.\n\n\nA combined patch with the original plus my test suite is attached.\n\n\nI think this can be marked RFC.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 28 Sep 2020 12:43:39 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for OUT parameters in procedures"
},
{
"msg_contents": "po 28. 9. 2020 v 18:43 odesílatel Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> napsal:\n\n>\n> On 8/27/20 4:34 AM, Peter Eisentraut wrote:\n> > Procedures currently don't allow OUT parameters. The reason for this\n> > is that at the time procedures were added (PG11), some of the details\n> > of how this should work were unclear and the issue was postponed. I\n> > am now intending to resolve this.\n> >\n> > AFAICT, OUT parameters in _functions_ are not allowed per the SQL\n> > standard, so whatever PostgreSQL is doing there at the moment is\n> > mostly our own invention. By contrast, I am here intending to make\n> > OUT parameters in procedures work per SQL standard and be compatible\n> > with the likes of PL/SQL.\n> >\n> > The main difference is that for procedures, OUT parameters are part of\n> > the signature and need to be specified as part of the call. This\n> > makes sense for nested calls in PL/pgSQL like this:\n> >\n> > CREATE PROCEDURE test_proc(IN a int, OUT b int)\n> > LANGUAGE plpgsql\n> > AS $$\n> > BEGIN\n> > b := a * 2;\n> > END;\n> > $$;\n> >\n> > DO $$\n> > DECLARE _a int; _b int;\n> > BEGIN\n> > _a := 10;\n> > CALL test_proc(_a, _b);\n> > RAISE NOTICE '_a: %, _b: %', _a, _b;\n> > END\n> > $$;\n> >\n> > For a top-level direct call, you can pass whatever you want, since all\n> > OUT parameters are presented as initially NULL to the procedure code.\n> > So you could just pass NULL, as in CALL test_proc(5, NULL).\n>\n\nThis was an important issue if I remember well. Passing mandatory NULL as\nOUT arguments solves this issue.\nI fully agree so OUT arguments are part of the procedure's signature.\nUnfortunately, there is another difference\nfrom functions, but I don't think so there is a better solution, and we\nshould live with it. I think it can work well.\n\n>\n> > The code changes to make this happen are not as significant as I had\n> > initially feared. Most of the patch is expanded documentation and\n> > additional tests. 
In some cases, I changed the terminology from\n> > \"input parameters\" to \"signature parameters\" to make the difference\n> > clearer. Overall, while this introduces some additional conceptual\n> > complexity, the way it works is pretty obvious in the end, and people\n> > porting from other systems will find it working as expected.\n> >\n>\n>\n> I've reviewed this, and I think it's basically fine. I've made an\n> addition that adds a test module that shows how this can be called from\n> libpq - that should be helpful (I hope) for driver writers.\n>\n>\n> A combined patch with the original plus my test suite is attached.\n>\n>\nI found one issue. The routine for selecting function or procedure based on\nsignature should be fixed.\n\nCREATE OR REPLACE PROCEDURE public.procp(OUT integer)\n LANGUAGE plpgsql\nAS $procedure$\nBEGIN\n $1 := 10;\nEND;\n$procedure$\n\nDO\n$$\nDECLARE n numeric;\nBEGIN\n CALL procp(n);\n RAISE NOTICE '%', n;\nEND;\n$$;\nERROR: procedure procp(numeric) does not exist\nLINE 1: CALL procp(n)\n ^\nHINT: No procedure matches the given name and argument types. You might\nneed to add explicit type casts.\nQUERY: CALL procp(n)\nCONTEXT: PL/pgSQL function inline_code_block line 4 at CALL\n\nI think this example should work.\n\nBut it doesn't work now for INOUT, and this fix will not be easy, so it\nshould be solved as a separate issue. 
This features are complete and useful\nnow, and it can be fixed later without problems with compatibility issues.\n\nAnother issue are using polymorphic arguments\n\npostgres=# create or replace procedure px(anyelement, out anyelement)\nas $$\nbegin\n $2 := $1;\nend;\n$$ language plpgsql;\n\npostgres=# call px(10, null);\nERROR: cannot display a value of type anyelement\n\nbut inside plpgsql it works\ndo $$\ndeclare xx int;\nbegin\n call px(10, xx);\n raise notice '%', xx;\nend;\n$$;\n\n\n> I think this can be marked RFC.\n>\n\n+1\n\nPavel\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n>\n> --\n> Andrew Dunstan https://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>",
"msg_date": "Tue, 29 Sep 2020 08:23:20 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for OUT parameters in procedures"
},
{
"msg_contents": "On 2020-09-29 08:23, Pavel Stehule wrote:\n> This was an important issue if I remember well. Passing mandatory NULL \n> as OUT arguments solves this issue.\n> I fully agree so OUT arguments are part of the procedure's signature. \n> Unfortunately, there is another difference\n> from functions, but I don't think so there is a better solution, and we \n> should live with it. I think it can work well.\n\nThis has been committed.\n\n> I found one issue. The routine for selecting function or procedure based \n> on signature should be fixed.\n> \n> CREATE OR REPLACE PROCEDURE public.procp(OUT integer)\n> LANGUAGE plpgsql\n> AS $procedure$\n> BEGIN\n> $1 := 10;\n> END;\n> $procedure$\n> \n> DO\n> $$\n> DECLARE n numeric;\n> BEGIN\n> CALL procp(n);\n> RAISE NOTICE '%', n;\n> END;\n> $$;\n> ERROR: procedure procp(numeric) does not exist\n> LINE 1: CALL procp(n)\n> ^\n> HINT: No procedure matches the given name and argument types. You might \n> need to add explicit type casts.\n> QUERY: CALL procp(n)\n> CONTEXT: PL/pgSQL function inline_code_block line 4 at CALL\n\nThis is normal; there is no implicit cast from numeric to int. The same \nerror happens if you call a function foo(int) with foo(42::numeric).\n\n> postgres=# create or replace procedure px(anyelement, out anyelement)\n> as $$\n> begin\n> $2 := $1;\n> end;\n> $$ language plpgsql;\n> \n> postgres=# call px(10, null);\n> ERROR: cannot display a value of type anyelement\n> \n> but inside plpgsql it works\n> do $$\n> declare xx int;\n> begin\n> call px(10, xx);\n> raise notice '%', xx;\n> end;\n> $$;\n\nThis might be worth further investigation, but since it happens also \nwith INOUT parameters, it seems orthogonal to this patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 5 Oct 2020 11:46:02 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Support for OUT parameters in procedures"
},
{
"msg_contents": "Just saw this on hackers. Anyon care to comment ?\n\nDave Cramer\nwww.postgres.rocks\n\n\n---------- Forwarded message ---------\nFrom: Robert Haas <robertmhaas@gmail.com>\nDate: Fri, 28 Aug 2020 at 09:31\nSubject: Re: Support for OUT parameters in procedures\nTo: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>\nCc: pgsql-hackers <pgsql-hackers@postgresql.org>\n\n\nOn Fri, Aug 28, 2020 at 2:04 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> The handling of results of SQL statements executed at the top level\n> (a.k.a. direct SQL) is implementation-specific and varies widely in\n> practice. More interesting in practice, in terms of functionality and\n> also compatibility, are nested calls in PL/pgSQL as well as integration\n> in JDBC.\n\nI agree that driver integration, and in particular JDBC integration,\nis important and needs some thought. I don't think it horribly\nmatters, with a feature like this, what shows up when people type\nstuff into psql. Whatever it is, people will get used to it. But when\nthey interact through a driver, it's different. It is no good\ninventing things, either in PostgreSQL or in the JDBC driver for\nPostgreSQL, that make PostgreSQL behave differently from every other\ndatabase they use. I don't know exactly how we get to a good outcome\nhere, but I think it's worth some careful consideration.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nJust saw this on hackers. 
Anyon care to comment ?Dave Cramerwww.postgres.rocks---------- Forwarded message ---------From: Robert Haas <robertmhaas@gmail.com>Date: Fri, 28 Aug 2020 at 09:31Subject: Re: Support for OUT parameters in proceduresTo: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>Cc: pgsql-hackers <pgsql-hackers@postgresql.org>On Fri, Aug 28, 2020 at 2:04 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> The handling of results of SQL statements executed at the top level\n> (a.k.a. direct SQL) is implementation-specific and varies widely in\n> practice. More interesting in practice, in terms of functionality and\n> also compatibility, are nested calls in PL/pgSQL as well as integration\n> in JDBC.\n\nI agree that driver integration, and in particular JDBC integration,\nis important and needs some thought. I don't think it horribly\nmatters, with a feature like this, what shows up when people type\nstuff into psql. Whatever it is, people will get used to it. But when\nthey interact through a driver, it's different. It is no good\ninventing things, either in PostgreSQL or in the JDBC driver for\nPostgreSQL, that make PostgreSQL behave differently from every other\ndatabase they use. I don't know exactly how we get to a good outcome\nhere, but I think it's worth some careful consideration.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 5 Oct 2020 06:54:23 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Fwd: Support for OUT parameters in procedures"
},
{
"msg_contents": "po 5. 10. 2020 v 11:46 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> On 2020-09-29 08:23, Pavel Stehule wrote:\n> > This was an important issue if I remember well. Passing mandatory NULL\n> > as OUT arguments solves this issue.\n> > I fully agree so OUT arguments are part of the procedure's signature.\n> > Unfortunately, there is another difference\n> > from functions, but I don't think so there is a better solution, and we\n> > should live with it. I think it can work well.\n>\n> This has been committed.\n>\n> > I found one issue. The routine for selecting function or procedure based\n> > on signature should be fixed.\n> >\n> > CREATE OR REPLACE PROCEDURE public.procp(OUT integer)\n> > LANGUAGE plpgsql\n> > AS $procedure$\n> > BEGIN\n> > $1 := 10;\n> > END;\n> > $procedure$\n> >\n> > DO\n> > $$\n> > DECLARE n numeric;\n> > BEGIN\n> > CALL procp(n);\n> > RAISE NOTICE '%', n;\n> > END;\n> > $$;\n> > ERROR: procedure procp(numeric) does not exist\n> > LINE 1: CALL procp(n)\n> > ^\n> > HINT: No procedure matches the given name and argument types. You might\n> > need to add explicit type casts.\n> > QUERY: CALL procp(n)\n> > CONTEXT: PL/pgSQL function inline_code_block line 4 at CALL\n>\n> This is normal; there is no implicit cast from numeric to int. 
The same\n> error happens if you call a function foo(int) with foo(42::numeric).\n>\n\nthis is OUT argument - so direction is reversed - and implicit cast from\nint to numeric exists.\n\n\n> > postgres=# create or replace procedure px(anyelement, out anyelement)\n> > as $$\n> > begin\n> > $2 := $1;\n> > end;\n> > $$ language plpgsql;\n> >\n> > postgres=# call px(10, null);\n> > ERROR: cannot display a value of type anyelement\n> >\n> > but inside plpgsql it works\n> > do $$\n> > declare xx int;\n> > begin\n> > call px(10, xx);\n> > raise notice '%', xx;\n> > end;\n> > $$;\n>\n> This might be worth further investigation, but since it happens also\n> with INOUT parameters, it seems orthogonal to this patch.\n>\n\nyes - this breaks using varchar against text argument, although these types\nare almost identical.\n\n\n\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>",
"msg_date": "Mon, 5 Oct 2020 13:39:05 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for OUT parameters in procedures"
},
{
"msg_contents": "Jdbi got a feature request for such parameters a while back:\nhttps://github.com/jdbi/jdbi/issues/1606\n\nThe user uses Oracle which I don't really care to install. When I tried to\nimplement the feature using Postgres,\nI found the driver support too lacking to proceed.\n\nSo there's some interest out there in making it work, and I can volunteer\nto at least smoke test it with my test cases...\n\nOn Mon, Oct 5, 2020 at 3:54 AM Dave Cramer <davecramer@postgres.rocks>\nwrote:\n\n> Just saw this on hackers. Anyon care to comment ?\n>\n> Dave Cramer\n> www.postgres.rocks\n>\n>\n> ---------- Forwarded message ---------\n> From: Robert Haas <robertmhaas@gmail.com>\n> Date: Fri, 28 Aug 2020 at 09:31\n> Subject: Re: Support for OUT parameters in procedures\n> To: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>\n> Cc: pgsql-hackers <pgsql-hackers@postgresql.org>\n>\n>\n> On Fri, Aug 28, 2020 at 2:04 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > The handling of results of SQL statements executed at the top level\n> > (a.k.a. direct SQL) is implementation-specific and varies widely in\n> > practice. More interesting in practice, in terms of functionality and\n> > also compatibility, are nested calls in PL/pgSQL as well as integration\n> > in JDBC.\n>\n> I agree that driver integration, and in particular JDBC integration,\n> is important and needs some thought. I don't think it horribly\n> matters, with a feature like this, what shows up when people type\n> stuff into psql. Whatever it is, people will get used to it. But when\n> they interact through a driver, it's different. It is no good\n> inventing things, either in PostgreSQL or in the JDBC driver for\n> PostgreSQL, that make PostgreSQL behave differently from every other\n> database they use. 
I don't know exactly how we get to a good outcome\n> here, but I think it's worth some careful consideration.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n",
"msg_date": "Mon, 5 Oct 2020 09:16:54 -0700",
"msg_from": "Steven Schlansker <stevenschlansker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for OUT parameters in procedures"
},
{
"msg_contents": "On Mon, 5 Oct 2020 at 12:17, Steven Schlansker <stevenschlansker@gmail.com>\nwrote:\n\n> Jdbi got a feature request for such parameters a while back:\n> https://github.com/jdbi/jdbi/issues/1606\n>\n> The user uses Oracle which I don't really care to install. When I tried\n> to implement the feature using Postgres,\n> I found the driver support too lacking to proceed.\n>\n> So there's some interest out there in making it work, and I can volunteer\n> to at least smoke test it with my test cases...\n>\n\nSure, lets see how broken it is right now.\n\n\nDave Cramer\nwww.postgres.rocks\n\n\n>\n> On Mon, Oct 5, 2020 at 3:54 AM Dave Cramer <davecramer@postgres.rocks>\n> wrote:\n>\n>> Just saw this on hackers. Anyon care to comment ?\n>>\n>> Dave Cramer\n>> www.postgres.rocks\n>>\n>>\n>> ---------- Forwarded message ---------\n>> From: Robert Haas <robertmhaas@gmail.com>\n>> Date: Fri, 28 Aug 2020 at 09:31\n>> Subject: Re: Support for OUT parameters in procedures\n>> To: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>\n>> Cc: pgsql-hackers <pgsql-hackers@postgresql.org>\n>>\n>>\n>> On Fri, Aug 28, 2020 at 2:04 AM Peter Eisentraut\n>> <peter.eisentraut@2ndquadrant.com> wrote:\n>> > The handling of results of SQL statements executed at the top level\n>> > (a.k.a. direct SQL) is implementation-specific and varies widely in\n>> > practice. More interesting in practice, in terms of functionality and\n>> > also compatibility, are nested calls in PL/pgSQL as well as integration\n>> > in JDBC.\n>>\n>> I agree that driver integration, and in particular JDBC integration,\n>> is important and needs some thought. I don't think it horribly\n>> matters, with a feature like this, what shows up when people type\n>> stuff into psql. Whatever it is, people will get used to it. But when\n>> they interact through a driver, it's different. 
It is no good\n>> inventing things, either in PostgreSQL or in the JDBC driver for\n>> PostgreSQL, that make PostgreSQL behave differently from every other\n>> database they use. I don't know exactly how we get to a good outcome\n>> here, but I think it's worth some careful consideration.\n>>\n>> --\n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>>\n>>\n",
"msg_date": "Mon, 5 Oct 2020 12:24:15 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: Support for OUT parameters in procedures"
},
{
"msg_contents": "\nOn 10/5/20 12:24 PM, Dave Cramer wrote:\n>\n>\n> On Mon, 5 Oct 2020 at 12:17, Steven Schlansker\n> <stevenschlansker@gmail.com <mailto:stevenschlansker@gmail.com>> wrote:\n>\n> Jdbi got a feature request for such parameters a while back:\n> https://github.com/jdbi/jdbi/issues/1606\n>\n> The user uses Oracle which I don't really care to install. When I\n> tried to implement the feature using Postgres,\n> I found the driver support too lacking to proceed.\n>\n> So there's some interest out there in making it work, and I can\n> volunteer to at least smoke test it with my test cases...\n>\n>\n> Sure, lets see how broken it is right now.\n>\n>\n>\n\nWe're working on it. It's a bit tricky, but we need to get it working,\nfor sure. The main thing is that the driver needs to send some type\nother than VOID for the OUT param. Minimally that can be UNKNOWN, but it\nshould probably reflect the type set in registerOutParameter().\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Mon, 5 Oct 2020 14:59:32 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Support for OUT parameters in procedures"
},
{
"msg_contents": "On Mon, 5 Oct 2020 at 14:59, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 10/5/20 12:24 PM, Dave Cramer wrote:\n> >\n> >\n> > On Mon, 5 Oct 2020 at 12:17, Steven Schlansker\n> > <stevenschlansker@gmail.com <mailto:stevenschlansker@gmail.com>> wrote:\n> >\n> > Jdbi got a feature request for such parameters a while back:\n> > https://github.com/jdbi/jdbi/issues/1606\n> >\n> > The user uses Oracle which I don't really care to install. When I\n> > tried to implement the feature using Postgres,\n> > I found the driver support too lacking to proceed.\n> >\n> > So there's some interest out there in making it work, and I can\n> > volunteer to at least smoke test it with my test cases...\n> >\n> >\n> > Sure, lets see how broken it is right now.\n> >\n> >\n> >\n>\n> We're working on it. It's a bit tricky, but we need to get it working,\n> for sure. The main thing is that the driver needs to send some type\n> other than VOID for the OUT param. Minimally that can be UNKNOWN, but it\n> should probably reflect the type set in registerOutParameter().\n>\n> I would think we run into the normal issues with things like timestamps\nand dates with and without time zones\n\nThanks,\n\nDave Cramer\nwww.postgres.rocks\n",
"msg_date": "Mon, 5 Oct 2020 18:21:21 -0400",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: Support for OUT parameters in procedures"
}
]
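The calling convention debated in the thread above — OUT parameters being part of the procedure's signature, a placeholder NULL supplied from top-level SQL, and a matching-type variable required from PL/pgSQL — can be sketched as follows (a minimal example, assuming a server with the committed OUT-parameter support; `get_answer` is a hypothetical name):

```sql
-- A procedure with one OUT parameter; the OUT argument is part of the signature.
CREATE PROCEDURE get_answer(OUT result integer)
LANGUAGE plpgsql
AS $$
BEGIN
    result := 42;
END;
$$;

-- From top-level SQL, a mandatory NULL placeholder stands in for the OUT argument.
CALL get_answer(NULL);

-- From PL/pgSQL, a variable receives the value; per the discussion above,
-- the variable's type must match the declared parameter type (an integer
-- OUT parameter cannot be bound to a numeric variable).
DO $$
DECLARE n integer;
BEGIN
    CALL get_answer(n);
    RAISE NOTICE 'result = %', n;
END;
$$;
```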
[
{
"msg_contents": "Hello hackers,\r\n\r\nWhile working on two phase related issues, I found something related to two phase could be optimized.\r\n\r\n1. The current implementation decouples PREPARE and COMMIT/ABORT PREPARE a lot. This is flexible, but if\r\n PREPARE & COMMIT/ABORT mostly happens on the same backend we could use the cache mechanism to\r\n speed up, e.g.\r\n\r\n a. FinishPreparedTransaction()->LockGXact(gid, user)\r\n for (i = 0; i < TwoPhaseState->numPrepXacts; i++)\r\n find the gxact that matches gid \r\n \r\n For this we can cache the gxact during PREPARE and use that for a fast path, i.e. if the cached gxact\r\n matches gid we do not need to walk through the gxact array. By the way, if the gxact array is large this\r\n will be a separate performance issue (use shared-memory hash table if needed?).\r\n\r\n b. FinishPreparedTransaction() reads the PREPARE information from either state file (stored during checkpoint)\r\n or wal file. We could cache the content during PREPARE, i.e. in EndPrepare() then in FinishPreparedTransaction()\r\n we can avoid reading the state file or the wal file.\r\n\r\n It is possible that some databases based on Postgres two phase might not want the cache, e.g. if PREPARE\r\n backend is always different than the COMMIT/ABORT PREPARE backend (I do not know what database is\r\n designing like this though), but gxact caching is almost no overhead and for b we could use ifdef to guard the\r\n PREPARE wal data copying code.\r\n\r\n The two optimizations are easy and small. I've verified on Greenplum database (based on Postgres 12).\r\n\r\n2. wal content duplication between PREPARE and COMMIT/ABORT PREPARE\r\n \r\n See the below COMMIT PREPARE function call. Those hdr->* have existed in PREPARE wal also. We do\r\n not need them in the COMMIT PREPARE wal also.
During recovery, we could load this information (both\r\n COMMIT and ABORT) into memory and in COMMIT/ABORT PREPARE redo we use the corresponding data.\r\n\r\n RecordTransactionCommitPrepared(xid,\r\n hdr->nsubxacts, children,\r\n hdr->ncommitrels, commitrels,\r\n hdr->ninvalmsgs, invalmsgs,\r\n hdr->initfileinval, gid);\r\n\r\n One drawback of the change is that it might involve non-trivial changes.\r\n\r\n3. About gid, current gid is defined as a char[]. I'm wondering if we should define an opaque type and let some\r\n databases implement their own gid types using callbacks. Typically if I want to use 64-bit distributed xid as gid,\r\n current code is not that performance & storage friendly (e.g. still need to use strcmp to find gxact in LockGXact).\r\n We may implement a default implementation as char[]. gid is not widely used so the change seems to\r\n be small (interfaces of copy, comparison, conversion from string to internal gid type for the PREPARE statement, etc.)\r\n\r\nAny thoughts?\r\n\r\nRegards,\r\nPaul\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Thu, 27 Aug 2020 08:39:01 +0000",
"msg_from": "Paul Guo <guopa@vmware.com>",
"msg_from_op": true,
"msg_subject": "Some two phase optimization ideas"
}
]
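For reference, the PREPARE / COMMIT PREPARED flow whose fast path is discussed above looks like this at the SQL level (a minimal sketch; it assumes `max_prepared_transactions > 0`, and `accounts` and the gid `'txn-0001'` are example names):

```sql
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- hypothetical table
PREPARE TRANSACTION 'txn-0001';  -- PREPARE: state is written to WAL / state file

-- Possibly later, possibly from a different backend; this is where
-- FinishPreparedTransaction()->LockGXact() searches the gxact array
-- for the entry matching this gid.
COMMIT PREPARED 'txn-0001';
-- or: ROLLBACK PREPARED 'txn-0001';
```

When PREPARE and COMMIT PREPARED run on the same backend, caching the gxact found at PREPARE time would let the commit skip the linear scan.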
[
{
"msg_contents": "Hi All,\n\nWhile reviewing the patch for pg_surgery contrib module - [1], Asim\nPraveen suggested that it would be better to replace the check for\naccess method OID with handler OID. Otherwise, if someone creates a\nnew AM using the AM handler that is originally supported for e.g.\n\"heap_tableam_handler\" and if this new AM is used to create a table,\nthen one cannot perform surgery on such tables because we have checks\nfor access method OID which would reject this new AM as we only allow\nheap AM. For e.g. if we do this:\n\ncreate access method myam type table handler heap_tableam_handler;\ncreate table mytable (…) using myam;\n\nAnd use an access method OID check, we won't be able to perform\nsurgery on mytable created above because it isn't the heap table\nalthough its table structure is actually heap.\n\nThis problem won't be there if the check for access method OID is\nreplaced with handler OID. I liked this suggestion from Asim and did\nthe changes accordingly. However, while browsing the code for other\ncontrib modules, I could find such checks present in some of the\ncontrib modules like pgstattuple, pageinspect and pgrowlocks as well.\nSo, just wondering if we should be doing similar changes in these\ncontrib modules also.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/1D56CEFD-E195-4E6B-B870-3383E3E8C65E%40vmware.com\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 27 Aug 2020 15:07:31 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Should we replace the checks for access method OID with handler OID?"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 5:37 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> While reviewing the patch for pg_surgery contrib module - [1], Asim\n> Praveen suggested that it would be better to replace the check for\n> access method OID with handler OID. Otherwise, if someone creates a\n> new AM using the AM handler that is originally supported for e.g.\n> \"heap_tableam_handler\" and if this new AM is used to create a table,\n> then one cannot perform surgery on such tables because we have checks\n> for access method OID which would reject this new AM as we only allow\n> heap AM. For e.g. if we do this:\n>\n> create access method myam type table handler heap_tableam_handler;\n> create table mytable (…) using myam;\n>\n> And use an access method OID check, we won't be able to perform\n> surgery on mytable created above because it isn't the heap table\n> although its table structure is actually heap.\n\nThe only reason I can see why it would make sense to do this sort of\nthing is if you wanted to create a new AM for testing purposes which\nbehaves like some existing AM but is technically a different AM. And\nif you did that, then I guess the change you are proposing would make\nit behave more like it's the same thing after all, which seems like it\nmight be missing the point.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 27 Aug 2020 11:50:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we replace the checks for access method OID with handler\n OID?"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 9:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Aug 27, 2020 at 5:37 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > While reviewing the patch for pg_surgery contrib module - [1], Asim\n> > Praveen suggested that it would be better to replace the check for\n> > access method OID with handler OID. Otherwise, if someone creates a\n> > new AM using the AM handler that is originally supported for e.g.\n> > \"heap_tableam_handler\" and if this new AM is used to create a table,\n> > then one cannot perform surgery on such tables because we have checks\n> > for access method OID which would reject this new AM as we only allow\n> > heap AM. For e.g. if we do this:\n> >\n> > create access method myam type table handler heap_tableam_handler;\n> > create table mytable (…) using myam;\n> >\n> > And use an access method OID check, we won't be able to perform\n> > surgery on mytable created above because it isn't the heap table\n> > although its table structure is actually heap.\n>\n> The only reason I can see why it would make sense to do this sort of\n> thing is if you wanted to create a new AM for testing purposes which\n> behaves like some existing AM but is technically a different AM. And\n> if you did that, then I guess the change you are proposing would make\n> it behave more like it's the same thing after all, which seems like it\n> might be missing the point.\n>\n\nOkay, understood.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 28 Aug 2020 09:10:07 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we replace the checks for access method OID with handler\n OID?"
}
]
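The scenario the thread starts from — a user-defined access method that reuses the built-in heap handler — can be reproduced as below (a sketch; `myam` and `mytable` are the example names from the thread):

```sql
CREATE ACCESS METHOD myam TYPE TABLE HANDLER heap_tableam_handler;
CREATE TABLE mytable (id int) USING myam;

-- A check on the access method OID sees two distinct AMs,
-- but both rows resolve to the same handler function:
SELECT amname, amhandler::regproc
FROM pg_am
WHERE amname IN ('heap', 'myam');
-- Both rows show heap_tableam_handler as the handler, which is what a
-- handler-OID check would accept and an AM-OID check would reject.
```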
[
{
"msg_contents": "Hi\n\nI am new to postgreSQL , I am trying to install the same with XML support but it is giving below error on configuration.\n\n./configure --prefix=/opt/postgresql-12.3/pqsql --with-libxml --datadir=/home/postgres/ --with-includes=/usr/lib64/\n\nchecking for libxml/parser.h... no\nconfigure: error: header file <libxml/parser.h> is required for XML support\n\n\n\nI am using below RHEL flavor and have installed below xml package for support\n\n Operating System: Red Hat Enterprise Linux Server 7.9 Beta (Maipo)\n CPE OS Name: cpe:/o:redhat:enterprise_linux:7.9:beta:server\n Kernel: Linux 3.10.0-1136.el7.ppc64le\n Architecture: ppc64-le\n\n\nrpm -qi libxml2-2.9.1-6.el7.4.ppc64le\nName : libxml2\nVersion : 2.9.1\nRelease : 6.el7.4\nArchitecture: ppc64le\nInstall Date: Thu 09 Jul 2020 02:55:33 PM CDT\nGroup : Development/Libraries\nSize : 2594518\nLicense : MIT\nSignature : RSA/SHA256, Tue 26 Nov 2019 08:05:06 AM CST, Key ID 199e2f91fd431d51\nSource RPM : libxml2-2.9.1-6.el7.4.src.rpm\nBuild Date : Tue 26 Nov 2019 07:21:34 AM CST\nBuild Host : ppc-029.build.eng.bos.redhat.com\nRelocations : (not relocatable)\nPackager : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>\nVendor : Red Hat, Inc.\nURL : http://xmlsoft.org/\nSummary : Library providing XML and HTML support\nDescription :\nThis library allows to manipulate XML files. It includes support\nto read, modify and write XML and HTML files. There is DTDs support\nthis includes parsing and validation even with complex DtDs, either\nat parse time or later once the document has been modified. The output\ncan be a simple SAX stream or and in-memory DOM like representations.\nIn this case one can use the built-in XPath and XPointer implementation\nto select sub nodes or ranges. A flexible Input/Output mechanism is\navailable, with existing HTTP and FTP modules and combined to an\nURI library.\n\n\nls -ltr /usr/lib64/ | grep -i xml\n-rwxr-xr-x. 
1 root root 68528 May 19 2015 libxmlrpc_util.so.3.32\n-rwxr-xr-x. 1 root root 136488 May 19 2015 libxmlrpc.so.3.32\n-rwxr-xr-x. 1 root root 68488 May 19 2015 libxmlrpc_server.so.3.32\n-rwxr-xr-x. 1 root root 68336 May 19 2015 libxmlrpc_server_cgi.so.3.32\n-rwxr-xr-x. 1 root root 68416 May 19 2015 libxmlrpc_server_abyss.so.3.32\n-rwxr-xr-x. 1 root root 69720 May 19 2015 libxmlrpc_client.so.3.32\n-rwxr-xr-x. 1 root root 138240 May 19 2015 libxmlrpc_abyss.so.3.32\n-rwxr-xr-x. 1 root root 2271136 Nov 26 2019 libxml2.so.2.9.1\n-rwxr-xr-x. 1 root root 403952 Dec 6 2019 libQtXml.so.4.8.7\n-rwxr-xr-x. 1 root root 5621696 Dec 6 2019 libQtXmlPatterns.so.4.8.7\n-rwxr-xr-x. 1 root root 336816 Feb 19 2020 libQt5Xml.so.5.9.7\nlrwxrwxrwx. 1 root root 16 Jul 9 14:55 libxml2.so.2 -> libxml2.so.2.9.1\nlrwxrwxrwx. 1 root root 17 Jul 9 14:55 libxmlrpc.so.3 -> libxmlrpc.so.3.32\nlrwxrwxrwx. 1 root root 23 Jul 9 14:55 libxmlrpc_abyss.so.3 -> libxmlrpc_abyss.so.3.32\nlrwxrwxrwx. 1 root root 22 Jul 9 14:55 libxmlrpc_util.so.3 -> libxmlrpc_util.so.3.32\nlrwxrwxrwx. 1 root root 24 Jul 9 14:55 libxmlrpc_server.so.3 -> libxmlrpc_server.so.3.32\nlrwxrwxrwx. 1 root root 28 Jul 9 14:55 libxmlrpc_server_cgi.so.3 -> libxmlrpc_server_cgi.so.3.32\nlrwxrwxrwx. 1 root root 30 Jul 9 14:55 libxmlrpc_server_abyss.so.3 -> libxmlrpc_server_abyss.so.3.32\nlrwxrwxrwx. 1 root root 24 Jul 9 14:56 libxmlrpc_client.so.3 -> libxmlrpc_client.so.3.32\nlrwxrwxrwx. 1 root root 18 Jul 9 14:56 libQt5Xml.so.5.9 -> libQt5Xml.so.5.9.7\nlrwxrwxrwx. 1 root root 18 Jul 9 14:56 libQt5Xml.so.5 -> libQt5Xml.so.5.9.7\nlrwxrwxrwx. 1 root root 17 Jul 21 15:51 libQtXml.so.4.8 -> libQtXml.so.4.8.7\nlrwxrwxrwx. 1 root root 17 Jul 21 15:51 libQtXml.so.4 -> libQtXml.so.4.8.7\nlrwxrwxrwx. 1 root root 25 Jul 21 15:51 libQtXmlPatterns.so.4.8 -> libQtXmlPatterns.so.4.8.7\nlrwxrwxrwx. 1 root root 25 Jul 21 15:51 libQtXmlPatterns.so.4 -> libQtXmlPatterns.so.4.8.7\nlrwxrwxrwx. 
1 root root 27 Aug 27 02:52 libxml2.so -> /usr/lib64/libxml2.so.2.9.1\n\n\nThanks and Regards,\nSACHIN KHANNA\n212 BASIS DBA TEAM OFFSHORE\nOffice : 204058624\nCell : 9049522511\nEmail: sachin.khanna@sysco.com<mailto:sachin.khanna@sysco.com>\nInfosys Technologies Limited (r) | PUNE\n",
"msg_date": "Thu, 27 Aug 2020 11:51:09 +0000",
"msg_from": "Sachin Khanna <Sachin_Khanna@infosys.com>",
"msg_from_op": true,
"msg_subject": "Please help for error ( file <libxml/parser.h> is required for XML\n support )"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 6:30 PM Sachin Khanna <Sachin_Khanna@infosys.com> wrote:\n>\n> Hi\n>\n>\n>\n> I am new to postgreSQL , I am trying to install the same with XML support but it is giving below error on configuration.\n>\n>\n>\n> ./configure --prefix=/opt/postgresql-12.3/pqsql --with-libxml --datadir=/home/postgres/ --with-includes=/usr/lib64/\n>\n\nIt seems like your include path is pointing to \"/usr/lib64/\" which\nbasically contains the libraries and not the header files, I guess. To\ninclude libraries you should be using --with-libraries option.\n\nAlso, I feel that these types of questions are not for hackers\nmailing-list. These are configuration related issues and should\nprobably be raised in pgsql-general mailing list\n(pgsql-general.postgresql.org).\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 27 Aug 2020 19:00:33 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Please help for error ( file <libxml/parser.h> is required for\n XML support )"
}
]
[
{
"msg_contents": "Hi,\n\nIn a well-written query, IS NULL and IS NOT NULL checks on primary key and\nNOT NULL constrained columns should not appear, but if they do, PostgreSQL\nshould be smart enough not to check every returned row for a null value in\na primary key column. Instead it can evaluate the truth value once and set\nthe result. The attached patch evaluates and sets the truth value of IS\nNULL and IS NOT NULL checks on primary key columns at planning time, if the\nrelation attribute is not mentioned on the nullable side of an outer join.\n\nThoughts?\n\nregards\n\nSurafel",
"msg_date": "Thu, 27 Aug 2020 15:31:15 +0300",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": true,
"msg_subject": "Evaluate expression at planning time for two more cases"
},
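The class of query the proposal targets can be sketched as follows (hypothetical schema; the comments describe the intended folding, not necessarily current planner output):

```sql
-- Hypothetical schema: "id" is a primary key, hence implicitly NOT NULL.
CREATE TABLE a (id int PRIMARY KEY, val text);

-- Under the proposal, the planner could fold "id IS NOT NULL" to constant
-- true (and "id IS NULL" to constant false) at planning time, instead of
-- re-evaluating the NullTest for every returned row:
EXPLAIN SELECT * FROM a WHERE id IS NOT NULL;
```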
{
"msg_contents": "Hi Surafel,\n\nOn Thu, Aug 27, 2020 at 6:01 PM Surafel Temesgen <surafel3000@gmail.com> wrote:\n>\n> Hi,\n>\n> In good written query IS NULL and IS NOT NULL check on primary and non null constraints columns should not happen but if it is mentioned PostgreSQL have to be smart enough for not checking every return result about null value on primary key column. Instead it can be evaluate its truth value and set the result only once. The attached patch evaluate and set the truth value for null and not null check on primary column on planning time if the relation attribute is not mention on nullable side of outer join.\n>\n> Thought?\n\nThanks for the patch. Such SQL may arise from not-so-smart SQL\ngeneration tools. It will be useful to have this optimization. Here\nare some comments on your patch.\n\n }\n else\n has_nonconst_input = true;\n@@ -3382,7 +3395,47 @@ eval_const_expressions_mutator(Node *node,\n\n+\n+ if (pkattnos != NULL &&\nbms_is_member(var->varattno - FirstLowInvalidHeapAttributeNumber,\npkattnos)\n+ && !check_null_side(context->root, relid))\n\nSince this is working on parse->rtable this will work only for top level tables\nas against the inherited tables or partitions which may have their own primary\nkey constraints if the parent doesn't have those.\n\nThis better be done when planning individual relations, plain or join or upper,\nwhere all the necessary information is already available with each of the\nrelations and also the quals, derived as well as user specified, are\ndistributed to individual relations where they should be evalutated. My memory\nis hazy but it might be possible do this while distributing the quals\nthemselves (distribute_qual_to_rels()).\n\nSaid that, to me, this looks more like something we should be able to do at the\ntime of constraint exclusion. But IIRC, we just prove whether constraints\nrefute a qual and not necessarily whether constraints imply a qual, making it\nredundant, as is required here. E.g. 
primary key constraint implies key NOT\nNULL rendering a \"key IS NOT NULL\" qual redundant. It might be better to test\nthe case when col IS NOT NULL is specified on a column which already has a NOT\nNULL constraint. That may be another direction to take. We may require much\nlesser code.\n\nWith either of these two approaches, the amount of code changes might\nbe justified.\n\n+explain (costs off)\n+SELECT * FROM b RIGHT JOIN a ON (b.a_id = a.id) WHERE (a.id IS NULL\nOR a.id > 0);\n+ QUERY PLAN\n+-----------------------------------------------\n+ Hash Right Join\n+ Hash Cond: (b.a_id = a.id)\n+ -> Seq Scan on b\n+ -> Hash\n+ -> Bitmap Heap Scan on a\n+ Recheck Cond: (id > 0)\n+ -> Bitmap Index Scan on a_pkey\n+ Index Cond: (id > 0)\n+(8 rows)\n\nThanks for the tests.\n\nPlease add the patch to the next commitfest https://commitfest.postgresql.org/.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 28 Aug 2020 12:18:35 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate expression at planning time for two more cases"
},
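The caveat about inherited tables aside, the nullable-side restriction the patch's check_null_side() guards against can be illustrated with a minimal sketch (hypothetical tables):

```sql
CREATE TABLE a (id int PRIMARY KEY);
CREATE TABLE b (a_id int);

-- "a" is on the nullable side of the left join, so a.id really can be NULL
-- for rows of "b" with no match; here the NullTest must NOT be folded away,
-- despite the primary key on a.id:
SELECT b.a_id FROM b LEFT JOIN a ON a.id = b.a_id WHERE a.id IS NULL;
```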
{
"msg_contents": "Hi ,\n\nThank you for looking into this\n\nOn Fri, Aug 28, 2020 at 9:48 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> }\n> else\n> has_nonconst_input = true;\n> @@ -3382,7 +3395,47 @@ eval_const_expressions_mutator(Node *node,\n>\n> +\n> + if (pkattnos != NULL &&\n> bms_is_member(var->varattno - FirstLowInvalidHeapAttributeNumber,\n> pkattnos)\n> + && !check_null_side(context->root, relid))\n>\n> Since this is working on parse->rtable this will work only for top level\n> tables\n> as against the inherited tables or partitions which may have their own\n> primary\n> key constraints if the parent doesn't have those.\n>\n>\n\nIn that case the table have to be specified in from clause otherwise its\nerror\n\ne.g postgres=# CREATE TABLE cities (\n\nname text,\n\npopulation float,\n\naltitude int\n\n);\n\nCREATE TABLE\n\npostgres=# CREATE TABLE capitals (\n\nid serial primary key,\n\nstate char(2)\n\n) INHERITS (cities);\n\nCREATE TABLE\n\npostgres=# EXPLAIN SELECT * FROM cities WHERE id is not null;\n\nERROR: column \"id\" does not exist\n\nLINE 1: EXPLAIN SELECT * FROM cities WHERE id is not null;\n\n\nEven it will not work on the child table because the primary key constraint\non the parent table is not in-force in the child table.\n\n\n\n> This better be done when planning individual relations, plain or join or\n> upper,\n> where all the necessary information is already available with each of the\n> relations and also the quals, derived as well as user specified, are\n> distributed to individual relations where they should be evalutated. 
My\n> memory\n> is hazy but it might be possible do this while distributing the quals\n> themselves (distribute_qual_to_rels()).\n>\n>\nThe place where all the necessary information available is on\nreduce_outer_joins as the comment of the function states but the downside\nis its will only be inexpensive if the query contains outer join\n\n\n> Said that, to me, this looks more like something we should be able to do\n> at the\n> time of constraint exclusion. But IIRC, we just prove whether constraints\n> refute a qual and not necessarily whether constraints imply a qual, making\n> it\n> redundant, as is required here. E.g. primary key constraint implies key NOT\n> NULL rendering a \"key IS NOT NULL\" qual redundant. It might be better to\n> test\n> the case when col IS NOT NULL is specified on a column which already has a\n> NOT\n> NULL constraint. That may be another direction to take. We may require much\n> lesser code.\n>\n>\nI don’t add NOT NULL constraint optimization to the patch because cached\nplan is not invalidation in case of a change in NOT NULL constraint\n\n\n> Please add the patch to the next commitfest\n> https://commitfest.postgresql.org/.\n>\n>\nI add it is here https://commitfest.postgresql.org/29/2699/\nThank you\n\nregards\nSurafel",
"msg_date": "Tue, 1 Sep 2020 15:26:39 +0300",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Evaluate expression at planning time for two more cases"
},
{
"msg_contents": "Surafel Temesgen <surafel3000@gmail.com> writes:\n> [ null_check_on_pkey_optimization_v1.patch ]\n\nI took a very brief look at this.\n\n> I don’t add NOT NULL constraint optimization to the patch because cached\n> plan is not invalidation in case of a change in NOT NULL constraint\n\nThat's actually not a problem, even though some people (including me)\nhave bandied about such suppositions in the past. Relying on attnotnull\nin the planner is perfectly safe [1]. Plus it'd likely be cheaper as\nwell as more general than looking up pkey information. If we did need\nto explicitly record the plan's dependency on a constraint, this patch\nwould be wrong anyhow because it fails to make any such notation about\nthe pkey constraint it relied on.\n\nThe \"check_null_side\" code you're proposing seems really horrid.\nFor one thing, it seems quite out of place for eval_const_expressions\nto be doing that. For another, it's wrong in principle because\neval_const_expressions doesn't know which part of the query tree\nit's being invoked on, so it cannot know whether outer-join\nnullability is an issue. For another, doing that work over again\nfrom scratch every time we see a potentially optimizable NullTest\nlooks expensive. (I wonder whether you have tried to measure the\nperformance penalty imposed by this patch in cases where it fails\nto make any proof.)\n\nI've been doing some handwaving about changing the representation\nof Vars, with an eye to making it clear by inspection whether a\ngiven Var is nullable by some lower outer join [2]. If that work\never comes to fruition then the need for \"check_null_side\" would\ngo away. So maybe we should put this idea on the back burner\nuntil that happens.\n\nI'm not sure what I think about Ashutosh's ideas about doing this\nsomewhere else than eval_const_expressions. 
I do not buy the argument\nthat it's interesting to do this separately for each child partition.\nChild partitions that have attnotnull constraints different from their\nparent's are at best a tiny minority use-case, if indeed we allow them\nat all (I tend to think we shouldn't). On the other hand it's possible\nthat postponing the check would allow bypassing the outer-join problem,\nie if we only do it for quals that have dropped down to the relation\nscan level then we don't need to worry about outer join effects.\n\nAnother angle here is that eval_const_expressions runs before\nreduce_outer_joins, meaning that if it's doing things that depend\non outer-join-ness then it will sometimes fail to optimize cases\nthat could be optimized. As a not incidental example, consider\n\n\tselect ... from t1 left join t2 on (...) where t2.x is not null;\n\nreduce_outer_joins will realize that the left join can be reduced\nto a plain join, whereupon (if t2.x is attnotnull) the WHERE clause\nreally is constant-true --- and this seems like a poster-child case\nfor it being useful to optimize away the WHERE clause. But\nwe won't be able to detect that if we apply the optimization during\neval_const_expressions. So maybe that's a good reason to do it\nsomewhere later.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/23564.1585885251%40sss.pgh.pa.us\n[2] https://www.postgresql.org/message-id/flat/15848.1576515643%40sss.pgh.pa.us\n\n\n",
"msg_date": "Mon, 07 Sep 2020 21:45:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate expression at planning time for two more cases"
},
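The ordering problem Tom describes can be made concrete with a hedged sketch (hypothetical tables; t2.x carries a NOT NULL constraint):

```sql
CREATE TABLE t1 (id int);
CREATE TABLE t2 (id int, x int NOT NULL);

-- The WHERE clause is strict on t2, so reduce_outer_joins can turn the
-- left join into a plain join; only after that reduction is "t2.x IS NOT
-- NULL" implied by the NOT NULL constraint and foldable to constant true.
-- Folding during eval_const_expressions runs too early to see this.
SELECT t1.id FROM t1 LEFT JOIN t2 ON t2.id = t1.id WHERE t2.x IS NOT NULL;
```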
{
"msg_contents": "Hi Tom\n\nOn Tue, Sep 8, 2020 at 4:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Surafel Temesgen <surafel3000@gmail.com> writes:\n> > [ null_check_on_pkey_optimization_v1.patch ]\n>\n> I took a very brief look at this.\n>\n> > I don’t add NOT NULL constraint optimization to the patch because cached\n> > plan is not invalidation in case of a change in NOT NULL constraint\n>\n> That's actually not a problem, even though some people (including me)\n> have bandied about such suppositions in the past. Relying on attnotnull\n> in the planner is perfectly safe [1]. Plus it'd likely be cheaper as\n> well as more general than looking up pkey information. If we did need\n> to explicitly record the plan's dependency on a constraint, this patch\n> would be wrong anyhow because it fails to make any such notation about\n> the pkey constraint it relied on.\n>\n>\nok thank you. I will change my next patch accordingly\n\n\n> The \"check_null_side\" code you're proposing seems really horrid.\n> For one thing, it seems quite out of place for eval_const_expressions\n> to be doing that. For another, it's wrong in principle because\n> eval_const_expressions doesn't know which part of the query tree\n> it's being invoked on, so it cannot know whether outer-join\n> nullability is an issue. For another, doing that work over again\n> from scratch every time we see a potentially optimizable NullTest\n> looks expensive. (I wonder whether you have tried to measure the\n> performance penalty imposed by this patch in cases where it fails\n> to make any proof.)\n>\n>\nI was thinking about collecting data about joins only once at the start of\neval_const_expressions but I assume most queries don't have NULL check\nexpressions and postpone it until we find one. 
Thinking about it again I\nthink it can be done better by storing check_null_side_state into\neval_const_expressions_context to use it for subsequent evaluation.\n\n\nI'm not sure what I think about Ashutosh's ideas about doing this\n> somewhere else than eval_const_expressions. I do not buy the argument\n> that it's interesting to do this separately for each child partition.\n> Child partitions that have attnotnull constraints different from their\n> parent's are at best a tiny minority use-case, if indeed we allow them\n> at all (I tend to think we shouldn't). On the other hand it's possible\n> that postponing the check would allow bypassing the outer-join problem,\n> ie if we only do it for quals that have dropped down to the relation\n> scan level then we don't need to worry about outer join effects.\n>\n>\nAt eval_const_expressions we check every expression and optimize it if\npossible. Introducing other check and optimization mechanism to same other\nplace just for this optimization seems expensive with respect to\nperformance penalty to me\n\n\n> Another angle here is that eval_const_expressions runs before\n> reduce_outer_joins, meaning that if it's doing things that depend\n> on outer-join-ness then it will sometimes fail to optimize cases\n> that could be optimized. As a not incidental example, consider\n>\n> select ... from t1 left join t2 on (...) where t2.x is not null;\n>\n> reduce_outer_joins will realize that the left join can be reduced\n> to a plain join, whereupon (if t2.x is attnotnull) the WHERE clause\n> really is constant-true --- and this seems like a poster-child case\n> for it being useful to optimize away the WHERE clause. But\n> we won't be able to detect that if we apply the optimization during\n> eval_const_expressions. 
So maybe that's a good reason to do it\n> somewhere later.\n>\n\nIn this case the expression not changed to constant-true because the\nrelation is on nullable side of outer join\n\nregards\nSurafel",
"msg_date": "Tue, 8 Sep 2020 12:59:27 +0300",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Evaluate expression at planning time for two more cases"
},
{
"msg_contents": "On Tue, 8 Sep 2020 at 07:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n>\n> I'm not sure what I think about Ashutosh's ideas about doing this\n> somewhere else than eval_const_expressions. I do not buy the argument\n> that it's interesting to do this separately for each child partition.\n> Child partitions that have attnotnull constraints different from their\n> parent's are at best a tiny minority use-case, if indeed we allow them\n> at all (I tend to think we shouldn't).\n\n\nI agree about partitions. But, IMO, a child having constraints different\nfrom that of a parent is more common in inheritance trees.\n\nAnother point I raised in my mail was about constraint exclusion. Why\naren't these clauses constant-folded by constraint exclusion? Sorry, I\nhaven't looked at the constraint exclusion code myself for this.\n\nAs a not incidental example, consider\n>\n> select ... from t1 left join t2 on (...) where t2.x is not null;\n>\n> reduce_outer_joins will realize that the left join can be reduced\n> to a plain join, whereupon (if t2.x is attnotnull) the WHERE clause\n> really is constant-true --- and this seems like a poster-child case\n> for it being useful to optimize away the WHERE clause. But\n> we won't be able to detect that if we apply the optimization during\n> eval_const_expressions. So maybe that's a good reason to do it\n> somewhere later.\n>\n\n+1\n\n-- \nBest Wishes,\nAshutosh",
"msg_date": "Tue, 8 Sep 2020 17:29:47 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate expression at planning time for two more cases"
},
{
"msg_contents": "On Tue, Sep 8, 2020 at 12:59 PM Surafel Temesgen <surafel3000@gmail.com>\nwrote:\n\n> Hi Tom\n>\n> On Tue, Sep 8, 2020 at 4:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>\n>> The \"check_null_side\" code you're proposing seems really horrid.\n>> For one thing, it seems quite out of place for eval_const_expressions\n>> to be doing that. For another, it's wrong in principle because\n>> eval_const_expressions doesn't know which part of the query tree\n>> it's being invoked on, so it cannot know whether outer-join\n>> nullability is an issue. For another, doing that work over again\n>> from scratch every time we see a potentially optimizable NullTest\n>> looks expensive. (I wonder whether you have tried to measure the\n>> performance penalty imposed by this patch in cases where it fails\n>> to make any proof.)\n>>\n>>\n> I was thinking about collecting data about joins only once at the start of\n> eval_const_expressions but I assume most queries don't have NULL check\n> expressions and postpone it until we find one. Thinking about it again I\n> think it can be done better by storing check_null_side_state into\n> eval_const_expressions_context to use it for subsequent evaluation.\n>\n>\n\nAttached patch does like the above and includes NOT NULL constraint column.\n\nregards\n\nSurafel",
"msg_date": "Thu, 10 Sep 2020 12:55:04 +0300",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Evaluate expression at planning time for two more cases"
},
{
"msg_contents": "Thank you for working on this!\r\nI got slightly into this patch. I can be wrong, but my opinion is that planner/optimizer-related changes are not without dangers generally. So anyway, they should be justified by performance increase, or the previous behavior should be considered totally wrong. Patching the thing which is just a little sub-optimal seems for me seems not necessary.\r\n\r\nSo it would be very good to see measurements of a performance gain from this patch. And also I think tests with partitioned and inherited relations for demonstration of the right work in the cases discussed in the thread should be a must-do for this patch.\r\n\r\n-- \r\nBest regards,\r\nPavel Borisov\r\n\r\nPostgres Professional: http://postgrespro.com\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Mon, 16 Nov 2020 11:05:03 +0000",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate expression at planning time for two more cases"
},
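One way to gather the measurements Pavel asks for, as a rough sketch (hypothetical table; exact timings will of course vary by machine):

```sql
CREATE TABLE t (id int PRIMARY KEY, val text);
INSERT INTO t SELECT g, g::text FROM generate_series(1, 1000000) g;
ANALYZE t;

-- Compare timings with and without the redundant NullTest; with the patch
-- the two should converge, since the qual is folded away at plan time:
EXPLAIN ANALYZE SELECT count(*) FROM t;
EXPLAIN ANALYZE SELECT count(*) FROM t WHERE id IS NOT NULL;
```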
{
"msg_contents": "Hi Pavel Borisov,\nIt's always good to select the optimal way even if it didn't have\nperformance gain\nbut in case of this patch i see 4x speed up on my laptop and it will work\non any\ntable that have NULL constraint\n\nregards\nSurafel",
"msg_date": "Tue, 24 Nov 2020 10:47:25 +0300",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Evaluate expression at planning time for two more cases"
},
{
"msg_contents": "On Tue, Nov 24, 2020 at 12:47 PM Surafel Temesgen <surafel3000@gmail.com>\nwrote:\n\n> Hi Pavel Borisov,\n> It's always good to select the optimal way even if it didn't have\n> performance gain\n> but in case of this patch i see 4x speed up on my laptop and it will work\n> on any\n> table that have NULL constraint\n>\n> regards\n> Surafel\n>\n\nThe patch (null_check_on_pkey_optimization_v2.patch) does not apply\nsuccessfully.\nhttp://cfbot.cputube.org/patch_32_2699.log\n\n1 out of 10 hunks FAILED -- saving rejects to file\nsrc/backend/optimizer/util/clauses.c.rej\n\n\nIt was a minor change therefore I rebased the patch, please take a look.\n\n-- \nIbrar Ahmed",
"msg_date": "Mon, 8 Mar 2021 21:12:52 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate expression at planning time for two more cases"
},
{
"msg_contents": "Hi Ibrar,\n\n\nOn Mon, Mar 8, 2021 at 8:13 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n> It was a minor change therefore I rebased the patch, please take a look.\n>\n\nIt is perfect thank you\n\nregards\nSurafel",
"msg_date": "Wed, 10 Mar 2021 08:58:16 -0800",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Evaluate expression at planning time for two more cases"
},
{
"msg_contents": "On Tue, 8 Sept 2020 at 13:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I've been doing some handwaving about changing the representation\n> of Vars, with an eye to making it clear by inspection whether a\n> given Var is nullable by some lower outer join [2]. If that work\n> ever comes to fruition then the need for \"check_null_side\" would\n> go away. So maybe we should put this idea on the back burner\n> until that happens.\n\nI looked at this patch too. I agree that we should delay adding any\nnew smarts in regards to NULL or NOT NULL until we have some more\nrobust infrastructure to make this sort of patch easier and cheaper.\n\nMy vote is to just return this patch with feedback. Maybe Surafel\nwill be interested in pursuing this later when we have better\ninfrastructure or perhaps helping review the patch you're talking\nabout.\n\nDavid\n\n\n",
"msg_date": "Tue, 6 Jul 2021 13:02:28 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate expression at planning time for two more cases"
},
{
"msg_contents": "On Tue, 9 Mar 2021 at 05:13, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> It was a minor change therefore I rebased the patch, please take a look.\n\nI only had a quick look at the v3 patch.\n\n+ rel = table_open(rte->relid, NoLock);\n+ att = TupleDescAttr(rel->rd_att, var->varattno - 1);\n\n+ if (att->attnotnull && !check_null_side(context->root, relid, context))\n\nThis is not really an acceptable way to determine the notnull\nattribute value. Andy Fan proposes a much better way in [1].\nRelOptInfo is meant to cache the required Relation data that we need\nduring query planning. IIRC, Andy's patch correctly uses this and does\nso in an efficient way.\n\nIn any case, as you can see there's a bit of other work going on to\nsmarten up the planner around NULL value detection. The UniqueKeys\npatch requires this and various other things have come up that really\nshould be solved.\n\nSurafel, I'd suggest we return this patch with feedback and maybe you\ncould instead help reviewing the other patches in regards to the NOT\nNULL tracking and maybe come back to this once the dust has settled\nand everyone is clear on how we determine if a column is NULL or not.\n\nLet me know your thoughts.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKU4AWpQjAqJwQ2X-aR9g3+ZHRzU1k8hNP7A+_mLuOv-n5aVKA@mail.gmail.com\n\n\n",
"msg_date": "Tue, 6 Jul 2021 13:09:56 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate expression at planning time for two more cases"
},
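The attnotnull flag David refers to is visible directly in the catalog; for some hypothetical table some_table (a placeholder name), it can be inspected like this:

```sql
-- attnotnull is set both for explicit NOT NULL constraints and for
-- primary-key columns, which is why relying on it is more general than
-- looking up the primary-key index:
SELECT attname, attnotnull
FROM pg_attribute
WHERE attrelid = 'some_table'::regclass AND attnum > 0;
```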
{
"msg_contents": "On Tue, Jul 06, 2021 at 01:09:56PM +1200, David Rowley wrote:\n> On Tue, 9 Mar 2021 at 05:13, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> > It was a minor change therefore I rebased the patch, please take a look.\n> \n[...]\n> \n> This is not really an acceptable way to determine the notnull\n> attribute value. Andy Fan proposes a much better way in [1].\n> RelOptInfo is meant to cache the required Relation data that we need\n> during query planning. IIRC, Andy's patch correctly uses this and does\n> so in an efficient way.\n> \n> In any case, as you can see there's a bit of other work going on to\n> smarten up the planner around NULL value detection. The UniqueKeys\n> patch requires this and various other things have come up that really\n> should be solved.\n> \n> Surafel, I'd suggest we return this patch with feedback and maybe you\n> could instead help reviewing the other patches in regards to the NOT\n> NULL tracking and maybe come back to this once the dust has settled\n> and everyone is clear on how we determine if a column is NULL or not.\n> \n> Let me know your thoughts.\n> \n\nHi Surafel, \n\nWe haven't seen an answer from you on this.\nI'm marking the patch as \"Returned with feedback\" as was suggested.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Sun, 26 Sep 2021 19:52:09 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate expression at planning time for two more cases"
}
]